WorldWideScience

Sample records for monte carlo uncertainty

  1. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k-eff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k-eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality against any unknown or unquantified uncertainties. Because the administrative margin of subcriticality has a substantial impact on the economics and safety of nuclear fuel cycle operations, growing interest in reducing it has made uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k-eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k-eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
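The sampling-based method summarized above can be sketched in a few lines: sample perturbed material properties from their uncertainty distributions, evaluate k-eff for each sample, and take the spread of the results. The "transport code" below is a toy surrogate and all names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def keff_model(sigma_f, sigma_a):
    # Toy surrogate for a Monte Carlo transport calculation:
    # k-eff rises with fission and falls with absorption.
    return 2.4 * sigma_f / (sigma_f + sigma_a)

n_samples = 1000
# Perturb material properties around nominal values (1% relative std. dev.)
sigma_f = rng.normal(1.0, 0.01, n_samples)
sigma_a = rng.normal(1.0, 0.01, n_samples)

keff = keff_model(sigma_f, sigma_a)
k_mean, k_std = keff.mean(), keff.std(ddof=1)
print(f"k-eff = {k_mean:.4f} +/- {k_std:.4f}")
```

Note that in a real application each sample would be a full transport run, which is why the sampling method "has the least restrictions on perturbation but computing resources".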

  2. Improved Monte Carlo Method for PSA Uncertainty Analysis

    International Nuclear Information System (INIS)

    Choi, Jongsoo

    2016-01-01

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties arise from limitations of knowledge. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. Uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including common cause failures, CCFs) using efficient random number generators, so as to meet Capability Category III of the ASME/ANS PRA standard. Audit calculations are needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.
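The state-of-knowledge correlation matters because epistemically identical events must be sampled with the same draw. A minimal sketch, with invented numbers: two redundant components share one uncertain failure probability, and the mean of their joint failure probability is larger under the correlated treatment than under an (incorrect) independent one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# One epistemic distribution for a failure probability, shared by two
# redundant components (the state-of-knowledge correlation, SOKC).
p = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)

# Correlated treatment: both events use the SAME sample in each trial.
mean_correlated = np.mean(p * p)

# Independent treatment: each event gets its own sample.
q = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)
mean_independent = np.mean(p * q)

print(mean_correlated / mean_independent)  # > 1: SOKC raises the mean risk
```

For a lognormal with log-scale sigma, the exact ratio is exp(sigma**2), so ignoring the SOKC systematically underestimates the mean risk metric.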

  3. Improved Monte Carlo Method for PSA Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jongsoo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties arise from limitations of knowledge. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. Uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including common cause failures, CCFs) using efficient random number generators, so as to meet Capability Category III of the ASME/ANS PRA standard. Audit calculations are needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.

  4. Range uncertainties in proton therapy and the role of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Paganetti, Harald

    2012-01-01

    The main advantages of proton therapy are the reduced total energy deposited in the patient as compared to photon techniques and the finite range of the proton beam. The latter adds an additional degree of freedom to treatment planning. The range in tissue is associated with considerable uncertainties caused by imaging, patient setup, beam delivery and dose calculation. Reducing these uncertainties would permit a reduction of the treatment volume and thus a better exploitation of the advantages of protons. This paper summarizes the role of Monte Carlo simulations when aiming at a reduction of range uncertainties in proton therapy. Differences in dose calculation when comparing Monte Carlo with analytical algorithms are analyzed, as are range uncertainties due to material constants and CT conversion. Range uncertainties due to biological effects and the role of Monte Carlo for in vivo range verification are discussed. Furthermore, the current range uncertainty recipes used at several proton therapy facilities are revisited. We conclude that a significant impact of Monte Carlo dose calculation can be expected in complex geometries, where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms. In these cases Monte Carlo techniques might reduce the range uncertainty by several mm. (topical review)

  5. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    Science.gov (United States)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
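The projection at the heart of null-space Monte Carlo can be sketched independently of the pyNSMC API (which is not reproduced here): random parameter deviations are projected onto the null space of the Jacobian, so each realization fits the calibration data as well as the calibrated model. The toy linear problem below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: 3 observations, 8 parameters (underdetermined).
J = rng.standard_normal((3, 8))       # Jacobian of observations w.r.t. parameters
p_cal = rng.standard_normal(8)        # a calibrated parameter set

# Null-space projector from the SVD: directions the observations cannot see.
U, s, Vt = np.linalg.svd(J)
V_null = Vt[3:].T                     # basis of the null space (rank = 3 here)
P = V_null @ V_null.T

# Draw a random parameter realization and project its deviation from the
# calibrated set onto the null space: the result fits the data equally well.
p_rand = p_cal + rng.standard_normal(8)
p_proj = p_cal + P @ (p_rand - p_cal)

# Residual in observation space is unchanged (to numerical precision).
print(np.linalg.norm(J @ (p_proj - p_cal)))
```

Repeating the draw-and-project step yields the calibration-constrained ensemble that NSMC uses for non-linear uncertainty analysis.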

  6. Monte Carlo eigenfunction strategies and uncertainties

    International Nuclear Information System (INIS)

    Gast, R.C.; Candelore, N.R.

    1974-01-01

    Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the "straight" analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
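The need for a lag-one correlation term can be illustrated with a sketch: successive iteration estimates in an eigenfunction calculation are serially correlated, so the naive standard error of their mean is too small. The AR(1) inflation factor used below is a standard first-order correction, not necessarily MacMillan's exact formula; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# AR(1) series mimics iteration-to-iteration correlated tally estimates.
n, rho = 20_000, 0.5
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = rho * x[i - 1] + e[i]

var = x.var(ddof=1)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]         # serial correlation of lag one

naive_se = np.sqrt(var / n)                    # assumes independent iterations
corrected_se = np.sqrt(var / n * (1 + r1) / (1 - r1))

print(naive_se, corrected_se)  # corrected is larger for positive correlation
```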

  7. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.

    2008-01-01

    Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
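The random-sampling ("uncertainty Monte Carlo") methodology can be sketched on the smallest possible depletion problem: a single nuclide burned over one time step with an uncertain one-group cross section. All constants below are invented for illustration, not taken from the benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-nuclide depletion dN/dt = -sigma * phi * N over one time step,
# with an uncertain one-group cross section (5% relative uncertainty).
N0, phi, t = 1.0, 1e14, 3.15e7          # initial inventory, flux [n/cm2/s], ~1 year [s]
sigma_mean, sigma_rel = 1e-22, 0.05     # cross section [cm2]

sigma = rng.normal(sigma_mean, sigma_rel * sigma_mean, 10_000)
N_end = N0 * np.exp(-sigma * phi * t)   # analytic solution per sample

print(N_end.mean(), N_end.std(ddof=1) / N_end.mean())
```

In the full methodology each sample would re-run the coupled MCNP-ACAB chain, and the flux statistical error would be sampled alongside the cross sections.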

  8. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)

    2008-04-15

    Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.

  9. Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Diez, C.J.; Cabellos, O.; Martinez, J.S.

    2011-01-01

    Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue because of the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, and their uncertainties, should be known to handle spent fuel in present fuel cycles (e.g. the high-burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. The propagation of decay data, fission yield and cross-section uncertainties was also performed, but isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed as a function of cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. The results are then compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yield) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties are most relevant is performed.
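A fission pulse decay heat calculation of the kind described can be sketched by sampling decay constants and fission yields from their assumed uncertainties and summing the per-nuclide contributions. The two "fission products" and all uncertainty magnitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Hypothetical fission products: (cumulative yield, decay constant [1/s], energy [MeV])
nuclides = [(0.06, 1e-2, 1.5), (0.03, 1e-3, 2.0)]
t = 100.0  # cooling time after the fission pulse [s]

heats = np.zeros(n)
for y, lam, E in nuclides:
    # Sample yields (3% rel.) and decay constants (2% rel.) per realization.
    ys = rng.normal(y, 0.03 * y, n)
    lams = rng.normal(lam, 0.02 * lam, n)
    heats += ys * lams * E * np.exp(-lams * t)  # activity * energy per fission

print(heats.mean(), heats.std(ddof=1) / heats.mean())
```

Repeating the calculation over a grid of cooling times gives the time-dependent uncertainty band that the paper assesses.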

  10. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
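The bootstrap approach described above can be sketched with synthetic per-history scores: treat the efficiency gain as a variance ratio, resample both runs with replacement, and read the confidence interval off the bootstrap distribution. The data below are invented, with a heavy tail standing in for the rare high-weight photons.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic per-history scores: a conventional run and a correlated-sampling
# run whose tail mimics a few rare, high-weight photons.
conv = rng.standard_normal(2000)
corr = 0.3 * rng.standard_normal(2000)
corr[:20] += rng.exponential(2.0, 20)        # heavy-weight outliers

def gain(a, b):
    # Efficiency gain ~ variance ratio, assuming equal time per history.
    return a.var(ddof=1) / b.var(ddof=1)

# Bootstrap the sampling distribution of the gain estimate.
boot = np.array([gain(rng.choice(conv, conv.size), rng.choice(corr, corr.size))
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(gain(conv, corr), (ci_lo, ci_hi))
```

The paper uses the shortest 95% interval rather than the percentile interval shown here, but the resampling machinery is the same.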

  11. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    NARCIS (Netherlands)

    Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.

    2008-01-01

    By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was

  12. Monte Carlo Simulation of Influence of Input Parameters Uncertainty on Output Data

    International Nuclear Information System (INIS)

    Sobek, Lukas

    2010-01-01

    Input parameters of a complex system in a probabilistic simulation are treated by means of probability density functions (PDFs). The results of the simulation therefore also have a probabilistic character. Monte Carlo simulation is widely used to obtain predictions concerning the probability of risk. Here the Monte Carlo method was used to calculate histograms of the PDF of the release rate given uncertainty in the distribution coefficients of the radionuclides Cs-135 and U-235.
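The input-PDF-to-output-histogram workflow can be sketched in a few lines: sample an uncertain distribution coefficient, push each sample through a simple release model, and histogram the result. The retardation model and every number below are illustrative assumptions, not values from the record.

```python
import numpy as np

rng = np.random.default_rng(2)

# Uncertain distribution coefficient Kd [m3/kg], sampled log-uniformly
# as is common for sparse sorption data (illustrative range).
kd = 10 ** rng.uniform(-2, 0, 50_000)

# Simple retarded release rate: inversely proportional to retardation R.
rho_b, theta = 1600.0, 0.3              # bulk density [kg/m3], porosity
R = 1 + rho_b / theta * kd
release_rate = 1.0 / R                  # arbitrary unit source term

hist, edges = np.histogram(release_rate, bins=50, density=True)
print(release_rate.mean(), np.median(release_rate))
```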

  13. Systematic uncertainties on Monte Carlo simulation of lead based ADS

    International Nuclear Information System (INIS)

    Embid, M.; Fernandez, R.; Garcia-Sanz, J.M.; Gonzalez, E.

    1999-01-01

    Computer simulations of the neutronic behaviour of ADS systems foreseen for actinide and fission product transmutation are affected by many sources of systematic uncertainties, both from the nuclear data and by the methodology selected when applying the codes. Several actual ADS Monte Carlo simulations are presented, comparing different options both for the data and for the methodology, evaluating the relevance of the different uncertainties. (author)

  14. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2014-01-06

    Computational fluid dynamics (CFD) simulations of pore-scale transport processes in porous media have recently gained large popularity. However, the geometrical details of the pore structures can be known for only a small number of samples, and the detailed flow computations can be carried out on only a limited number of cases. The explicit introduction of randomness in the geometry and in other setup parameters can be crucial for the optimization of pore-scale investigations for random homogenization. Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible way to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost of estimating quantities of interest within a prescribed accuracy constraint. Random samples of pore geometries with a hierarchy of geometrical complexities and grid refinements are synthetically generated and used to propagate the uncertainties in the flow simulations and compute statistics of macro-scale effective parameters.
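The MLMC estimator is a telescoping sum: many cheap coarse-level samples plus progressively fewer samples of the fine-minus-coarse corrections, with each correction evaluated on the same random input. The sketch below replaces the flow solver with a stand-in whose discretization bias halves per level; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def qoi(level, w):
    # Stand-in for a pore-scale flow solve on random geometry parameter `w`
    # at grid level `level`; discretization bias decays as 2**-level.
    return w + 2.0 ** -level

L, n_per_level = 4, [4000, 2000, 1000, 500, 250]
est = 0.0
for lvl in range(L + 1):
    w = rng.standard_normal(n_per_level[lvl]) + 1.0    # random geometry samples
    if lvl == 0:
        est += np.mean(qoi(0, w))                      # coarse-level estimate
    else:
        est += np.mean(qoi(lvl, w) - qoi(lvl - 1, w))  # coupled correction term

print(est)  # telescoping sum -> E[Q_L] = 1 + 2**-4
```

Because the corrections have small variance, most of the computational budget sits at the cheap coarse level, which is the source of the MLMC cost saving.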

  15. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    International Nuclear Information System (INIS)

    Shaukata, Nadeem; Shim, Hyung Jin

    2015-01-01

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the neutron population matches the number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code, in which a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has likewise been used in the TART code for dynamic criticality calculations: for super-critical systems the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. For super-critical systems there is an exponential growth of the neutron population in the estimation of the neutron density tally, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth, a conventional time cut-off population control strategy is applied at the end of each time boundary in TDMC, and a scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in the neutron densities at the end of each time boundary for super-critical systems, the uncertainty being caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem.
The desired neutron density is tallied by the introduction of
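The combing-with-scale-factor idea referenced above can be sketched directly: a grown population is uniformly combed down to the starting size, and tallies over the survivors are multiplied by the ratio of populations so the estimate stays unbiased. This is a generic comb for equally weighted particles, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def comb(population, m):
    """Uniformly comb a population of equally weighted particles down to m
    survivors; return the survivors and the unbiasing weight scale factor."""
    n = len(population)
    start = rng.uniform(0, n / m)                              # random comb offset
    idx = np.floor(start + np.arange(m) * n / m).astype(int)   # comb teeth
    return population[idx], n / m

# Super-critical growth: population tripled over the time boundary.
neutrons = rng.standard_normal(30_000) + 5.0    # e.g. per-particle scores
survivors, scale = comb(neutrons, 10_000)

# Tallies over the combed population are multiplied by the scale factor.
combed_tally = survivors.sum() * scale
print(neutrons.sum(), combed_tally)             # approximately equal
```

The residual difference between the two tallies is the extra variance that the scale factor introduces, which is what the uncertainty propagation study quantifies.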

  16. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shaukata, Nadeem; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-10-15

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the neutron population matches the number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code, in which a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has likewise been used in the TART code for dynamic criticality calculations: for super-critical systems the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. For super-critical systems there is an exponential growth of the neutron population in the estimation of the neutron density tally, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth, a conventional time cut-off population control strategy is applied at the end of each time boundary in TDMC, and a scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in the neutron densities at the end of each time boundary for super-critical systems, the uncertainty being caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem.
The desired neutron density is tallied by the introduction of

  17. A smart Monte Carlo procedure for production costing and uncertainty analysis

    International Nuclear Information System (INIS)

    Parker, C.; Stremel, J.

    1996-01-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake, often $100,000 a day or more, and one party to the sale must always take on the risk. In the case of fixed-price ($/MWh) contracts, the seller accepts the risk; in the case of cost-plus contracts, the buyer must accept the risk. Modeling uncertainty and understanding the risk accurately can therefore improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute-force Monte Carlo procedure, a Monte Carlo procedure with Latin hypercube sampling, and a smart Monte Carlo procedure with cost stratification and directed sampling are examined.
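The variance reduction that stratified sampling buys over brute force can be sketched on a toy outage model: a unit is on forced outage when its uniform draw falls below its forced outage rate. Below, a one-dimensional Latin-hypercube-style stratified draw is applied across the fleet within each trial; the fleet size and outage rate are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

def latin_hypercube(n, rng):
    # One point per equal-probability stratum, in random order.
    return (rng.permutation(n) + rng.uniform(0, 1, n)) / n

FOR, n_units, n_trials = 0.083, 200, 500   # forced outage rate (illustrative)

# Brute force: each unit's outage drawn independently in every trial.
brute = rng.uniform(0, 1, (n_trials, n_units)) < FOR
# Stratified: outage draws spread evenly over [0, 1) within each trial.
lhs = np.array([latin_hypercube(n_units, rng) < FOR for _ in range(n_trials)])

# Both estimators are unbiased; stratification cuts trial-to-trial variance.
print(brute.mean(axis=1).std(), lhs.mean(axis=1).std())
```

In a production costing model the same idea is applied per unit across trials, and the "smart" procedure additionally directs samples toward high-cost outage states.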

  18. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    Science.gov (United States)

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented; with the aid of this model and the Monte Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
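One stage of such a Monte Carlo analysis can be sketched for a single channel: propagate scanner reading noise through a calibration curve and inspect the resulting dose PDF for standard deviation and bias. The calibration form and every coefficient below are hypothetical, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical single-channel calibration, net optical density -> dose [Gy]:
# D = a*OD + b*OD**n is a common fitting form; coefficients are invented.
a, b, n = 10.0, 35.0, 2.5

def dose(od):
    return a * od + b * od ** n

# Measured OD with scanner noise (1.5% relative, Gaussian assumption).
od_true = 0.4
od_samples = rng.normal(od_true, 0.015 * od_true, 20_000)

d = dose(od_samples)
bias = d.mean() - dose(od_true)       # curvature-induced bias of the estimate
print(dose(od_true), d.std(ddof=1), bias)
```

Stacking several such stages (lot, scanner, fit) and examining the shape of the final histogram gives the multi-stage characterization the record describes.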

  19. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    . Since there are no generic ways to parametrize the randomness in the porescale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost

  20. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    Science.gov (United States)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
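Combining sub-model Monte Carlo results in the GUM Supplement 2 style can be sketched as follows: draw samples for each sub-model's output quantity, feed them jointly through the combined model, and summarize the output distribution. The sub-model quantities and numbers below are invented stand-ins, not PTB's actual transducer parameters.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000

# Sub-model A: torsional stiffness identified from one experiment.
k = rng.normal(2.0e3, 30.0, n)          # [N*m/rad], illustrative values

# Sub-model B: mass moment of inertia from a separate experiment.
J = rng.normal(5.0e-3, 1.0e-4, n)       # [kg*m^2], illustrative values

# Combined model: resonance frequency of the assembly, f = sqrt(k/J)/(2*pi).
f = np.sqrt(k / J) / (2 * np.pi)

# Monte Carlo result: best estimate and standard uncertainty.
print(f.mean(), f.std(ddof=1))
```

If the sub-model outputs were correlated (e.g. identified from shared data), the joint draw would have to preserve that correlation rather than sampling independently as here.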

  1. Monte Carlo parameter studies and uncertainty analyses with MCNP5

    International Nuclear Information System (INIS)

    Brown, F. B.; Sweezy, J. E.; Hayes, R.

    2004-01-01

    A software tool called mcnp_pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. (authors)

  2. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

    A new formulation is presented, aimed at quantifying the uncertainties of Monte Carlo (MC) tallies such as k-eff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and at examining their propagation behaviour as a function of depletion time step (DTS). It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from the uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of the DTS, and these contributions are determined by computing the correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances as well as k-infinity and its variance as a function of DTS, and demonstrated the applicability of the new formulation to the uncertainty propagation analysis that needs to accompany MC depletion computations. (authors)
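The role of the correlation coefficient in such a variance decomposition can be sketched on the simplest tally, a reaction rate sigma*N with correlated inputs: to first order the relative variance is the sum of the component relative variances plus a cross term 2*rho*s1*s2. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 200_000

# Correlated microscopic cross section and number density (rho = 0.5),
# built from shared and independent standard-normal components.
rho = 0.5
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
sigma = 50.0 * (1 + 0.02 * z1)                                   # 2% rel. unc.
N = 1.0e-3 * (1 + 0.03 * (rho * z1 + np.sqrt(1 - rho**2) * z2))  # 3% rel. unc.

rate = sigma * N

# First-order decomposition: rel_var = s1**2 + s2**2 + 2*rho*s1*s2.
rel_var_pred = 0.02**2 + 0.03**2 + 2 * rho * 0.02 * 0.03
rel_var_mc = rate.var(ddof=1) / rate.mean() ** 2
print(rel_var_pred, rel_var_mc)   # agree to first order
```

Dropping the cross term (i.e. assuming rho = 0) would understate the variance here by about a third, which is why the formulation tracks the correlation coefficients explicitly.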

  3. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    Science.gov (United States)

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
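    As a rough illustration of the multilevel idea, the sketch below estimates E[S_T] for geometric Brownian motion under Euler-Maruyama discretisation, combining a crude level-0 estimate with coupled coarse/fine correction terms on higher levels. The SDE and all parameters are hypothetical stand-ins for the groundwater-flow problem of the paper; only the telescoping-sum structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_paths(n_steps, dW):
    """Euler-Maruyama for dS = mu*S dt + sigma*S dW, vectorised over paths."""
    dt = T / n_steps
    S = np.full(dW.shape[0], S0)
    for k in range(n_steps):
        S = S + mu * S * dt + sigma * S * dW[:, k]
    return S

def mlmc_estimate(n_levels, n_samples):
    """Sum of level-wise corrections E[P_l - P_{l-1}] with coupled paths."""
    est = 0.0
    for level in range(n_levels):
        nf = 2 ** level                      # fine steps on this level
        dWf = rng.normal(0.0, np.sqrt(T / nf), size=(n_samples, nf))
        Pf = euler_paths(nf, dWf)
        if level == 0:
            est += Pf.mean()
        else:
            # Coarse path driven by the SAME Brownian increments, pairwise summed,
            # so the correction term has small variance.
            dWc = dWf.reshape(n_samples, nf // 2, 2).sum(axis=2)
            Pc = euler_paths(nf // 2, dWc)
            est += (Pf - Pc).mean()
    return est

est = mlmc_estimate(n_levels=6, n_samples=50_000)
print(est)  # analytic mean for comparison: S0 * exp(mu * T)
```

    In a full MLMC implementation the per-level sample counts would be optimised from the estimated level variances rather than held constant, which is where the cost savings over plain Monte Carlo come from.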

  5. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
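    A minimal sketch of one common Hessian-to-replicas prescription (not necessarily the exact CT14 algorithm): each Monte-Carlo replica displaces the central set along every Hessian eigenvector direction by a standard-normal amount, picking the plus or minus error set according to the sign of the draw so that asymmetric uncertainties are respected. The toy "PDF values" and error sets below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical central PDF values on a small x-grid, plus per-eigenvector
# asymmetric error sets f_plus[i], f_minus[i] (two eigenvector directions).
f0 = np.array([1.00, 0.80, 0.50])
f_plus = np.array([[1.05, 0.83, 0.52],
                   [1.02, 0.84, 0.51]])
f_minus = np.array([[0.96, 0.78, 0.49],
                    [0.99, 0.77, 0.48]])

def make_replica():
    """Displace f0 along each eigendirection by |R_i|, choosing the +/- error
    set by the sign of R_i (asymmetry-preserving normal sampling)."""
    f = f0.copy()
    for i in range(len(f_plus)):
        r = rng.normal()
        shift = (f_plus[i] - f0) if r > 0 else (f_minus[i] - f0)
        f = f + abs(r) * shift
    return f

replicas = np.array([make_replica() for _ in range(20_000)])
print("replica mean :", replicas.mean(axis=0))
print("replica std  :", replicas.std(axis=0))
```

    Asymmetric standard deviations would then be computed separately from the upward and downward excursions of the replica ensemble, which is what the master formulas in the paper formalise.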

  6. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
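    A schematic of the kind of simulation described: perturb both the input settings and the measured responses of a hypothetical one-factor linear DOE, refit the model many times, and inspect the Monte-Carlo spread of the fitted coefficients. The design, noise levels, and model are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical DOE: 8 settings of one input, true response y = 2 + 0.5 x.
x_design = np.linspace(1.0, 8.0, 8)
beta_true = np.array([2.0, 0.5])
sx, sy = 0.05, 0.10   # assumed input- and response-measurement std devs

def fit(x, y):
    """Ordinary least squares for y = b0 + b1 x."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Monte Carlo: perturb inputs AND responses, refit the DOE model each time.
coefs = np.array([
    fit(x_design + rng.normal(0, sx, x_design.shape),
        beta_true[0] + beta_true[1] * x_design
        + rng.normal(0, sy, x_design.shape))
    for _ in range(20_000)
])
print("MC coefficient std devs:", coefs.std(axis=0))
```

    Comparing these Monte-Carlo standard deviations with the standard errors reported by a single regression fit is the comparison the abstract describes.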

  7. Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

    CERN Document Server

    Watt, G.

    2012-01-01

    We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion. We then explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...

  8. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. Through this work, the problems in vector processing of Monte Carlo codes on vector processors have become clear, and it is recognized that there are difficulties in obtaining good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  9. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    Science.gov (United States)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence the investment capacity of the power grid, an investment capacity analysis model is built taking depreciation cost, sales price and sales quantity, net profit, financing and GDP of the secondary industry as the model variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis of grid investment capacity is obtained by Monte Carlo simulation.
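    The workflow sketched in this abstract (fit a distribution to each factor, check the fit with a Kolmogorov-Smirnov statistic, then Monte-Carlo-sample the factors into a capacity figure) might look like the following. The factor names, the normal-distribution assumption, and the additive capacity model are all hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    """One-sample Kolmogorov-Smirnov distance against N(mu, sigma)."""
    s = np.sort(sample)
    n = len(s)
    cdf = np.array([norm_cdf(v, mu, sigma) for v in s])
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

# Hypothetical historical observations of one influence factor (net profit).
profit_obs = rng.normal(50.0, 8.0, size=200)
mu, sigma = profit_obs.mean(), profit_obs.std(ddof=1)
d = ks_statistic(profit_obs, mu, sigma)
print(f"KS distance to fitted normal: {d:.3f}")  # small => fit acceptable

# Monte Carlo: combine sampled factors into an (entirely schematic) capacity.
n = 100_000
profit = rng.normal(mu, sigma, n)
financing = rng.normal(30.0, 5.0, n)
depreciation = rng.normal(12.0, 2.0, n)
capacity = profit + financing - depreciation
print(f"capacity mean = {capacity.mean():.1f}, 5th-95th pct = "
      f"{np.percentile(capacity, 5):.1f} .. {np.percentile(capacity, 95):.1f}")
```

    Reporting percentiles of the simulated capacity, rather than a single point estimate, is what turns the model into an uncertainty analysis.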

  10. Coupling of system thermal–hydraulics and Monte-Carlo code: Convergence criteria and quantification of correlation between statistical uncertainty and coupled error

    International Nuclear Information System (INIS)

    Wu, Xu; Kozlowski, Tomasz

    2015-01-01

    Highlights: • Coupling of Monte Carlo code Serpent and thermal–hydraulics code RELAP5. • A convergence criterion is developed based on the statistical uncertainty of power. • Correlation between MC statistical uncertainty and coupled error is quantified. • Both UO 2 and MOX single assembly models are used in the coupled simulation. • Validation of coupling results with a multi-group transport code DeCART. - Abstract: Coupled multi-physics approach plays an important role in improving computational accuracy. Compared with deterministic neutronics codes, Monte Carlo codes have the advantage of a higher resolution level. In the present paper, a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, Serpent, is coupled with a thermal–hydraulics safety analysis code, RELAP5. The coupled Serpent/RELAP5 code capability is demonstrated by the improved axial power distribution of UO 2 and MOX single assembly models, based on the OECD-NEA/NRC PWR MOX/UO 2 Core Transient Benchmark. Comparisons of calculation results using the coupled code with those from the deterministic methods, specifically heterogeneous multi-group transport code DeCART, show that the coupling produces more precise results. A new convergence criterion for the coupled simulation is developed based on the statistical uncertainty in power distribution in the Monte Carlo code, rather than ad-hoc criteria used in previous research. The new convergence criterion is shown to be more rigorous, equally convenient to use but requiring a few more coupling steps to converge. Finally, the influence of Monte Carlo statistical uncertainty on the coupled error of power and thermal–hydraulics parameters is quantified. The results are presented such that they can be used to find the statistical uncertainty to use in Monte Carlo in order to achieve a desired precision in coupled simulation

  11. Calculating Remote Sensing Reflectance Uncertainties Using an Instrument Model Propagated Through Atmospheric Correction via Monte Carlo Simulations

    Science.gov (United States)

    Karakoylu, E.; Franz, B.

    2016-01-01

    A first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003; the uncertainty is given for remote sensing reflectance (Rrs) at 443 nm.

  12. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety eight quasi-randomly Monte Carlo sampled model runs were completed and the uncertainties were quantified in tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine further these outputs. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland, Trinidad Head, California and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. 
Although

  13. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.

  14. Reducing uncertainty of Monte Carlo estimated fatigue damage in offshore wind turbines using FORM

    DEFF Research Database (Denmark)

    H. Horn, Jan-Tore; Jensen, Jørgen Juncher

    2016-01-01

    Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue...

  15. Estimation of balance uncertainty using Direct Monte Carlo Simulation (DSMC) on a CPU-GPU architecture

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2017-01-01

    Full Text Available The estimation of balance uncertainty using conventional statistical and error propagation methods has been found to be both approximate and laborious to the point of being untenable. Direct Simulation by Monte Carlo (DSMC) has been shown...

  16. Use of the Monte Carlo uncertainty combination method in nuclear reactor setpoint evaluation

    International Nuclear Information System (INIS)

    Berte, Frank J.

    2004-01-01

    This paper provides an overview of an alternate method for performing instrument uncertainty calculations and instrument setpoint determination when a setpoint analysis requires techniques beyond the widely used 'Root Sum Squares' approach. The paper addresses when application of the Monte Carlo (MC) method should be considered, application of the MC method when independent and/or dependent uncertainties are involved, and finally the interpretation of the results obtained. Both single-module and instrument-string sample applications are presented. (author)
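    The contrast between the two approaches can be shown in a few lines: for independent Gaussian error terms, Monte-Carlo combination reproduces the Root Sum Squares result, while a shared (dependent) error term makes the true combined uncertainty larger than the RSS of the individual terms. The error magnitudes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500_000

# Hypothetical 1-sigma uncertainties of three loop components (percent span).
sigmas = np.array([0.5, 0.3, 0.4])
rss = np.sqrt(np.sum(sigmas**2))

# Independent case: MC agrees with Root Sum Squares.
indep = sum(rng.normal(0.0, s, n) for s in sigmas)
print(f"RSS = {rss:.3f},  MC independent = {indep.std():.3f}")

# Dependent case: all three terms share a common calibration error,
# so a naive RSS of the individual sigmas understates the total.
common = rng.normal(0.0, 0.3, n)
dep = sum(common + rng.normal(0.0, s, n) for s in sigmas)
print(f"MC with shared term = {dep.std():.3f}")
```

    This is the situation the paper targets: RSS is exact only under independence (and normality), whereas the MC combination handles dependent terms without extra algebra.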

  17. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  18. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  20. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Czech Academy of Sciences Publication Activity Database

    Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr

    2012-01-01

    Roč. 70, č. 1 (2012), s. 315-323 ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775

  1. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty-calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for applying the ISO GUM to the preparation of radioactive standards were fulfilled. (author)
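    A schematic version of such a validation, for a simple mass difference m = m1 - m2 with assumed standard uncertainties: the GUM law of propagation of uncertainty gives u(m) = sqrt(u1^2 + u2^2), and a Monte Carlo propagation of distributions should reproduce it. The numbers are illustrative, not the paper's data.

```python
import math
import numpy as np

rng = np.random.default_rng(2016)

# Hypothetical weighings (g) and their standard uncertainties.
m1, u1 = 10.012_34, 0.000_05
m2, u2 = 9.876_54, 0.000_05

# ISO GUM law of propagation of uncertainty for m = m1 - m2
# (sensitivity coefficients +1 and -1):
u_gum = math.sqrt(u1**2 + u2**2)

# Monte Carlo propagation of distributions (GUM Supplement 1 style):
# sample both inputs and form the output distribution directly.
n = 1_000_000
m = rng.normal(m1, u1, n) - rng.normal(m2, u2, n)
print(f"GUM u(m) = {u_gum:.6f} g,  MC u(m) = {m.std():.6f} g")
```

    Agreement between the two numbers, within a stated numerical tolerance, is precisely the consistency check the abstract reports.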

  2. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  3. Latent uncertainties of the precalculated track Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Renaud, Marc-André; Seuntjens, Jan [Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4 (Canada); Roberge, David [Département de radio-oncologie, Centre Hospitalier de l’Université de Montréal, Montreal, Quebec H2L 4M1 (Canada)

    2015-01-15

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D{sub max}. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of

  4. Latent uncertainties of the precalculated track Monte Carlo method

    International Nuclear Information System (INIS)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    2015-01-01

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D max . Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the

  5. Uncertainties in personal dosimetry for external radiation: A Monte Carlo approach

    International Nuclear Information System (INIS)

    Van Dijk, J. W. E.

    2006-01-01

    This paper explores the possibilities of numerical methods for uncertainty analysis of personal dosimetry systems. Using a numerical method based on Monte Carlo sampling the probability density function (PDF) of the dose measured using a personal dosemeter can be calculated using type-test measurements. From this PDF the combined standard uncertainty in the measurements with the dosemeter and the confidence interval can be calculated. The method calculates the output PDF directly from the PDFs of the inputs of the system such as the spectral distribution of the radiation and distributions of detector parameters like sensitivity and zero signal. The method can be used not only in its own right but also for validating other methods because it is not limited by restrictions that apply to using the Law of Propagation of Uncertainty and the Central Limit Theorem. The use of the method is demonstrated using the type-test data of the NRG-TLD. (authors)

  6. The use of Monte-Carlo simulation and order statistics for uncertainty analysis of a LBLOCA transient (LOFT-L2-5)

    International Nuclear Information System (INIS)

    Chojnacki, E.; Benoit, J.P.

    2007-01-01

    Best estimate computer codes are increasingly used in the nuclear industry for accident management procedures and are planned to be used for licensing procedures. Contrary to conservative codes, which are supposed to give penalizing results, best estimate codes attempt to calculate accidental transients in a realistic way. It therefore becomes of prime importance, in particular for a technical organization such as IRSN in charge of safety assessment, to know the uncertainty on the results of such codes. Thus, CSNI sponsored a few years ago the Uncertainty Methods Study (UMS) program (published in 1998) on uncertainty methodologies used for a SBLOCA transient (LSTF-CL-18) and is now supporting the BEMUSE program for a LBLOCA transient (LOFT-L2-5). The large majority of BEMUSE participants (9 out of 10) use uncertainty methodologies based on probabilistic modelling, and all of them use Monte-Carlo simulations to propagate the uncertainties through their computer codes. All of the 'probabilistic participants' also intend to use order statistics to determine the sampling size of the Monte-Carlo simulation and to derive the uncertainty ranges associated with their computer calculations. The first aim of this paper is to recall the advantages, and also the assumptions, of probabilistic modelling and more specifically of order statistics (such as Wilks' formula) in uncertainty methodologies. Indeed, Monte-Carlo methods provide flexible and extremely powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis. However, it is important to keep in mind that probabilistic methods are data intensive: they cannot produce robust results unless a considerable body of information has been collected. A main interest of order statistics results is that they allow an unlimited number of uncertain parameters to be taken into account and, from a restricted number of code calculations, to provide statistical
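    Wilks' formula fixes the number of code runs n so that the largest (or order-th largest) sampled output bounds the p-quantile with confidence beta, independently of how many uncertain input parameters there are. The sketch below computes the classic one-sided sample sizes and checks the first-order 95%/95% case empirically.

```python
import math
import numpy as np

def wilks_n(p=0.95, beta=0.95, order=1):
    """Smallest n such that the order-th largest of n runs is a one-sided
    (p, beta) tolerance bound:
    sum_{k=0}^{order-1} C(n,k) p^(n-k) (1-p)^k <= 1 - beta."""
    n = order
    while True:
        tail = sum(math.comb(n, k) * p**(n - k) * (1 - p)**k
                   for k in range(order))
        if tail <= 1 - beta:
            return n
        n += 1

n1 = wilks_n(order=1)
n2 = wilks_n(order=2)
print("first order 95/95 :", n1)   # classic result: 59
print("second order 95/95:", n2)   # classic result: 93

# Empirical check: how often does the max of n1 runs exceed the true 95th
# percentile of an arbitrary output distribution?
rng = np.random.default_rng(0)
q95 = 1.6448536  # 95th percentile of the standard normal
hits = np.mean(rng.normal(size=(100_000, n1)).max(axis=1) >= q95)
print(f"coverage of max-of-{n1} bound: {hits:.3f}")
```

    The distribution-free character of the bound, visible in the code (the 59 does not depend on the output distribution chosen for the check), is exactly why the number of uncertain parameters is unlimited.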

  7. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  8. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    Science.gov (United States)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
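    The core of the propagation-of-distributions step described above is drawing correlated fixed-point resistances from a multivariate Gaussian and pushing each draw through the measurement model. The sketch below does this for a resistance ratio W = R(T)/R(TPW) with made-up means, uncertainties, and correlation; it is a generic illustration, not the paper's 25 Ω thermometer data.

```python
import numpy as np

rng = np.random.default_rng(90)

# Hypothetical resistances (ohm): at the water triple point and at the
# measurement temperature, with correlated uncertainties from shared sources.
mean = np.array([25.000_00, 31.000_00])     # [R_tpw, R_T]
u = np.array([0.000_10, 0.000_12])          # standard uncertainties
rho = 0.8                                   # assumed correlation
cov = np.array([[u[0]**2,            rho * u[0] * u[1]],
                [rho * u[0] * u[1],  u[1]**2]])

R_tpw, R_T = rng.multivariate_normal(mean, cov, size=500_000).T
W = R_T / R_tpw                              # ITS-90 resistance ratio

# Positive correlation shrinks u(W) relative to the uncorrelated case.
u_uncorr = W.mean() * np.hypot(u[1] / mean[1], u[0] / mean[0])
print(f"u(W) correlated   = {W.std():.2e}")
print(f"u(W) uncorrelated = {u_uncorr:.2e}")
```

    This is the practical payoff of the multivariate sampling the abstract describes: neglecting the correlations among fixed-point resistances would overstate (or in other sign conventions understate) the uncertainty of the ratio.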

  9. Uncertainties associated with the use of the KENO Monte Carlo criticality codes

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.

    1989-01-01

    The KENO multi-group Monte Carlo criticality codes have earned the reputation of being efficient, user friendly tools especially suited for the analysis of situations commonly encountered in the storage and transportation of fissile materials. Throughout their twenty years of service, a continuing effort has been made to maintain and improve these codes to meet the needs of the nuclear criticality safety community. Foremost among these needs is the knowledge of how to utilize the results safely and effectively. Therefore it is important that code users be aware of uncertainties that may affect their results. These uncertainties originate from approximations in the problem data, methods used to process cross sections, and assumptions, limitations and approximations within the criticality computer code itself. 6 refs., 8 figs., 1 tab

  10. Parameter sensitivity and uncertainty of the forest carbon flux model FORUG : a Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Verbeeck, H.; Samson, R.; Lemeur, R. [Ghent Univ., Ghent (Belgium). Laboratory of Plant Ecology; Verdonck, F. [Ghent Univ., Ghent (Belgium). Dept. of Applied Mathematics, Biometrics and Process Control

    2006-06-15

    The FORUG model is a multi-layer process-based model that simulates carbon dioxide (CO{sub 2}) and water exchange between forest stands and the atmosphere. The main model outputs are net ecosystem exchange (NEE), total ecosystem respiration (TER), gross primary production (GPP) and evapotranspiration. This study used a sensitivity analysis to identify the parameters contributing to NEE uncertainty in the FORUG model. The aim was to determine whether it is necessary to estimate the uncertainty of all parameters of a model to determine overall output uncertainty. Data used in the study were the meteorological and flux data of beech trees in Hesse. The Monte Carlo method was used to rank sensitivity and uncertainty parameters in combination with a multiple linear regression. Simulations were run in which parameters were assigned probability distributions and the effect of variance in the parameters on the output distribution was assessed. The uncertainty of the output for NEE was estimated. Based on the arbitrary uncertainty of 10 key parameters, a standard deviation of 0.88 Mg C per year for NEE was found, which was equal to 24 per cent of the mean value of NEE. The sensitivity analysis showed that the overall output uncertainty of the FORUG model could be determined by accounting for only a few key parameters, which were identified as corresponding to critical parameters in the literature. It was concluded that the 10 most important parameters determined more than 90 per cent of the output uncertainty. High-ranking parameters included soil respiration, photosynthesis, and crown architecture. It was concluded that the Monte Carlo technique is a useful tool for ranking the uncertainty of parameters of process-based forest flux models. 48 refs., 2 tabs., 2 figs.
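    The ranking approach (Monte Carlo sampling combined with multiple linear regression) can be illustrated with a toy model; the three-parameter linear model and the 10% input uncertainty below are assumptions for illustration, not the FORUG model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a flux model: output depends strongly on p0,
# weakly on p1, and almost not at all on p2.
def model(p):
    return 3.0 * p[:, 0] + 0.5 * p[:, 1] + 0.01 * p[:, 2]

n = 5_000
params = rng.normal(1.0, 0.1, size=(n, 3))   # assumed 10 % uncertainty
out = model(params)

# Standardized regression coefficients (SRC) from a multiple linear
# regression of standardized outputs on standardized inputs.
X = (params - params.mean(0)) / params.std(0)
y = (out - out.mean()) / out.std()
src, *_ = np.linalg.lstsq(X, y, rcond=None)

ranking = np.argsort(-np.abs(src))           # most influential first
print("SRC:", np.round(src, 3), "ranking:", ranking)
```

    In this sketch a couple of parameters carry essentially all of the output variance, mirroring the paper's finding that a few key parameters dominate.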

  11. Monte Carlo approaches for uncertainty quantification of criticality for system dimensions

    International Nuclear Information System (INIS)

    Kiedrowski, B.C.; Brown, F.B.

    2013-01-01

    One of the current challenges in nuclear engineering computations is the issue of performing uncertainty analysis for either calculations or experimental measurements. This paper specifically focuses on the issue of estimating the uncertainties arising from geometric tolerances. For this paper, two techniques for uncertainty quantification are studied. The first is the forward propagation technique, which can be thought of as a 'brute force' approach; uncertain system parameters are randomly sampled, the calculation is run, and uncertainties are found from the empirically obtained distribution of results. This approach makes no approximations in principle, but is very computationally expensive. The other approach investigated is the adjoint-based approach; system sensitivities are computed via a single Monte Carlo calculation and are used with a covariance matrix to provide a linear estimate of the uncertainty. Demonstration calculations are performed with the MCNP6 code for both techniques. The two techniques are tested on two cases: the first case is a solid, bare cylinder of Pu metal, while the second case is a can of plutonium nitrate solution. The results show that the forward and adjoint approaches agree in cases where the responses are not strongly nonlinear in the uncertain parameters. In other cases, the uncertainties in the effective multiplication k disagree for reasons not yet known.
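    A minimal sketch of the two techniques, using a deliberately linear toy response in place of an MCNP6 k-effective calculation (all numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy response: k(r) for a system dimension r (placeholder, not MCNP)
def k_eff(r):
    return 1.0 + 0.8 * (r - 5.0)        # near-linear around r0 = 5

r0, sigma_r = 5.0, 0.01                 # assumed geometric tolerance

# Forward ("brute force") propagation: sample, evaluate, take the spread
r_samples = rng.normal(r0, sigma_r, size=50_000)
sigma_forward = k_eff(r_samples).std(ddof=1)

# Adjoint-style linear estimate: sensitivity times input uncertainty
dk_dr = (k_eff(r0 + 1e-6) - k_eff(r0 - 1e-6)) / 2e-6
sigma_linear = abs(dk_dr) * sigma_r

print(sigma_forward, sigma_linear)      # agree when the response is linear
```

    For a nonlinear `k_eff` the two estimates would diverge, which is the disagreement the paper reports for some cases.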

  12. Monte Carlo calculations of kQ, the beam quality conversion factor

    International Nuclear Information System (INIS)

    Muir, B. R.; Rogers, D. W. O.

    2010-01-01

    Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or manufacturers' user's manuals. The dose-to-air in the chamber is calculated using the EGSnrc user-code egs_chamber using 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality, expressed as %dd(10)_x and TPR{sup 20}{sub 10}, are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum of one standard deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross-section for photons in water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values. Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more ...

  13. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables with those having identical means but lower variances.

  14. Stand-alone core sensitivity and uncertainty analysis of ALFRED from Monte Carlo simulations

    International Nuclear Information System (INIS)

    Pérez-Valseca, A.-D.; Espinosa-Paredes, G.; François, J.L.; Vázquez Rodríguez, A.; Martín-del-Campo, C.

    2017-01-01

    Highlights: • Methodology based on Monte Carlo simulation. • Sensitivity analysis of a Lead Fast Reactor (LFR). • Uncertainty and regression analysis of an LFR. • For a 10% change in the core inlet flow, the response in thermal power is 0.58%. • For a 2.5% change in the inlet lead temperature, the response in power is 1.87%. - Abstract: The aim of this paper is the sensitivity and uncertainty analysis of a Lead-Cooled Fast Reactor (LFR) based on Monte Carlo simulations with sample sizes up to 2000. The methodology developed in this work considers the uncertainty of sensitivities and the uncertainty of output variables due to a single-input-variable variation. The Advanced Lead fast Reactor European Demonstrator (ALFRED) is analyzed to determine the behavior of the essential parameters due to effects of mass flow and temperature of liquid lead. The ALFRED core mathematical model developed in this work is fully transient; it takes into account the heat transfer in an annular fuel pellet design, the thermo-fluid behavior in the core, and the neutronic processes, which are modeled with point kinetics with fuel temperature and expansion feedback effects. The sensitivity evaluated in terms of the relative standard deviation (RSD) showed that for a 10% change in the core inlet flow the response in thermal power is 0.58%, and for a 2.5% change in the inlet lead temperature it is 1.87%. The regression analysis with mass flow rate as the predictor variable showed statistically valid cubic correlations for neutron flux, and a linear relationship for neutron flux as a function of the lead temperature. No statistically valid correlation was observed for the reactivity as a function of the mass flow rate or of the lead temperature. These correlations are useful for the study, analysis, and design of any LFR.

  15. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently.
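    The two-step idea (build a cheap surrogate, then sample it with a low-discrepancy sequence) can be sketched with a one-dimensional stand-in for the forward model; the polynomial surrogate and van der Corput sequence below are illustrative choices, not the authors' sparse grid construction:

```python
import numpy as np

# Toy forward model standing in for an expensive groundwater simulator
def forward(x):
    return np.sin(2 * np.pi * x)

# 1) Surrogate: polynomial fit through a sparse set of forward evaluations
nodes = np.linspace(0.0, 1.0, 9)
surrogate = np.poly1d(np.polyfit(nodes, forward(nodes), deg=6))

# 2) Quasi-Monte Carlo: low-discrepancy (van der Corput, base 2) points
#    evaluated on the cheap surrogate instead of pseudo-random MCMC draws
def van_der_corput(n, base=2):
    seq = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q:
            denom *= base
            q, r = divmod(q, base)
            x += r / denom
        seq[i] = x
    return seq

x = van_der_corput(4096)
vals = surrogate(x)
print("surrogate mean:", vals.mean())   # exact mean of sin(2*pi*x) is 0
```

    Every point after the nine training runs costs only a polynomial evaluation, which is the source of the speed-up the abstract claims.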

  16. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    International Nuclear Information System (INIS)

    Pevey, Ronald E.

    2005-01-01

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL
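    The trade-off described above can be reproduced with a toy calculation; the bias, bias uncertainty, USL, and margin below are assumed numbers for illustration, not those of the paper:

```python
import numpy as np
from math import erf, sqrt

# Illustrative benchmarking-step outcomes (assumed values)
bias, sigma_bias, usl = 0.002, 0.005, 0.98
n_sd = 2.0            # standard deviations added to k_calc before USL check

def risk(sigma_calc, k_true=1.0):
    """P(k_calc + n_sd*sigma_calc <= USL) although the system is critical."""
    mu = k_true - bias                            # expected calculated k
    s = sqrt(sigma_bias**2 + sigma_calc**2)
    z = (usl - n_sd * sigma_calc - mu) / s
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))      # normal CDF at z

sigmas = np.linspace(1e-4, 0.01, 200)
best = sigmas[np.argmin([risk(s) for s in sigmas])]
print(f"risk-minimizing calculational sigma ~ {best:.4f}")
```

    With these numbers the minimizer is strictly positive: running more histories (smaller sigma) past a certain point stops reducing, and then increases, the chance of accepting a supercritical configuration, because the added margin `n_sd*sigma_calc` shrinks along with sigma.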

  17. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run time, since the pressure-time evolution is defined at the design phase. This lack of flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro’s equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and the propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.

  18. Propagation of Nuclear Data Uncertainties in Integral Measurements by Monte-Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Noguere, G.; Bernard, D.; De Saint-Jean, C. [CEA Cadarache, 13 - Saint Paul lez Durance (France)

    2006-07-01

    Full text of the publication follows: The generation of multi-group cross sections together with relevant uncertainties is fundamental to assess the quality of integral data. The key pieces of information needed to propagate the microscopic experimental uncertainties to macroscopic reactor calculations are (1) the experimental covariance matrices, (2) the correlations between the parameters of the model and (3) the covariance matrices for the multi-group cross sections. The propagation of microscopic errors by the Monte-Carlo technique was applied to determine the accuracy of the integral trends provided by the OSMOSE experiment carried out in the MINERVE reactor of the CEA Cadarache. The technique consists in coupling resonance shape analysis and deterministic codes. The integral trend and its accuracy obtained for the {sup 237}Np(n,{gamma}) reaction will be presented. (author)

  19. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program PATREC-MC has been written to solve the problem with the system components given in the fault tree representation. The second program MONARC 2 has been written to solve the problem of complex systems reliability by the Monte Carlo simulation, here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels

  20. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) describes the introduction of basic Monte Carlo and its classification in the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) provides the statistical uncertainty of Monte Carlo estimates, and Section (4) describes in brief the importance of variance reduction techniques while sampling particles such as photons or neutrons in the process of radiation transport.
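    The statistical uncertainty of a Monte Carlo estimate (the topic of Section 3) follows the familiar 1/sqrt(N) law; a minimal uncollided-transmission tally through an assumed slab of two mean free paths illustrates it:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tally: fraction of particles transmitted uncollided through a slab
# two mean free paths thick (analytic answer: exp(-2)).
n = 100_000
path_lengths = rng.exponential(1.0, size=n)   # distance to first collision
transmitted = path_lengths > 2.0              # uncollided through the slab

estimate = transmitted.mean()
std_err = transmitted.std(ddof=1) / np.sqrt(n)   # shrinks as 1/sqrt(N)

print(f"{estimate:.4f} +/- {std_err:.4f} (exact {np.exp(-2):.4f})")
```

    Quadrupling the number of histories halves the standard error, which is why variance reduction (Section 4) matters more than brute-force sampling for deep-penetration problems.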

  1. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  2. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
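    The polynomial-estimator idea can be sketched as follows; the k_eff value and its derivatives are invented numbers standing in for the output of a single criticality run with differential tallies:

```python
import numpy as np

# Suppose one Monte Carlo run yields k_eff and its first two derivatives
# with respect to a search parameter c (values assumed, at c0 = 0).
k0, dk_dc, d2k_dc2 = 1.0250, -8.0e-4, 1.0e-6

# Second-order polynomial estimator of k_eff(c); solve k_eff(c) = 1:
#   0.5 * d2k_dc2 * c**2 + dk_dc * c + (k0 - 1) = 0
roots = np.roots([0.5 * d2k_dc2, dk_dc, k0 - 1.0])
c_crit = min(r.real for r in roots if r.real > 0)

print(f"estimated critical parameter value ~ {c_crit:.1f}")
```

    The smaller positive root is the physically relevant one here; a purely linear estimate ((k0 - 1) / |dk_dc| ≈ 31.3) lands close by, with the quadratic term supplying the correction.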

  3. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  4. Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Young-Cheol; Her, Jae-Young; Lee, Seung-Jun; Lee, Kang-Jin [Korea Gas Corporation, Daegu (Korea, Republic of)

    2014-07-15

    To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) Even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) Even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) If type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally offers an undervalued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for the 95% level of confidence was determined to be 1.96.
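    A minimal GUM-versus-MCM comparison for a product of two rectangularly distributed inputs (an assumed toy model, not the authors' gas-measurement problem):

```python
import numpy as np

rng = np.random.default_rng(5)

# Measurement model y = x1 * x2 with rectangular (non-normal) inputs
u1, u2 = 0.05, 0.05                  # half-widths of the rectangular dists
x1_0, x2_0 = 1.0, 1.0

# ISO-GUM: linear propagation; standard uncertainty of a rectangular
# distribution of half-width a is a / sqrt(3).
u_y = np.hypot(x2_0 * u1 / np.sqrt(3), x1_0 * u2 / np.sqrt(3))
gum_interval = (x1_0 * x2_0 - 1.96 * u_y, x1_0 * x2_0 + 1.96 * u_y)

# MCM: propagate the full distributions, read off the 95 % interval
x1 = rng.uniform(x1_0 - u1, x1_0 + u1, 200_000)
x2 = rng.uniform(x2_0 - u2, x2_0 + u2, 200_000)
mcm_interval = tuple(np.percentile(x1 * x2, [2.5, 97.5]))

print("GUM:", gum_interval)
print("MCM:", mcm_interval)
```

    For this nearly triangular output the two intervals land close together; the MCM interval becomes the reference whenever the normality assumption behind the GUM coverage factor breaks down.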

  5. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    transport in complex geometries is straightforward, while even the simplest finite geometries (e.g., thin foils) are very difficult to handle with the transport equation. The main drawback of the Monte Carlo method lies in its random nature: all results are affected by statistical uncertainties, which can be reduced at the expense of increasing the sampled population and, hence, the computation time. Under special circumstances, the statistical uncertainties may be lowered by using variance-reduction techniques. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo was coined in the 1940s by physicists working on nuclear weapon projects at the Los Alamos National Laboratory

  6. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method; Validacao da incerteza de pesagens no preparo de padroes de radionuclideos por Metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Cacais, F.L.; Delgado, J.U., E-mail: facacais@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Loayza, V.M. [Instituto Nacional de Metrologia (INMETRO), Rio de Janeiro, RJ (Brazil). Qualidade e Tecnologia

    2016-07-01

    In preparing solutions for the production of radionuclide metrology standards it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by both uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)

  7. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
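    Importance sampling, one of the algorithms reviewed here, can be shown in a few lines; the shifted-normal proposal below is a standard textbook choice for a Gaussian tail probability:

```python
import numpy as np

rng = np.random.default_rng(6)

# Estimate the tail probability P(X > 4) for X ~ N(0, 1). The analog
# estimator almost never scores, so importance sampling draws from a
# proposal shifted into the tail, q = N(4, 1), and reweights by p/q.
n = 50_000
exact = 3.167e-5                        # Phi(-4), for reference

# Analog Monte Carlo: raw indicator average
analog = (rng.standard_normal(n) > 4.0).mean()

# Importance sampling from the shifted proposal
x = rng.standard_normal(n) + 4.0
weights = np.exp(-x**2 / 2) / np.exp(-(x - 4.0)**2 / 2)   # p(x) / q(x)
is_estimate = (weights * (x > 4.0)).mean()

print(analog, is_estimate, exact)
```

    With the same sample size, the analog estimator sees only a handful of scores (often zero) while the reweighted estimator lands within a few percent of the exact value.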

  8. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  9. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  10. Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.

    2006-01-01

    Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions

  11. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    In recent years, there has been an increase in the application of distributed, physically-based and integrated hydrological models. Many questions regarding how to properly calibrate and validate distributed models and assess the uncertainty of the estimated parameters and the spatially ... -site validation must complement the usual time validation. In this study, we develop, through an application, a comprehensive framework for multi-criteria calibration and uncertainty assessment of distributed physically-based, integrated hydrological models. A revised version of the generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining ...

  12. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
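    The analog random-walk estimate described above can be sketched for the simplest case, a single component with a constant failure rate; the rate, mission time, and walk count below are illustrative assumptions, not values from the paper. A forced-transition variant would instead sample the failure time conditionally within (0, T] and carry the weight 1 − exp(−λT).

```python
import math
import random

random.seed(0)

LAM = 1e-3   # constant failure rate per hour (illustrative assumption)
T = 100.0    # mission time in hours (illustrative assumption)
N = 200_000  # number of random walks

# Analog Monte Carlo: sample the first failure time of the component
# and score the fraction of walks that fail before the mission time T.
fails = sum(1 for _ in range(N) if random.expovariate(LAM) <= T)
u_analog = fails / N

# Exact unreliability of a single exponential component, for comparison.
u_exact = 1.0 - math.exp(-LAM * T)

# One-sigma statistical uncertainty of the binomial score.
sigma = math.sqrt(u_analog * (1.0 - u_analog) / N)
```

    For this trivial system the analytical answer is available, which is exactly why it is useful as a check on the random-walk machinery.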

  13. Present status and future prospects of neutronics Monte Carlo

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1990-01-01

    It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to Eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)

  14. Application of a Monte Carlo framework with bootstrapping for quantification of uncertainty in baseline map of carbon emissions from deforestation in Tropical Regions

    Science.gov (United States)

    William Salas; Steve Hagen

    2013-01-01

    This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...

  15. Parameter uncertainty and model predictions: a review of Monte Carlo results

    International Nuclear Information System (INIS)

    Gardner, R.H.; O'Neill, R.V.

    1979-01-01

    Studies of parameter variability by Monte Carlo analysis are reviewed, in which the model is repeatedly simulated with randomly selected parameter values. At the beginning of each simulation, parameter values are chosen from specific frequency distributions. This process is continued for a number of iterations sufficient to converge on an estimate of the frequency distribution of the output variables. The purpose was to explore the general properties of error propagation in models. Testing the implicit assumptions of analytical methods and pointing out counter-intuitive results produced by the Monte Carlo approach are additional points covered.
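    The procedure reviewed here — draw each parameter from its frequency distribution, run the model, and accumulate the distribution of the output — can be sketched with a toy two-parameter model; the model and both parameter distributions are assumptions chosen for illustration.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical model: exponential decay y = a * exp(-k * t), evaluated at t = 5.
def model(a, k, t=5.0):
    return a * math.exp(-k * t)

# Each simulation draws parameter values from their frequency distributions.
N = 50_000
outputs = []
for _ in range(N):
    a = random.gauss(10.0, 1.0)    # assumed distribution for parameter a
    k = random.gauss(0.2, 0.02)    # assumed distribution for parameter k
    outputs.append(model(a, k))

# The empirical distribution of `outputs` estimates the output uncertainty.
mean_y = statistics.fmean(outputs)
sd_y = statistics.stdev(outputs)
```

    A histogram of `outputs` is the converged frequency distribution the abstract refers to; `mean_y` and `sd_y` summarize it.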

  16. Measurement uncertainty of dissolution test of acetaminophen immediate release tablets using Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Daniel Cancelli Romero

    2017-10-01

    Analytical results are widely used to assess batch-by-batch conformity and pharmaceutical equivalence, as well as in the development of drug products. Despite this, few papers describing the measurement uncertainty estimation associated with these results were found in the literature. Here, we describe a simple procedure used for estimating the measurement uncertainty associated with the dissolution test of acetaminophen tablets. A fractional factorial design was used to define a mathematical model that explains the amount of acetaminophen dissolved (%) as a function of time of dissolution (from 20 to 40 minutes), volume of dissolution media (from 800 to 1000 mL), pH of dissolution media (from 2.0 to 6.8), and rotation speed (from 40 to 60 rpm). Using Monte Carlo simulations, we estimated the measurement uncertainty for the dissolution test of acetaminophen tablets (95.2 ± 1.0%, with a 95% confidence level). Rotation speed was the most important source of uncertainty, contributing about 96.2% of the overall uncertainty. Finally, it is important to note that the uncertainty calculated in this paper reflects the expected uncertainty of the dissolution test, and does not consider variations in the content of acetaminophen.

  17. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...

  18. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  19. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little ...

  20. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model, which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically and make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic ...

  1. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
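    One of the sampling techniques outlined in such introductions, inversion of the cumulative distribution function, can be sketched as follows; the exponential distribution is used purely as the illustrative example.

```python
import math
import random

random.seed(2)

# Inversion sampling: for the exponential pdf f(x) = lam * exp(-lam * x),
# the CDF is F(x) = 1 - exp(-lam * x); inverting F at a uniform random
# number u gives x = -ln(1 - u) / lam.
def sample_exponential(lam):
    u = random.random()
    return -math.log(1.0 - u) / lam

lam = 2.0
samples = [sample_exponential(lam) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)   # should approach 1/lam = 0.5
```

    The same recipe works for any distribution whose CDF can be inverted in closed form; otherwise rejection sampling is the usual fallback.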

  2. Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations

    Science.gov (United States)

    Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.

    2017-08-01

    The s-process in massive stars produces the weak component of the s-process (nuclei up to A ˜ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.

  3. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  4. New Monte Carlo-based method to evaluate fission fraction uncertainties for the reactor antineutrino experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ma, X.B., E-mail: maxb@ncepu.edu.cn; Qiu, R.M.; Chen, Y.X.

    2017-02-15

    Uncertainties regarding fission fractions are essential in understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning. For example, between ²³⁵U and ²³⁹Pu, the covariance coefficient changes from 0.15 to −0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fraction and covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup. The new value is about 8.3% smaller. - Highlights: • The covariance coefficients between isotopes vs reactor burnup may change sign because of two opposing effects. • The relation between fission fraction uncertainty and atomic density is studied for the first time. • A new MC-based method of evaluating the covariance coefficients between isotopes is proposed.

  5. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    Science.gov (United States)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as two-step analysis. In the first step the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adapted within Geodesy. The second step consists of Monte-Carlo-Simulations (MC-simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. Final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds, as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by using their covariance matrix. It allows a new way for uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As practical example the local tie network in "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
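    The two-step idea — model the uncertainty of the raw observations, then Monte-Carlo-simulate the complete processing chain and inspect the resulting point cloud of estimates — can be sketched for a deliberately tiny "network": a levelling line with two height differences. All numbers below are illustrative assumptions, not the Metsähovi data.

```python
import random

random.seed(3)

# Step 1 (GUM-style input model): each observed height difference in a
# small levelling line carries Gaussian noise; the 1 mm standard
# uncertainty and the benchmark heights are illustrative assumptions.
true_heights = [0.0, 1.250, 2.730]   # heights of three points, metres
sigma_obs = 0.001                    # standard uncertainty per observation, metres

# Step 2 (MC simulation of the processing chain): regenerate the raw
# observations many times, run the (here trivial) adjustment, and collect
# the estimated height of the last point as a "point cloud".
N = 20_000
estimates = []
for _ in range(N):
    d1 = (true_heights[1] - true_heights[0]) + random.gauss(0.0, sigma_obs)
    d2 = (true_heights[2] - true_heights[1]) + random.gauss(0.0, sigma_obs)
    estimates.append(d1 + d2)        # adjusted height of the third point

mean_h = sum(estimates) / N
var_h = sum((e - mean_h) ** 2 for e in estimates) / (N - 1)
```

    In a real geodetic network the "adjustment" step would be a full least-squares solution, and the spread of the point cloud replaces the covariance matrix of classical variance propagation.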

  6. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation-based Monte Carlo criticality search method is proposed. • The method obtains accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results' accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may have unstable results because of the uncertainties of criticality results. In this paper, a new perturbation-based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate the k eff changing function using the initial k eff and differential coefficient results, and solves polynomial equations to get the criticality search results. The new perturbation-based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality search problems in density, enrichment and concentration are carried out. Results show that this method is quite promising in accuracy and efficiency, and has advantages compared with other criticality search methods.
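    The final step of such a method — solving a low-order polynomial expansion of k_eff for the perturbation that makes the system critical — can be sketched as follows. The expansion coefficients below are invented for illustration; in the actual method they would come from the perturbation tallies of a single criticality run.

```python
import math

# Assumed second-order expansion k_eff(d) = k0 + k1*d + k2*d**2,
# where d is, e.g., a fuel-density change from the nominal value.
k0 = 0.985    # k_eff at the nominal state (illustrative)
k1 = 0.060    # first-order differential coefficient (illustrative)
k2 = -0.004   # second-order differential coefficient (illustrative)

# Criticality search: solve k2*d**2 + k1*d + (k0 - 1) = 0 for d.
disc = k1**2 - 4.0 * k2 * (k0 - 1.0)
roots = [(-k1 + s * math.sqrt(disc)) / (2.0 * k2) for s in (+1.0, -1.0)]

# Take the physically meaningful root: the smallest positive perturbation.
d_crit = min(d for d in roots if d > 0)
k_check = k0 + k1 * d_crit + k2 * d_crit**2   # equals 1 to round-off
```

    Because the coefficients come from a single statistical run, their uncertainties propagate into d_crit, which is why the paper discusses accuracy versus perturbation order.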

  7. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  8. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous-energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous-energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well ...

  9. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
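    The weighted-estimator idea behind such schemes can be illustrated with ordinary importance sampling, which is far simpler than SAMC itself; the integrand and the Cauchy proposal below are illustrative choices, not from the paper.

```python
import math
import random

random.seed(4)

# Weighted Monte Carlo integration of f(x) = exp(-x**2 / 2) over the real
# line (true value sqrt(2*pi)), drawing from a heavier-tailed standard
# Cauchy proposal and attaching importance weights f(x) / g(x).
def cauchy_sample():
    return math.tan(math.pi * (random.random() - 0.5))

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

N = 200_000
total = 0.0
for _ in range(N):
    x = cauchy_sample()
    total += math.exp(-0.5 * x * x) / cauchy_pdf(x)

estimate = total / N   # approaches sqrt(2*pi) ~ 2.5066
```

    SAMC generalizes this picture: the weights are adapted dynamically during the run so that rugged energy landscapes are sampled efficiently, rather than being fixed by a hand-chosen proposal.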

  11. Monte Carlo Simulation of stepping source in afterloading intracavitary brachytherapy for GZP6 unit

    International Nuclear Information System (INIS)

    Toossi, M.T.B.; Abdollahi, M.; Ghorbani, M.

    2010-01-01

    Full text: A stepping source in brachytherapy systems is used to treat a target lesion longer than the effective treatment length of the source. Dose calculation accuracy plays a vital role in the outcome of brachytherapy treatment. In this study, the stepping source (channel 6) of the GZP6 brachytherapy unit was simulated by Monte Carlo simulation and a matrix shift method. The stepping source of GZP6 was simulated with the Monte Carlo MCNPX code. A mesh tally (type I) was employed for absorbed dose calculation in a cylindrical water phantom. 5 × 10⁸ photon histories were scored and a 0.2% statistical uncertainty was obtained in the Monte Carlo calculations. Dose distributions were obtained by the matrix shift method for esophageal cancer tumor lengths of 8 and 10 cm. Isodose curves produced by simulation and the TPS were superimposed to estimate the differences. Results: Comparison of Monte Carlo and TPS dose distributions shows that in the longitudinal direction (source movement direction) the Monte Carlo and TPS dose distributions are comparable. In the transverse direction, dose differences of 7 and 5% were observed for esophageal tumor lengths of 8 and 10 cm, respectively. Conclusions: Although the results show that the maximum difference between Monte Carlo and TPS calculations is about 7%, considering that the certified activity is given with ±10% uncertainty, an error of the order of 20% for the Monte Carlo calculation would be reasonable. It can be suggested that the accuracy of the dose distribution produced by the TPS is acceptable for clinical applications. (author)

  12. Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation

    International Nuclear Information System (INIS)

    Wulff, Joerg

    2010-01-01

    Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers that are used for dose measurement. The procedures of correction are based on the cavity theory of Spencer-Attix and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine due to their magnitude of only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to the non-reference conditions of modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of medical physics. As a suitable tool for calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which in principle enable the accurate calculation of ionization chamber response. Still, their bare use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment.

  13. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
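    Of the features listed, variance estimation via batching is easy to sketch in isolation: histories are grouped into batches and the tally uncertainty is estimated from the spread of the batch means. The slab-transmission tally below is an illustrative stand-in, not the multigroup code described in the abstract.

```python
import math
import random

random.seed(5)

# Illustrative tally: fraction of particles whose exponential path length
# exceeds one mean free path (transmission through a 1-mfp slab).
def transmitted(n):
    return sum(1 for _ in range(n) if random.expovariate(1.0) > 1.0) / n

B = 50          # number of batches
n_per = 2_000   # histories per batch
means = [transmitted(n_per) for _ in range(B)]

# Batch statistics: the tally is the mean of batch means, and its
# one-sigma uncertainty follows from their sample variance.
tally = sum(means) / B
var_batch = sum((m - tally) ** 2 for m in means) / (B - 1)
sigma = math.sqrt(var_batch / B)
```

    Batching also suits vector and parallel machines well, since each batch is an independent block of work.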

  14. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  15. Monte Carlo perturbation theory in neutron transport calculations

    International Nuclear Information System (INIS)

    Hall, M.C.G.

    1980-01-01

    The need to obtain sensitivities in complicated geometrical configurations has resulted in the development of Monte Carlo sensitivity estimation. A new method has been developed to calculate energy-dependent sensitivities of any number of responses in a single Monte Carlo calculation with a very small time penalty. This estimation typically increases the tracking time per source particle by about 30%. The method of estimation is explained. Sensitivities obtained are compared with those calculated by discrete ordinates methods. Further theoretical developments, such as second-order perturbation theory and application to k_eff calculations, are discussed. The application of the method to uncertainty analysis and to the analysis of benchmark experiments is illustrated. 5 figures
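One common way to score a sensitivity within a single Monte Carlo run (the paper's exact scheme is not reproduced here) is differential operator sampling: multiply each tally contribution by the derivative of the log of the sampling density. A minimal sketch for the transmission T = exp(-S*x) through depth x, where free flights have pdf p(d) = S*exp(-S*d) and the score factor is d(ln p)/dS = 1/S - d:

```python
import numpy as np

rng = np.random.default_rng(1)

S, x, n = 1.0, 2.0, 1_000_000
d = rng.exponential(scale=1.0 / S, size=n)   # free-flight distances
transmitted = d > x                           # transmission estimator

# Score the estimator weighted by d(ln p)/dS in the same run.
dT_dS_mc = float(np.mean(transmitted * (1.0 / S - d)))
dT_dS_exact = float(-x * np.exp(-S * x))      # analytic dT/dS
```

The single set of histories thus yields both the response and its sensitivity, which is why the time penalty of such schemes can be small.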

  16. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and Importance sampling (exponential b...
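Two of the sampling techniques listed above can be sketched in a few lines. Both examples are illustrative choices of target density, not drawn from the lecture notes themselves:

```python
import numpy as np

rng = np.random.default_rng(2)

# Inversion: invert the CDF. For an exponential with rate lam,
# F^{-1}(u) = -ln(1 - u) / lam.
def sample_exponential(lam, n):
    return -np.log1p(-rng.random(n)) / lam

# Rejection: sample f(x) = 2x on [0, 1] under a uniform envelope g = 1
# with bound M = 2; accept x with probability f(x) / (M * g(x)) = x.
def sample_linear(n):
    samples = []
    while len(samples) < n:
        x = rng.random(n)
        samples.extend(x[rng.random(n) < x])   # accept-reject step
    return np.array(samples[:n])

exp_mean = float(sample_exponential(2.0, 200_000).mean())   # expect 1/lam = 0.5
lin_mean = float(sample_linear(200_000).mean())             # expect 2/3
```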

  17. The determination of beam quality correction factors: Monte Carlo simulations and measurements.

    Science.gov (United States)

    González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R

    2009-08-07

    Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose at a user's beam contains the well-known beam quality correction factor that is required whenever the quality of radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors which refer to 60Co gamma radiation used as the calibration quality. They have been calculated for a series of ionization chambers and radiation qualities based on formulae which are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are presented: Monte Carlo simulation using the EGSnrc system, and an experimental method based on a comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers at several radiation beams. Four chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at PTB, a Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results, we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulations with an uncertainty at least comparable to that given in current dosimetry protocols.
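When two independent determinations with the quoted relative uncertainties (1% and 0.5%) are compared as a ratio, their uncertainties combine roughly in quadrature. A hedged numerical check of this, with illustrative numbers rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1_000_000
mc_calc = rng.normal(1.00, 0.010, n)    # Monte Carlo value, 1% rel. unc.
measured = rng.normal(1.00, 0.005, n)   # measured value, 0.5% rel. unc.

# Spread of the ratio reproduces the quadrature sum of the inputs.
ratio_rel_std = float((mc_calc / measured).std())
quadrature = float(np.hypot(0.010, 0.005))   # about 0.0112
```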

  18. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  19. Propagation of cross section uncertainties in combined Monte Carlo neutronics and burnup calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kuijper, J.C.; Oppe, J.; Klein Meulekamp, R.; Koning, H. [NRG - Fuels, Actinides and Isotopes group, Petten (Netherlands)

    2005-07-01

    Some years ago a methodology was developed at NRG for the calculation of 'density-to-density' and 'one-group cross section-to-density' sensitivity matrices and covariance matrices for final nuclide densities for burnup schemes consisting of multiple sets of flux/spectrum and burnup calculations. The applicability of the methodology was then demonstrated by calculations of BR3 MOX pin irradiation experiments employing multi-group cross section uncertainty data from the EAF4 data library. A recent development is the extension of this methodology to enable its application in combination with the OCTOPUS-MCNP-FISPACT/ORIGEN Monte Carlo burnup scheme. This required some extensions to the sensitivity matrix calculation tool CASEMATE. The extended methodology was applied to the 'HTR Plutonium Cell Burnup Benchmark' to calculate the uncertainties (covariances) in the final densities, insofar as these uncertainties are caused by uncertainties in cross sections. Up to 600 MWd/kg these uncertainties are larger than the differences between the code systems. However, it should be kept in mind that the calculated uncertainties are based on EAF4 uncertainty data. It is not clear beforehand what a proper set of associated (MCNP) cross sections and covariances would yield in terms of final uncertainties in calculated densities. This will be investigated, by the same formalism, once these data become available. It should be noted that the studies performed to date are mainly concerned with the influence of uncertainties in cross sections. The influence of uncertainties in the decay constants, although included in the formalism, is not considered further. Also the influence of other uncertainties (such as geometrical modelling approximations) has been left out of consideration for the time being. (authors)
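The propagation the abstract describes follows the standard "sandwich rule": uncertainties in cross sections (covariance C_sigma) map to uncertainties in final nuclide densities through a sensitivity matrix S, via C_N = S C_sigma S^T. A minimal sketch with illustrative matrices (not CASEMATE output):

```python
import numpy as np

# Sensitivities dN_i/dsigma_j of two final densities to two cross sections
# (illustrative values).
S = np.array([[-0.8,  0.1],
              [ 0.3, -0.5]])

# Uncorrelated 5% and 2% (absolute) cross-section uncertainties.
C_sigma = np.diag([0.05**2, 0.02**2])

C_N = S @ C_sigma @ S.T            # covariance of the final densities
sigma_N = np.sqrt(np.diag(C_N))    # standard uncertainties of the densities
```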

  20. Propagation of cross section uncertainties in combined Monte Carlo neutronics and burnup calculations

    International Nuclear Information System (INIS)

    Kuijper, J.C.; Oppe, J.; Klein Meulekamp, R.; Koning, H.

    2005-01-01

    Some years ago a methodology was developed at NRG for the calculation of 'density-to-density' and 'one-group cross section-to-density' sensitivity matrices and covariance matrices for final nuclide densities for burnup schemes consisting of multiple sets of flux/spectrum and burnup calculations. The applicability of the methodology was then demonstrated by calculations of BR3 MOX pin irradiation experiments employing multi-group cross section uncertainty data from the EAF4 data library. A recent development is the extension of this methodology to enable its application in combination with the OCTOPUS-MCNP-FISPACT/ORIGEN Monte Carlo burnup scheme. This required some extensions to the sensitivity matrix calculation tool CASEMATE. The extended methodology was applied to the 'HTR Plutonium Cell Burnup Benchmark' to calculate the uncertainties (covariances) in the final densities, insofar as these uncertainties are caused by uncertainties in cross sections. Up to 600 MWd/kg these uncertainties are larger than the differences between the code systems. However, it should be kept in mind that the calculated uncertainties are based on EAF4 uncertainty data. It is not clear beforehand what a proper set of associated (MCNP) cross sections and covariances would yield in terms of final uncertainties in calculated densities. This will be investigated, by the same formalism, once these data become available. It should be noted that the studies performed to date are mainly concerned with the influence of uncertainties in cross sections. The influence of uncertainties in the decay constants, although included in the formalism, is not considered further. Also the influence of other uncertainties (such as geometrical modelling approximations) has been left out of consideration for the time being. (authors)

  1. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for estimating the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed, allowing accurate uncertainty quantification of the time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as the time-averaged u'u'. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and the covariance can be found. Applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of the systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
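The central claim that random errors always elevate the measured variance is easy to verify numerically: for additive independent noise, the measured variance equals the true variance plus the noise variance. A quick illustrative check (numbers are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

true_signal = rng.normal(0.0, 1.0, 500_000)   # 'true' fluctuations, var = 1
noise = rng.normal(0.0, 0.5, 500_000)         # random measurement error, var = 0.25

# The measured variance is inflated by the noise variance: 1.0 + 0.25.
measured_var = float((true_signal + noise).var())
```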

  2. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
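The equivalence described above, between a deterministic quantity and the expected behaviour of a stochastic system, is classically illustrated by estimating pi from the fraction of random points falling inside a quarter circle. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 1_000_000
x, y = rng.random(n), rng.random(n)

# P(point in quarter circle) = pi/4, so 4 * (hit fraction) estimates pi.
pi_estimate = 4.0 * float(np.mean(x * x + y * y < 1.0))
```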

  3. Mathematical modeling of a survey-meter used to measure radioactivity in human thyroids: Monte Carlo calculations of the device response and uncertainties

    Science.gov (United States)

    Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André

    2012-01-01

    This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport has been used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of the population: newborns; children aged 1 yr, 5 yr, 10 yr, 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass, and the statistical uncertainty of the Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to be from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. Positioning errors of the detector during measurements were found to bias the estimated calibration factors mainly in one direction: deviations of the device position from the proper measurement geometry were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroidal content and, consequently, the thyroid dose estimates that are derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289

  4. A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty

    Science.gov (United States)

    Madani, Kaveh; Lund, Jay R.

    2011-05-01

    Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used to prescribe non-dominated solutions and also to predict the outcome of a decision making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, often neglected by other methods which assume perfect cooperation among decision makers. To deal with the uncertainty in input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results include possible effects of uncertainty in input variables on outcomes. The method can handle multi-criteria, multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, development of a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, reducing the computational burden substantially. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
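The mapping from a stochastic problem to many deterministic games can be sketched as follows: perturb the uncertain payoffs many times, solve each realization with a stability definition (here pure-strategy Nash, one of several definitions the paper allows), and tally how often each outcome is stable. The game and noise level below are illustrative, not the Delta case study:

```python
import numpy as np

rng = np.random.default_rng(6)

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game
    (A: row player's payoffs, B: column player's payoffs)."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

# Illustrative 2x2 base game with a strict equilibrium at (1, 1).
A0 = np.array([[3.0, 0.0], [5.0, 1.0]])
B0 = np.array([[3.0, 5.0], [0.0, 1.0]])

counts, n_draws = {}, 2000
for _ in range(n_draws):
    A = A0 + rng.normal(0.0, 0.1, A0.shape)   # uncertain payoffs
    B = B0 + rng.normal(0.0, 0.1, B0.shape)
    for eq in pure_nash(A, B):
        counts[eq] = counts.get(eq, 0) + 1

freq = {eq: c / n_draws for eq, c in counts.items()}   # outcome likelihoods
```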

  5. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various methods for generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained
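The Metropolis method mentioned above can be sketched in a few lines: propose a random-walk move and accept it with probability min(1, target(x')/target(x)). This illustrative sampler targets a standard normal (not one of the paper's models):

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis(log_target, x0, n_steps, step=1.0):
    """Random-walk Metropolis sampler for an unnormalized log density."""
    samples = np.empty(n_steps)
    x, lp = x0, log_target(x0)
    for i in range(n_steps):
        xp = x + rng.normal(0.0, step)          # propose a move
        lpp = log_target(xp)
        if np.log(rng.random()) < lpp - lp:     # accept/reject
            x, lp = xp, lpp
        samples[i] = x
    return samples

# Target: standard normal, log density -x^2/2 up to a constant.
s = metropolis(lambda x: -0.5 * x * x, 0.0, 200_000)
```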

  6. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  7. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications of Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
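The batch idea can be illustrated with a toy model: each "depletion run" burns a single nuclide exponentially with a statistically noisy one-group cross section, and the overall statistical error is read from the spread of independent runs. This is a hedged sketch of the concept, not the CASMO-5 machinery:

```python
import numpy as np

rng = np.random.default_rng(8)

def one_depletion_run(n_steps=10, dt=0.1, flux=1.0, sigma_true=1.0, rel_err=0.02):
    """Toy depletion: N *= exp(-sigma*phi*dt) per step, with Monte Carlo
    noise on the cross section propagating into the final density."""
    n_density = 1.0
    for _ in range(n_steps):
        sigma = rng.normal(sigma_true, rel_err * sigma_true)  # statistical noise
        n_density *= np.exp(-sigma * flux * dt)
    return n_density

# Batch method: independent runs give both the mean and its overall error.
runs = np.array([one_depletion_run() for _ in range(200)])
mean = float(runs.mean())                                  # expect about exp(-1)
stderr = float(runs.std(ddof=1) / np.sqrt(len(runs)))      # propagated error
```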

  8. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  9. Selection of important Monte Carlo histories

    International Nuclear Information System (INIS)

    Egbert, Stephen D.

    1987-01-01

    The 1986 Dosimetry System (DS86) for Japanese A-bomb survivors uses information describing the behavior of individual radiation particles, simulated by Monte Carlo methods, to calculate the transmission of radiation into structures and, thence, into humans. However, there are practical constraints on the number of such particle 'histories' that may be used. First, the number must be sufficiently high to provide adequate statistical precision for any calculated quantity of interest. For integral quantities, such as dose or kerma, statistical precision of approximately 5% (standard deviation) is required to ensure that statistical uncertainties are not a major contributor to the overall uncertainty of the transmitted value. For differential quantities, such as scalar fluence spectra, 10 to 15% standard deviation on individual energy groups is adequate. Second, the number of histories cannot be so large as to require an unacceptably large amount of computer time to process the entire survivor data base. Given that there are approx. 30,000 survivors, each having 13 or 14 organs of interest, the number of histories per organ must be constrained to several tens of thousands at the very most. Selection and use of the most important Monte Carlo leakage histories from among all those calculated allows the creation of an efficient house and organ radiation transmission system for use at RERF. While attempts have been made during the adjoint Monte Carlo calculation to bias the histories toward an efficient dose estimate, this effort has been far from satisfactory. Many of the adjoint histories on a typical leakage tape either start in an energy group in which there is very little kerma or dose, or leak into an energy group with very little free-field coupling. By knowing the typical free-field fluence and the fluence-to-dose factors with which the leaking histories will be used, one can select histories from a leakage tape that will contribute to dose

  10. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh

    2016-09-16

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
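Two-phase (nested) Monte Carlo sampling separates the two kinds of uncertainty: an outer loop samples epistemic parameters (here an interval-valued mean payoff) and an inner loop samples aleatory variability around each realization. All distributions below are illustrative stand-ins, not the paper's cyber-system model:

```python
import numpy as np

rng = np.random.default_rng(9)

outer_means = []
for _ in range(200):                        # outer (epistemic) loop
    mu = rng.uniform(8.0, 12.0)             # interval knowledge of the mean payoff
    payoffs = rng.normal(mu, 2.0, 1000)     # inner (aleatory) loop
    outer_means.append(float(payoffs.mean()))

# The spread of inner-loop means reflects epistemic uncertainty: report
# it as a percentile interval rather than a single point estimate.
lo, hi = np.percentile(outer_means, [5, 95])
```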

  11. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  12. Comparison of the GUM and Monte Carlo methods on the flatness uncertainty estimation in coordinate measuring machine

    Directory of Open Access Journals (Sweden)

    Jalid Abdelilah

    2016-01-01

    In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently using software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and not taking this parameter into account can therefore sometimes lead to aberrant decisions. The determination of the measurement uncertainty on a CMM is a complex task owing to the variety of influencing factors. Through this study, we aim to check whether the uncertainty propagation model developed according to the guide to the expression of uncertainty in measurement (GUM) approach is valid; we present here a comparison of the GUM and Monte Carlo methods. This comparison is made to estimate the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
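For a linear measurement model the two approaches being compared should agree closely: GUM combines standard uncertainties by first-order propagation, while the Monte Carlo method samples the inputs and takes the standard deviation of the output. A minimal sketch on a toy model y = x1 + x2 (illustrative, not the CMM flatness model of the paper):

```python
import numpy as np

rng = np.random.default_rng(10)

u1, u2 = 0.010, 0.020            # standard uncertainties of the two inputs

# GUM law of propagation for a linear model: quadrature sum.
u_gum = float(np.hypot(u1, u2))

# Monte Carlo propagation: sample the inputs, evaluate, take the spread.
x1 = rng.normal(5.0, u1, 1_000_000)
x2 = rng.normal(3.0, u2, 1_000_000)
u_mc = float((x1 + x2).std())
```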

  13. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S_N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  14. Total Monte-Carlo method applied to the assessment of uncertainties in a reactivity-initiated accident

    Energy Technology Data Exchange (ETDEWEB)

    Cruz, D.F. da; Rochman, D.; Koning, A.J. [Nuclear Research and Consultancy Group NRG, Petten (Netherlands)

    2014-07-01

    The Total Monte-Carlo (TMC) method has been applied extensively since 2008 to propagate the uncertainties in nuclear data for reactor parameters and fuel inventory, and for several types of advanced nuclear systems. The analyses have been performed considering different levels of complexity, ranging from a single fuel rod to a full 3-D reactor core at steady-state. The current work applies the TMC method to a full 3-D pressurized water reactor core model under steady-state and transient conditions, considering thermal-hydraulic feedback. As a transient scenario, the study focused on a reactivity-initiated accident, namely a control rod ejection accident initiated by a mechanical failure of the control rod drive mechanism. The uncertainties in the main reactor parameters due to variations in nuclear data for the isotopes U-235, U-238 and Pu-239, and in thermal scattering data for H-1 in water, were quantified. (author)
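The TMC principle is to draw many random realizations of the nuclear data, run the model once per realization, and read the output uncertainty directly from the spread of results. A toy sketch with an illustrative one-group model and made-up data uncertainties (nothing here is the paper's PWR model):

```python
import numpy as np

rng = np.random.default_rng(11)

def k_inf(sigma_f, sigma_a, nu=2.43):
    """Toy multiplication factor: nu * Sigma_f / Sigma_a."""
    return nu * sigma_f / sigma_a

# TMC loop: one model evaluation per random nuclear-data realization.
samples = [k_inf(rng.normal(1.0, 0.01),      # fission cross section, 1% unc.
                 rng.normal(2.4, 0.02))      # absorption cross section
           for _ in range(20_000)]

k_mean, k_std = float(np.mean(samples)), float(np.std(samples))
```

The output standard deviation k_std is the nuclear-data-induced uncertainty, with no covariance algebra required; the price is one full calculation per sample.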

  15. Monte Carlo calculations for r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Mumpower, Matthew Ryan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-12

    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  16. Quasi-random Monte Carlo application in CGE systematic sensitivity analysis

    NARCIS (Netherlands)

    Chatzivasileiadis, T.

    2017-01-01

    The uncertainty and robustness of Computable General Equilibrium models can be assessed by conducting a Systematic Sensitivity Analysis. Different methods have been used in the literature for SSA of CGE models such as Gaussian Quadrature and Monte Carlo methods. This paper explores the use of

  17. Comparative Criticality Analysis of Two Monte Carlo Codes on Centrifugal Atomizer: MCNP5 and SCALE

    International Nuclear Information System (INIS)

    Kang, H-S; Jang, M-S; Kim, S-R; Park, J-M; Kim, K-N

    2015-01-01

    There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems, as a main analysis code. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analysis for licensing evaluation and is used widely in the world. We performed a validation test of MCNP5 and a comparative analysis of the Monte Carlo codes MCNP5 and SCALE in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing uncertainty analysis

  18. Comparative Criticality Analysis of Two Monte Carlo Codes on Centrifugal Atomizer: MCNP5 and SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H-S; Jang, M-S; Kim, S-R [NESS, Daejeon (Korea, Republic of); Park, J-M; Kim, K-N [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems, as a main analysis code. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analysis for licensing evaluation and is used widely in the world. We performed a validation test of MCNP5 and a comparative analysis of the Monte Carlo codes MCNP5 and SCALE in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing uncertainty analysis.

  19. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
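The insensitivity to the "curse of dimensionality" mentioned above can be illustrated directly: the standard error of a plain Monte Carlo integral scales as 1/sqrt(N) regardless of the number of variables, whereas a tensor-product grid would need m**d points. A minimal sketch (the integrand and sample sizes are arbitrary choices):

```python
import random

# Monte Carlo integration over the unit cube [0,1]^d: the 1/sqrt(N)
# error does not grow with d, unlike grid quadrature (m**d points).
random.seed(0)

def mc_integral(f, d, n):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^d."""
    return sum(f([random.random() for _ in range(d)]) for _ in range(n)) / n

# The integral of (x1 + ... + xd)/d over the cube is exactly 0.5 in any d,
# and the same 20,000 samples give comparable accuracy at d=2 and d=50.
estimates = {d: mc_integral(lambda x: sum(x) / len(x), d, 20_000)
             for d in (2, 10, 50)}
```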

  20. Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI

    International Nuclear Information System (INIS)

    Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.

    2008-01-01

    A method for simultaneously measuring whole-field in-plane displacements by using optical fiber, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we can obtain quantitative whole-field displacement data. We found the uncertainty associated with the phases by means of a Monte Carlo-based technique

  1. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.

  2. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously, and Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
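One of the basic sampling methods alluded to above is inverse-transform sampling: a uniform pseudo-random number is mapped through the inverse cumulative distribution function of the target distribution. A minimal sketch for the exponential distribution (the rate and sample count are illustrative):

```python
import math
import random

# Inverse-transform sampling: map u ~ Uniform(0,1) through the inverse
# CDF of the target distribution. For Exp(rate), x = -ln(1-u)/rate.
random.seed(42)

def sample_exponential(rate):
    u = random.random()
    return -math.log(1.0 - u) / rate

samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should approach 1/rate = 0.5
```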

  3. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

    Full Text Available An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.
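For orientation, the plain Monte Carlo localization loop that CEAMCL builds on can be sketched as below. This is a deliberately simplified 1-D toy (Gaussian motion and measurement models, fixed sample size), not the coevolution algorithm itself:

```python
import math
import random

# Simplified 1-D Monte Carlo localization loop (not CEAMCL): predict
# particles with a noisy motion model, weight them by the measurement
# likelihood, then resample proportionally to weight.
random.seed(7)

def mcl_step(particles, control, measurement, noise=0.1):
    moved = [p + control + random.gauss(0, noise) for p in particles]
    weights = [math.exp(-((measurement - p) ** 2) / (2 * noise ** 2))
               for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.0, 10.0) for _ in range(2000)]
true_pose = 3.0
for _ in range(5):               # robot moves +1 per step and is observed
    true_pose += 1.0
    particles = mcl_step(particles, control=1.0, measurement=true_pose)
estimate = sum(particles) / len(particles)   # should be near true_pose = 8.0
```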

  4. Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.

    Science.gov (United States)

    Chow, James C L; Leung, Michael K K

    2008-06-01

    The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), defined as the ratio of the dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The trend of decreasing EBF with increasing electron energy can be explained by the fact that the small MOSFET dosimeter, made mainly of epoxy and silicon, attenuated not only the electron fluence of the beam from upstream but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is of the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out, and their deviations from the Monte Carlo results were within +/- 2%. Spectra of the energy deposited by the backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body is present or absent in the simulation, deviations of the electron energy spectra with and without the lead decrease with increasing electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduction of the MOSFET response due to stronger
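The comparison of the EBF deviation against the simulation's statistical uncertainty rests on standard error propagation for a ratio of two independently estimated doses: the relative uncertainties add in quadrature. A sketch with illustrative numbers in the ~1% regime quoted above (the dose values are made up, not from the study):

```python
# Standard error propagation for a ratio of two independently simulated
# doses: the relative uncertainties add in quadrature. Dose values and
# uncertainties below are illustrative, not from the study.

def ratio_with_uncertainty(d_lead, sigma_lead, d_open, sigma_open):
    """EBF = d_lead / d_open with its propagated 1-sigma uncertainty."""
    ebf = d_lead / d_open
    rel = ((sigma_lead / d_lead) ** 2 + (sigma_open / d_open) ** 2) ** 0.5
    return ebf, ebf * rel

ebf, sigma_ebf = ratio_with_uncertainty(1.10, 0.011, 1.00, 0.010)
# ~1% statistics on each dose gives ~1.4% on the ratio
```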

  5. Uncertainty evaluation of the kerma in the air, related to the active volume in the ionization chamber of concentric cylinders, by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Lo Bianco, A.S.; Oliveira, H.P.S.; Peixoto, J.G.P.

    2009-01-01

    To establish the primary standard of the quantity air kerma for X-rays between 10 and 50 keV, the National Metrology Laboratory of Ionizing Radiations (LNMRI) must evaluate all the measurement uncertainties associated with the Victoreen chamber. The uncertainty in air kerma resulting from inaccuracy in the active volume of the chamber was therefore evaluated, using Monte Carlo calculation as a tool through the PENELOPE software

  6. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  7. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  8. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
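The telescoping construction described above can be sketched on a toy problem: estimating the expected terminal value of a geometric-Brownian-motion Euler scheme, where each level corrects the one below it using coupled coarse/fine paths driven by the same random increments. All parameters and sample sizes are illustrative:

```python
import math
import random

# Toy multilevel Monte Carlo: estimate E[X_1] for an Euler scheme of
# geometric Brownian motion dX = mu*X dt + sigma*X dW. Level l uses
# 2**l steps; corrections P_l - P_{l-1} couple fine and coarse paths
# through the same Brownian increments. All constants are illustrative.
random.seed(3)
MU, SIGMA = 0.05, 0.2

def euler_gbm(increments, h):
    x = 1.0
    for dw in increments:
        x *= 1.0 + MU * h + SIGMA * dw
    return x

def level_sample(level):
    """One draw of P_0 (level 0) or of the correction P_l - P_{l-1}."""
    n_fine = 2 ** level
    h_fine = 1.0 / n_fine
    dw = [random.gauss(0.0, math.sqrt(h_fine)) for _ in range(n_fine)]
    fine = euler_gbm(dw, h_fine)
    if level == 0:
        return fine
    # the coarse path reuses the same noise, summed pairwise
    dw_coarse = [dw[2 * i] + dw[2 * i + 1] for i in range(n_fine // 2)]
    return fine - euler_gbm(dw_coarse, 2.0 * h_fine)

def mlmc(levels, samples_per_level):
    return sum(sum(level_sample(l) for _ in range(n)) / n
               for l, n in zip(levels, samples_per_level))

estimate = mlmc(levels=[0, 1, 2, 3],
                samples_per_level=[20_000, 5_000, 2_000, 1_000])
# exact E[X_1] for the SDE is exp(MU) ~ 1.051; the Euler bias is small
```

Because the coupled corrections have small variance, the expensive fine levels need far fewer samples than level 0, which is the source of the cost reduction.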

  9. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip-rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e. a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the greatest influence on the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
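The sensitivity approach reads, in outline, as follows: draw the slip rate from an assumed distribution, push each draw through the hazard model, and summarize the spread of the resulting PGA. The sketch below uses a made-up linear stand-in for the hazard function and hypothetical distribution parameters, purely to show the mechanics of the COV calculation:

```python
import random
import statistics

# Monte Carlo sensitivity sketch: sample the fault slip rate from an
# assumed distribution, map each draw to PGA through a made-up linear
# stand-in hazard function, and summarize mean, sigma and COV.
random.seed(11)

def toy_hazard_pga(slip_rate_mm_yr):
    # hypothetical monotone slip-rate -> PGA (g) relation, NOT a real model
    return 0.4 + 0.05 * slip_rate_mm_yr

draws = [toy_hazard_pga(random.gauss(4.0, 1.0)) for _ in range(50_000)]
pga_mean = statistics.fmean(draws)
pga_sigma = statistics.stdev(draws)
cov_percent = 100.0 * pga_sigma / pga_mean   # coefficient of variation
```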

  10. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating the k eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β eff , l eff , τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  11. SU-F-T-122: 4D and 5D Proton Dose Evaluation with Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Titt, U; Mirkovic, D; Yepes, P; Liu, A; Peeler, C; Randenyia, S; Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: We evaluated uncertainties in therapeutic proton doses of a lung treatment, taking into account intra-fractional geometry changes, such as breathing, and inter-fractional changes, such as tumor shrinkage and weight loss. Methods: A Monte Carlo study was performed using four dimensional CT image sets (4DCTs) and weekly repeat imaging (5DCTs) to compute fixed RBE (1.1) and variable RBE weighted dose in an actual lung treatment geometry. The MC2 Monte Carlo system was employed to simulate proton energy deposition and LET distributions according to a thoracic cancer treatment plan developed with a 3D-CT in a commercial treatment planning system, as well as in each of the phases of 4DCT sets which were recorded weekly throughout the course of the treatment. A cumulative dose distribution in relevant structures was computed and compared to the predictions of the treatment planning system. Results: Using the Monte Carlo method, dose deposition estimates with the lowest possible uncertainties were produced. Comparison with treatment planning predictions indicates that significant uncertainties may be associated with therapeutic lung dose prediction from treatment planning systems, depending on the magnitude of inter- and intra-fractional geometry changes. Conclusion: As this is just a case study, a more systematic investigation accounting for a cohort of patients is warranted; however, this is less practical because Monte Carlo simulations of such cases require enormous computational resources. Hence our study and any future case studies may serve as validation/benchmarking data for faster dose prediction engines, such as the track repeating algorithm, FDC.

  12. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    Science.gov (United States)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods-the particle Metropolis-Hastings algorithm-which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods-including particle Metropolis-Hastings-to a large group of users without requiring them to know all the underlying mathematical details.
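A compact sketch of the particle Metropolis-Hastings idea is given below for a linear-Gaussian state-space model: a bootstrap particle filter supplies a likelihood estimate that drives a random-walk Metropolis chain over the single parameter a (flat prior; the model, noise levels and tuning constants are all illustrative, not from the tutorial):

```python
import math
import random

# Sketch of particle Metropolis-Hastings (PMMH) for the linear-Gaussian
# model x_t = a*x_{t-1} + v_t, y_t = x_t + e_t. A bootstrap particle
# filter estimates the likelihood; a random-walk Metropolis chain with
# a flat prior explores the parameter a. Constants are illustrative.
random.seed(5)
SX = SY = 0.5

def simulate(a, T):
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + random.gauss(0.0, SX)
        ys.append(x + random.gauss(0.0, SY))
    return ys

def pf_loglik(a, ys, n=100):
    """Bootstrap particle filter estimate of log p(ys | a)."""
    parts = [0.0] * n
    ll = 0.0
    for y in ys:
        parts = [a * p + random.gauss(0.0, SX) for p in parts]
        # Gaussian observation weights, floored to avoid total underflow
        ws = [max(math.exp(-((y - p) ** 2) / (2.0 * SY * SY)), 1e-12)
              for p in parts]
        ll += math.log(sum(ws) / n)
        parts = random.choices(parts, weights=ws, k=n)
    return ll

ys = simulate(a=0.8, T=40)
a_cur, ll_cur, chain = 0.5, pf_loglik(0.5, ys), []
for _ in range(200):
    a_prop = a_cur + random.gauss(0.0, 0.1)
    ll_prop = pf_loglik(a_prop, ys)
    if math.log(random.random()) < ll_prop - ll_cur:   # accept/reject
        a_cur, ll_cur = a_prop, ll_prop
    chain.append(a_cur)
posterior_mean = sum(chain[50:]) / len(chain[50:])     # discard burn-in
```

Despite the likelihood being only an estimate from a finite number of particles, the chain targets the exact posterior, which is the convergence guarantee the abstract highlights.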

  13. Uncertainties in the production of p nuclides in thermonuclear supernovae determined by Monte Carlo variations

    Science.gov (United States)

    Nishimura, N.; Rauscher, T.; Hirschi, R.; Murphy, A. St J.; Cescutti, G.; Travaglio, C.

    2018-03-01

    Thermonuclear supernovae originating from the explosion of a white dwarf accreting mass from a companion star have been suggested as a site for the production of p nuclides. Such nuclei are produced during the explosion, in layers enriched with seed nuclei coming from prior strong s processing. These seeds are transformed into proton-richer isotopes mainly by photodisintegration reactions. Several thousand trajectories from a 2D explosion model were used in a Monte Carlo approach. Temperature-dependent uncertainties were assigned individually to thousands of rates varied simultaneously in post-processing in an extended nuclear reaction network. The uncertainties in the final nuclear abundances originating from uncertainties in the astrophysical reaction rates were determined. In addition to the 35 classical p nuclides, abundance uncertainties were also determined for the radioactive nuclides 92Nb, 97, 98Tc, 146Sm, and for the abundance ratios Y(92Mo)/Y(94Mo), Y(92Nb)/Y(92Mo), Y(97Tc)/Y(98Ru), Y(98Tc)/Y(98Ru), and Y(146Sm)/Y(144Sm), important for Galactic Chemical Evolution studies. Uncertainties found were generally lower than a factor of 2, although most nucleosynthesis flows mainly involve predicted rates with larger uncertainties. The main contribution to the total uncertainties comes from a group of trajectories with high peak density originating from the interior of the exploding white dwarf. The distinction between low-density and high-density trajectories allows more general conclusions to be drawn, also applicable to other simulations of white dwarf explosions.

  14. Solution weighting for the SAND-II Monte Carlo code

    International Nuclear Information System (INIS)

    Oster, C.A.; McElroy, W.N.; Simons, R.L.; Lippincott, E.P.; Odette, G.R.

    1976-01-01

    Modifications to the SAND-II Error Analysis Monte Carlo code to include solution weighting based on input data uncertainties have been made and are discussed together with background information on the SAND-II algorithm. The new procedure permits input data having smaller uncertainties to have a greater influence on the solution spectrum than do the data having larger uncertainties. The results of an indepth study to find a practical procedure and the first results of its application to three important Interlaboratory LMFBR Reaction Rate (ILRR) program benchmark spectra (CFRMF, ΣΣ, and 235 U fission) are discussed
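The weighting principle, that inputs with smaller uncertainties pull harder on the solution, is the familiar inverse-variance idea. The sketch below shows that principle in its simplest form and is a stand-in, not the actual SAND-II weighting scheme:

```python
# Inverse-variance weighting: each input is weighted by 1/sigma**2, so
# data with smaller uncertainties dominate the combined solution. This
# is the textbook principle only, not the actual SAND-II scheme.

def inverse_variance_combine(values, sigmas):
    weights = [1.0 / (s * s) for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    sigma = (1.0 / total) ** 0.5
    return mean, sigma

# a precise input (sigma=0.5) and a sloppy one (sigma=2.0)
mean, sigma = inverse_variance_combine([10.0, 12.0], [0.5, 2.0])
# the combined value sits close to the precise input
```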

  15. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M.; Valibeyova, G.; Jafarov, Y.D.; Musaeva, Sh.Z.

    2006-01-01

    Full text: The principle of analysis by the Monte Carlo method consists of choosing random values of the exposure dose rate coefficients and of the activity data within the bounds of their individual probability density distributions over the corresponding exposure dose rates. This procedure is repeated many times by computer, and the results of each round of calculations build up the overall probability density distribution of exposure dose rates. The Monte Carlo analysis can be carried out at the level of radiation-polluted soil categories. Monte Carlo analysis is useful for sensitivity analysis of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such insight can be valuable for defining key categories of radiation-polluted soil and for setting priorities in the use of resources to improve the reports. The relative uncertainty of radiation-polluted soil categories determined by Monte Carlo analysis can, where available, be applied using the larger divergence between the average value and a confidence limit when the bounds of the confidence interval are asymmetric. It is important to determine key categories of radiation-polluted soil in order to prioritize the resources available for report preparation and to prepare estimates for the most significant source categories. The notion of uncertainty in the reports also allows a threshold value to be set for a key source category, if necessary, to reflect exactly 90 percent of the uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently subject to disposal and utilization, were chosen from the analyzed samples of

  16. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)

    2006-11-15

    Full text: The principle of analysis by the Monte Carlo method consists of choosing random values of the exposure dose rate coefficients and of the activity data within the bounds of their individual probability density distributions over the corresponding exposure dose rates. This procedure is repeated many times by computer, and the results of each round of calculations build up the overall probability density distribution of exposure dose rates. The Monte Carlo analysis can be carried out at the level of radiation-polluted soil categories. Monte Carlo analysis is useful for sensitivity analysis of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such insight can be valuable for defining key categories of radiation-polluted soil and for setting priorities in the use of resources to improve the reports. The relative uncertainty of radiation-polluted soil categories determined by Monte Carlo analysis can, where available, be applied using the larger divergence between the average value and a confidence limit when the bounds of the confidence interval are asymmetric. It is important to determine key categories of radiation-polluted soil in order to prioritize the resources available for report preparation and to prepare estimates for the most significant source categories. The notion of uncertainty in the reports also allows a threshold value to be set for a key source category, if necessary, to reflect exactly 90 percent of the uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently subject to disposal and utilization, were chosen from the analyzed samples of

  17. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms, including by using the standard parallelization library MPI. The platforms used for the benchmark calculations were a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon), and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% in a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)

  18. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  19. Monte Carlo modeling of Standard Model multi-boson production processes for √s = 13 TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    We present the Monte Carlo (MC) setup used by ATLAS to model multi-boson processes in √s = 13 TeV proton-proton collisions. The baseline Monte Carlo generators are compared with each other in key kinematic distributions of the processes under study. Sample normalization and systematic uncertainties are discussed.

  20. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  1. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo treatment planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery of modern radiotherapy, including gantry rotation during irradiation. Three cornerstones of Monte Carlo treatment planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i); converting a CT scan of a patient to a Monte Carlo compliant phantom (ii); and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings, etc.) to a Monte Carlo input file (iii). A protocol

  2. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    Science.gov (United States)

    Holmes, Jesse Curtis

    established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
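The Monte Carlo sampling step for building a covariance matrix can be sketched generically: draw many perturbed versions of a reference spectrum, evaluate a vector of derived quantities for each draw, and form the sample covariance of those vectors. The "spectrum" and derived quantities below are toy placeholders, not the graphite phonon DOS or S(alpha, beta):

```python
import random

# Monte Carlo covariance generation: perturb a reference spectrum many
# times, evaluate a vector of derived quantities for each draw, and
# form the sample covariance matrix. The 'spectrum' and derived
# quantities are toy placeholders, not the graphite phonon DOS.
random.seed(9)

ref = [1.0, 2.0, 3.0, 2.0, 1.0]            # toy reference spectrum

def perturbed(spec, rel_sigma=0.05):
    """Independent 5% relative Gaussian perturbation of each point."""
    return [s * (1.0 + random.gauss(0.0, rel_sigma)) for s in spec]

def derived(spec):
    total = sum(spec)
    return [total, spec[2] / total]         # two toy derived quantities

samples = [derived(perturbed(ref)) for _ in range(20_000)]
m = len(samples[0])
means = [sum(s[i] for s in samples) / len(samples) for i in range(m)]
cov = [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples)
        / (len(samples) - 1) for j in range(m)] for i in range(m)]
```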

  3. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  4. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency
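
    The gap between an analog (direct) and a weighted estimator for a highly reliable system can be sketched with a toy failure model; the exponential lifetime, failure threshold and biased sampling rate below are illustrative assumptions, not the authors' system:

```python
import math
import random

def direct_mc(n, t=1e-3, seed=1):
    """Analog estimator: fraction of exponential(1) lifetimes below t."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.expovariate(1.0) < t) / n

def weighted_mc(n, t=1e-3, lam_b=1000.0, seed=1):
    """Weighted estimator: sample from a biased exponential(lam_b) density
    and score each failure with the likelihood ratio f(x)/g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_b)
        if x < t:
            total += math.exp(-x) / (lam_b * math.exp(-lam_b * x))
    return total / n

p_true = 1.0 - math.exp(-1e-3)   # exact failure probability, about 1e-3
p_w = weighted_mc(10_000)
```

    With 10,000 histories the analog estimator sees only about ten failures, while almost two thirds of the weighted histories score, which is what drives the smaller variance of the weighted estimator.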

  5. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
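
    The telescoping identity behind MLMC, E[P_L] = E[P_0] + Σ_{l=1..L} E[P_l − P_{l−1}], can be sketched on a toy problem; the geometric Brownian motion, Euler scheme and sample allocation below are illustrative assumptions rather than the paper's PDE setting:

```python
import math
import random

def mlmc_level(rng, l, n_samples, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Monte Carlo mean of the level-l correction P_l - P_{l-1} for E[S_T]
    of a geometric Brownian motion under Euler discretization; the fine and
    coarse paths share the same Brownian increments (P_{-1} := 0)."""
    nf = 2 ** l                      # fine steps at level l
    dt = T / nf
    total = 0.0
    for _ in range(n_samples):
        sf = sc = s0
        pair = []
        for _ in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf += mu * sf * dt + sigma * sf * dw
            pair.append(dw)
            if len(pair) == 2:       # one coarse step per two fine steps
                sc += mu * sc * (2 * dt) + sigma * sc * (pair[0] + pair[1])
                pair = []
        total += sf if l == 0 else sf - sc
    return total / n_samples

def mlmc_estimate(L=4, n0=20_000, seed=7):
    """Telescoping sum E[P_0] + corrections, with fewer samples on the
    more expensive fine levels (a simple geometric allocation)."""
    rng = random.Random(seed)
    return sum(mlmc_level(rng, l, max(200, n0 // 4 ** l)) for l in range(L + 1))

true_value = math.exp(0.05)   # E[S_T] = s0 * exp(mu * T)
```

    The corrections have rapidly shrinking variance, so most samples are spent on the cheap coarse level, which is the source of the computational saving the abstract refers to.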

  6. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  7. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed, and the evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is presented. The basic principles of simulating photon histories on a computer and reducing variance, together with current applications in Medical Physics, are discussed. (Author)
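
    The two classic Monte Carlo approaches for a one-dimensional integral with a known answer are usually the hit-or-miss and the sample-mean (crude) estimators; a minimal sketch, where the choice of ∫₀^π sin x dx = 2 as the test integral is ours, not necessarily the authors':

```python
import math
import random

def hit_or_miss(f, a, b, fmax, n, seed=0):
    """Hit-or-miss: fraction of random points under the curve, times the box area."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.uniform(0.0, fmax) < f(rng.uniform(a, b)))
    return (b - a) * fmax * hits / n

def sample_mean(f, a, b, n, seed=0):
    """Sample-mean (crude) estimator: (b - a) times the average of f at uniform points."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

# integral of sin(x) over [0, pi] is exactly 2
est_hit = hit_or_miss(math.sin, 0.0, math.pi, 1.0, 100_000)
est_mean = sample_mean(math.sin, 0.0, math.pi, 100_000)
```

    For the same number of samples the sample-mean estimator typically has the lower variance, since it uses the value of f rather than only a binary hit indicator.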

  8. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  9. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  10. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  11. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
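
    The "logarithmic derivative" idea in the direct approach can be sketched by fitting the slope of ln N(t) for a noisy, exponentially growing population; the synthetic data and noise model below are invented for illustration, and the paper's modified approach regresses α on k-eigenvalue solutions rather than on a time series:

```python
import math
import random

def estimate_alpha(times, populations):
    """Least-squares slope of ln N versus t, i.e. alpha in N(t) = N0 * exp(alpha * t)."""
    n = len(times)
    logs = [math.log(p) for p in populations]
    tbar = sum(times) / n
    ybar = sum(logs) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

rng = random.Random(3)
alpha_true = 2.0
ts = [0.1 * k for k in range(50)]
# exponentially growing population with multiplicative (lognormal) noise
ns = [100.0 * math.exp(alpha_true * t) * rng.lognormvariate(0.0, 0.05) for t in ts]
alpha_hat = estimate_alpha(ts, ns)
```

    Near critical, α → 0 and the statistical noise swamps the tiny slope, which is the difficulty that motivates the modified eigenvalue approach.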

  12. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  13. Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Claudius A. Peleskei

    2015-06-01

    Full Text Available Construction projects usually involve high investments. It is, therefore, a risky venture for companies, as actual costs of construction projects nearly always exceed the planned ones. This is due to the various risks and the large uncertainty existing within this industry. Determination and quantification of risks and their impact on project costs within the construction industry is described as one of the most difficult areas. This paper analyses how the cost of construction projects can be estimated using Monte Carlo Simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution. The research examines the effect of correlation between different project costs on the result of the Monte Carlo Simulation. The paper finds that Monte Carlo Simulation can be a helpful tool for risk managers and can be used for cost estimation of construction projects. The research has shown that cost distributions are positively skewed and cost elements seem to have some interdependent relationships.
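
    How correlation between positively skewed cost elements affects a Monte Carlo cost estimate can be sketched as follows; the two lognormal elements, their medians, spreads and correlation values are hypothetical, not taken from the paper:

```python
import math
import random
import statistics

def simulate_total_cost(n, rho, seed=42):
    """Total of two lognormal cost elements whose underlying normals are
    correlated with coefficient rho (hypothetical medians and spreads)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        labour = 100.0 * math.exp(0.25 * z1)     # positively skewed element
        materials = 200.0 * math.exp(0.25 * z2)  # positively skewed element
        totals.append(labour + materials)
    return totals

independent = simulate_total_cost(20_000, rho=0.0)
correlated = simulate_total_cost(20_000, rho=0.8)
```

    Positive correlation widens the total-cost distribution (risks hit together), and the skewed elements pull the mean above the median, which is why ignoring correlation tends to understate cost risk.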

  14. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been achieved by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material, and the filter estimate was compared with the Monte Carlo realization. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the computational effort for multiplying systems. Other examples and results are discussed
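
    The filtering idea can be sketched with a scalar Kalman filter applied to synthetic noisy per-cycle k-eff estimates; the noise level, cycle count and true value below are invented, and this is not the KENO coupling itself:

```python
import random

def kalman_constant(measurements, r, q=0.0, x0=1.0, p0=1.0):
    """Scalar Kalman filter tracking a constant state observed with
    measurement variance r (process noise q would allow a drifting state)."""
    x, p = x0, p0
    history = []
    for z in measurements:
        p += q                  # predict
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement z
        p *= 1.0 - k
        history.append(x)
    return history

rng = random.Random(11)
k_true = 0.998
cycles = [k_true + rng.gauss(0.0, 0.004) for _ in range(100)]  # noisy per-cycle k-eff
filtered = kalman_constant(cycles, r=0.004 ** 2)
```

    With q = 0 and a diffuse prior this reduces to the running mean of the cycle estimates, which is why the filtered value settles after only a few cycles while individual cycle values keep fluctuating.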

  15. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices

    DEFF Research Database (Denmark)

    Piels, Molly; Zibar, Darko

    2016-01-01

    uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation...

  16. Comparison of the uncertainties calculated for the results of radiochemical determinations using the law of propagation of uncertainty and a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Berne, A.

    2001-01-01

    Quantitative determinations of many radioactive analytes in environmental samples are based on a process in which several independent measurements of different properties are taken. The final results that are calculated using the data have to be evaluated for accuracy and precision. The estimate of the standard deviation, s, also called the combined standard uncertainty (CSU) associated with the result of this combined measurement can be used to evaluate the precision of the result. The CSU can be calculated by applying the law of propagation of uncertainty, which is based on the Taylor series expansion of the equation used to calculate the analytical result. The estimate of s can also be obtained from a Monte Carlo simulation. The data used in this simulation includes the values resulting from the individual measurements, the estimate of the variance of each value, including the type of distribution, and the equation used to calculate the analytical result. A comparison is made between these two methods of estimating the uncertainty of the calculated result. (author)
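
    The two estimates of the combined standard uncertainty can be compared on a simple ratio result r = x/y; the count and mass values below are hypothetical, chosen only to make the comparison concrete:

```python
import math
import random
import statistics

def taylor_csu(x, sx, y, sy):
    """Law-of-propagation CSU for r = x / y with independent inputs
    (first-order Taylor expansion of the measurement equation)."""
    r = x / y
    return abs(r) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)

def monte_carlo_csu(x, sx, y, sy, n=200_000, seed=5):
    """CSU from a Monte Carlo simulation: sample the inputs from their
    assumed normal distributions, recompute r, take the standard deviation."""
    rng = random.Random(seed)
    return statistics.stdev(rng.gauss(x, sx) / rng.gauss(y, sy) for _ in range(n))

# hypothetical inputs: net counts 1500 +/- 30, sample mass 0.25 +/- 0.005
s_taylor = taylor_csu(1500.0, 30.0, 0.25, 0.005)
s_mc = monte_carlo_csu(1500.0, 30.0, 0.25, 0.005)
```

    For small relative uncertainties the two estimates agree closely; for strongly nonlinear measurement equations or large relative uncertainties the Monte Carlo estimate begins to deviate from the first-order Taylor result, which is the point of the comparison.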

  17. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code

  18. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigate neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  19. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
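
    The reliability indices named here can be sketched for a toy generation system; the unit capacities, availabilities and demand profile below are invented for illustration and are not the GB network data:

```python
import random

def reliability_indices(n_trials, units, demand, seed=9):
    """Estimate loss-of-load probability (LOLP) and expected energy not
    served (EENS) per period by sampling unit availability each period."""
    rng = random.Random(seed)
    loss_periods = 0
    unserved = 0.0
    total_periods = n_trials * len(demand)
    for _ in range(n_trials):
        for load in demand:
            # available capacity this period: each unit is up with its availability
            cap = sum(c for c, avail in units if rng.random() < avail)
            if cap < load:
                loss_periods += 1
                unserved += load - cap
    return loss_periods / total_periods, unserved / total_periods

units = [(400, 0.95), (400, 0.95), (300, 0.90)]   # (capacity MW, availability)
demand = [600, 700, 800, 750, 650]                # MW per period
lolp, eens = reliability_indices(5000, units, demand)
```

    For this small system the LOLP can also be checked by enumerating the eight unit states analytically (it is about 0.046), which is a useful sanity check before scaling the same sampling loop up to a full network model.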

  20. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  1. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  2. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  3. Bayesian uncertainty quantification for flows in heterogeneous porous media using reversible jump Markov chain Monte Carlo methods

    KAUST Repository

    Mondal, A.

    2010-03-01

    In this paper, we study uncertainty quantification in inverse problems for flows in heterogeneous porous media. Reversible jump Markov chain Monte Carlo (MCMC) algorithms are used for hierarchical modeling of channelized permeability fields. Within each channel, the permeability is assumed to have a lognormal distribution. Uncertainty quantification in history matching is carried out hierarchically by constructing geologic facies boundaries as well as permeability fields within each facies using dynamic data such as production data. The search with the Metropolis-Hastings algorithm results in a very low acceptance rate, and consequently, the computations are CPU demanding. To speed up the computations, we use a two-stage MCMC that utilizes upscaled models to screen the proposals. In our numerical results, we assume that the channels intersect the wells and the intersection locations are known. Our results show that the proposed algorithms are capable of capturing the channel boundaries and of describing the permeability variations within the channels using dynamic production history at the wells. © 2009 Elsevier Ltd. All rights reserved.
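
    The basic random-walk Metropolis accept/reject step that such MCMC algorithms build on can be sketched for a one-dimensional toy posterior; the Gaussian target standing in for a log-permeability is an illustrative assumption, not the paper's reversible-jump or two-stage machinery:

```python
import math
import random
import statistics

def metropolis(logpost, x0, step, n, seed=13):
    """Random-walk Metropolis sampler; returns the chain and the acceptance rate."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain, accepted = [], 0
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        d = lp_prop - lp
        if d >= 0.0 or rng.random() < math.exp(d):   # Metropolis acceptance test
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x)
    return chain, accepted / n

# toy 1-D posterior for a log-permeability: N(2.0, 0.5^2)
def log_post(x):
    return -0.5 * ((x - 2.0) / 0.5) ** 2

chain, acc_rate = metropolis(log_post, 0.0, 0.5, 50_000)
```

    In the paper's setting each proposal requires an expensive forward flow simulation, which is why a low acceptance rate is so costly and why cheap upscaled models are used to pre-screen proposals in the two-stage variant.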

  4. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  5. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the methods will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
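
    Russian roulette, one of the variance reduction methods listed, can be sketched as follows; the weight thresholds and the particle bank are illustrative assumptions. Low-weight particles are killed probabilistically in a way that preserves the expected total weight:

```python
import random

def russian_roulette(weights, w_min=0.1, w_survive=0.5, seed=17):
    """Play Russian roulette on low-weight particles: a particle with
    weight w < w_min survives with probability w / w_survive and is
    promoted to weight w_survive, so the expected total weight is unchanged
    ((w / w_survive) * w_survive == w)."""
    rng = random.Random(seed)
    survivors = []
    for w in weights:
        if w >= w_min:
            survivors.append(w)
        elif rng.random() < w / w_survive:
            survivors.append(w_survive)
    return survivors

rng = random.Random(1)
bank = [rng.uniform(0.0, 0.2) for _ in range(200_000)]  # illustrative particle bank
after = russian_roulette(bank)
```

    The population shrinks substantially while the total weight is preserved in expectation, so computing time is not wasted following particles that can barely contribute to the tally.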

  6. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the methods will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  7. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  8. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors to treatment plans. The goal of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT, aiming to minimize the aforementioned errors. The strategy was to precalculate the dose matrices of beamlets in a Monte Carlo based method, followed by optimization of the beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Because multiple beamlets of individual ports were calculated concurrently, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided simple and accurate modeling of treatment beams. The dose matrix calculations were validated through measurements in phantoms; agreement was better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases, and the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT was demonstrated. Systematic errors in treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations. Discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated

  9. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  10. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel on tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
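The saturation described here can be illustrated with a toy cost model (the constants are invented for illustration, not measurements from the paper): each cycle's tracking work divides among p processors, while the rendezvous cost grows with p, so speedup peaks and then falls.

```python
# Illustrative cost model: per-cycle time = divisible tracking work
# plus a synchronization (rendezvous) cost that grows with the number
# of processors p. Speedup therefore saturates and eventually declines.

def cycle_time(p, work=100.0, sync_per_proc=0.05):
    return work / p + sync_per_proc * p   # tracking + rendezvous overhead

def speedup(p):
    return cycle_time(1) / cycle_time(p)

# with these constants the optimum is near p = sqrt(work / sync) ~ 45;
# beyond it, adding processors makes each cycle slower, not faster
curve = {p: speedup(p) for p in (1, 10, 100, 1000)}
```

The model is the usual Amdahl-style argument specialized to a per-cycle synchronization term; the paper's measured bottlenecks are of this qualitative shape.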

  11. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel on tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  12. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We varied our modelled borehole to be centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
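The Monte Carlo idea here can be reduced to a minimal sketch: sample the anomaly's density contrast from a preset distribution, forward-model a gravity gradient component at an observation point, and report the mean response with its uncertainty. The point-mass approximation and all numbers below are illustrative inventions, not the Fatiando a Terra prism models used in the paper:

```python
# Toy Monte Carlo uncertainty propagation for a gravity gradient (Gzz)
# of a buried point-mass anomaly; density contrast drawn from a preset
# normal distribution, Gzz obtained by finite-differencing gz.
import math
import random

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
volume = 500.0 ** 3     # anomalous body approximated as a point mass

def gz(mass, dx, dz):
    """Vertical gravity of a point mass at horizontal offset dx, depth dz."""
    r = math.hypot(dx, dz)
    return G * mass * dz / r ** 3

def gzz(mass, dx, dz, h=1.0):
    """Vertical gradient of gz by central finite difference."""
    return (gz(mass, dx, dz + h) - gz(mass, dx, dz - h)) / (2 * h)

random.seed(1)
samples = []
for _ in range(5000):
    density_contrast = random.gauss(300.0, 30.0)    # kg/m^3, preset distribution
    samples.append(gzz(density_contrast * volume, dx=100.0, dz=1000.0))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
rel_uncertainty = math.sqrt(var) / abs(mean)   # ~10%: Gzz is linear in density here
```

Because Gzz is linear in the density contrast for a fixed geometry, the relative uncertainty simply mirrors the input distribution; varying the geometry as well, as the paper does, is what makes the full Monte Carlo treatment necessary.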

  13. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer science, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; and adaptive and interacting Markov chain Monte Carlo models.

  14. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  15. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

    This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on the one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits lower complexity than the existing Monte Carlo solutions.
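Sequential importance sampling for phase noise can be sketched with a plain bootstrap particle filter tracking a Wiener (random-walk) phase from noisy observations. This is a generic illustration with invented parameters and known pilot symbols, not the paper's Rao-Blackwellized estimator:

```python
# Bootstrap particle filter for a random-walk phase on an AWGN channel.
# Observations: y_k = exp(j*theta_k) + n_k with known unit pilot symbols.
import cmath
import math
import random

random.seed(0)
N, P = 200, 300                      # symbols, particles
sigma_phase, sigma_noise = 0.02, 0.1

# simulate the true Wiener phase and the noisy observations
theta, truth, obs = 0.0, [], []
for _ in range(N):
    theta += random.gauss(0.0, sigma_phase)
    truth.append(theta)
    obs.append(cmath.exp(1j * theta)
               + complex(random.gauss(0, sigma_noise), random.gauss(0, sigma_noise)))

particles = [0.0] * P
weights = [1.0 / P] * P
estimates = []
for y in obs:
    # propagate particles through the phase random walk (prior proposal)
    particles = [p + random.gauss(0.0, sigma_phase) for p in particles]
    # reweight by the Gaussian likelihood of the observation
    weights = [w * math.exp(-abs(y - cmath.exp(1j * p)) ** 2 / (2 * sigma_noise ** 2))
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # posterior-mean phase estimate (particles stay clustered, plain mean is fine)
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # multinomial resampling to fight weight degeneracy
    particles = random.choices(particles, weights=weights, k=P)
    weights = [1.0 / P] * P

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / N)
```

Rao-Blackwellization, as discussed in the paper, would marginalize part of this state analytically and reduce the variance for the same particle count.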

  16. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
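As a generic illustration of estimating a diffusion coefficient by Monte Carlo (a simple isotropic random walk, not the MVP tally-based cell homogenization described in the abstract): in one dimension the diffusion coefficient relates to the mean squared displacement via ⟨x²⟩ = 2Dt.

```python
# Estimate a 1-D diffusion coefficient from the mean squared
# displacement of random walkers; with unit steps the analytic
# result is D = step^2 / 2 = 0.5 (time measured in steps).
import random

random.seed(3)
step = 1.0
n_steps, n_hist = 400, 4000

msd = 0.0
for _ in range(n_hist):
    x = 0.0
    for _ in range(n_steps):
        x += random.choice((-step, step))
    msd += x * x
msd /= n_hist

D = msd / (2 * n_steps)   # <x^2> = 2 D t
```

The tally-based methods in the paper extract the same kind of quantity from flux and partial-current tallies of a transport simulation rather than from particle displacements.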

  17. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    Science.gov (United States)

    Whelan, G.

    2002-05-01

    Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases so that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, with models consuming information from upstream models and producing information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform with an open-architecture, object-oriented design that interacts with environmental databases; helps the user construct a real-world-based Conceptual Site Model; allows the user to choose the most appropriate models to meet simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and presents graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system which contains models developed by others. This paper will present the use of FRAMES to evaluate potential human health exposures, using real site data and realistic assumptions, from sources through the vadose and saturated zones to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis. A predictive analysis is one where models are calibrated to monitored site data prior to the assessment, and a comparative analysis is one where models are not calibrated but

  18. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver); in particular, we present an overlap function and take the phase function of fog into account. The results for the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. Excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If the geometry of the optical device is not taken into account, the model-estimated backscattered luminance differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and the experimental measurements is smaller than the experimental uncertainty.

  19. Monte Carlo simulations on a 9-node PC cluster

    International Nuclear Information System (INIS)

    Gouriou, J.

    2001-01-01

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC cluster running the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, and the last two were adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem being simulated. (orig.)
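The square-root scaling quoted above is easy to check on a generic Monte Carlo integral (this toy integrand is illustrative, not one of the LNHB applications): quadrupling the number of histories, and hence the computing time, halves the standard error.

```python
# Demonstrate that the standard error of a Monte Carlo mean scales as
# 1/sqrt(N): estimate the integral of x^2 on [0, 1] (exact value 1/3)
# with N and 4N samples and compare the estimated standard errors.
import math
import random

def mc_mean_stderr(n, seed):
    random.seed(seed)
    samples = [random.random() ** 2 for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)          # sample mean and its standard error

m1, e1 = mc_mean_stderr(10_000, seed=7)
m2, e2 = mc_mean_stderr(40_000, seed=7)
# e1 / e2 should be close to 2: four times the work halves the uncertainty
```

This is exactly why parallelization pays off linearly while brute-force accuracy gains do not: halving the uncertainty costs four times the histories, but those histories can be spread across a cluster.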

  20. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this language; those programs are compiled into Fortran output, which is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be loaded directly into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.

  1. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling the important configurations preferentially. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
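The Metropolis algorithm mentioned here fits in a few lines. A minimal sketch, sampling a standard normal target with a symmetric random-walk proposal (the chapter applies the same structure to many-particle systems and the traveling salesman problem):

```python
# Metropolis sampling of an unnormalized target density: propose a
# symmetric random-walk move, accept with probability
# min(1, target(proposal) / target(current)), and collect the chain.
import math
import random

random.seed(11)

def target(x):
    return math.exp(-0.5 * x * x)   # unnormalized standard normal

x, chain = 0.0, []
for _ in range(200_000):
    prop = x + random.uniform(-1.0, 1.0)        # symmetric proposal
    if random.random() < target(prop) / target(x):
        x = prop                                 # Metropolis accept
    chain.append(x)                              # rejected moves repeat x

burn = chain[10_000:]                            # discard burn-in
mean = sum(burn) / len(burn)
var = sum((c - mean) ** 2 for c in burn) / len(burn)
# mean ~ 0 and var ~ 1, matching the standard normal target
```

Note that the normalizing constant of the target never appears: only ratios of densities are needed, which is what makes Metropolis practical for thermodynamic averages.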

  2. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  3. MONTE CARLO SIMULATION AND VALUATION: A STOCHASTIC APPROACH SIMULAÇÃO DE MONTE CARLO E VALUATION: UMA ABORDAGEM ESTOCÁSTICA

    Directory of Open Access Journals (Sweden)

    Marcos Roberto Gois de Oliveira

    2013-01-01

    Among the various business valuation methodologies, discounted cash flow is still the most widely adopted today, in both academic and professional settings. Although many authors regard this methodology as the most adequate one for business valuation, its projective nature implies an uncertainty issue present in all financial models based on future expectations: the risk that the projected assumptions do not materialize. One alternative for measuring the risk inherent in discounted cash flow valuation is to add Monte Carlo simulation to the deterministic business valuation model in order to create a stochastic model, which can perform a statistical analysis of risk. The objective of this work was to evaluate the pertinence of adopting Monte Carlo simulation to measure the uncertainty inherent in business valuation using discounted cash flow, identifying whether Monte Carlo simulation enhances the accuracy of this asset pricing methodology. The results of this work confirm the operational efficacy of discounted cash flow business valuation using Monte Carlo simulation, showing that adopting this methodology allows a relevant enhancement of the results in comparison with those obtained using the deterministic business valuation model.
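The stochastic DCF idea can be sketched directly: draw the growth and discount rates from preset distributions, compute the discounted-cash-flow value for each draw, and read off a mean and a risk band instead of a single point value. All cash flows, rates, and distributions below are invented for illustration, not taken from the paper:

```python
# Monte Carlo DCF: the deterministic valuation model is wrapped in a
# sampling loop over uncertain inputs, turning one point estimate into
# a distribution of firm values.
import random
import statistics

random.seed(42)

def dcf_value(cf0, growth, wacc, years=5, terminal_growth=0.01):
    """Deterministic DCF: explicit forecast years plus a Gordon terminal value."""
    value, cf = 0.0, cf0
    for t in range(1, years + 1):
        cf *= 1 + growth
        value += cf / (1 + wacc) ** t
    terminal = cf * (1 + terminal_growth) / (wacc - terminal_growth)
    return value + terminal / (1 + wacc) ** years

values = [dcf_value(100.0,
                    growth=random.gauss(0.04, 0.02),    # uncertain growth rate
                    wacc=random.gauss(0.10, 0.01))      # uncertain discount rate
          for _ in range(20_000)]

mean = statistics.fmean(values)
sv = sorted(values)
p5, p95 = sv[1000], sv[19000]    # a 90% band around the valuation
```

The deterministic model corresponds to a single call of `dcf_value` at the mean inputs; the Monte Carlo wrapper is what adds the statistical risk analysis the paper argues for.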

  4. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are ''conventional'' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency

  5. Igo - A Monte Carlo Code For Radiotherapy Planning

    International Nuclear Information System (INIS)

    Goldstein, M.; Regev, D.

    1999-01-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to correctly calculate the 3-D dose delivered. Monte Carlo transport methods (especially the adjoint Monte Carlo method) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IGO is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IGO transport code, the PIGO interface and some preliminary results.

  6. Odd-flavor Simulations by the Hybrid Monte Carlo

    CERN Document Server

    Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe

    2001-01-01

    The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of QCD flavors. Simulations of odd-flavor QCD, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We compare the hybrid Monte Carlo algorithm with the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step size.
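The hybrid (Hamiltonian) Monte Carlo structure underlying this work, molecular-dynamics evolution by a leapfrog integrator followed by a Metropolis accept/reject on the energy error, can be shown on a one-dimensional Gaussian toy target. This sketch contains none of the fermion-matrix machinery of lattice QCD:

```python
# Bare-bones hybrid Monte Carlo: refresh the momentum, integrate
# Hamilton's equations with leapfrog, then accept/reject on the change
# in the Hamiltonian (which removes the integrator's step-size bias).
import math
import random

random.seed(5)

def U(q):  return 0.5 * q * q        # potential = -log of the target density
def dU(q): return q

def leapfrog(q, p, eps, steps):
    p -= 0.5 * eps * dU(q)           # initial half kick
    for _ in range(steps - 1):
        q += eps * p                 # drift
        p -= eps * dU(q)             # full kick
    q += eps * p
    p -= 0.5 * eps * dU(q)           # final half kick
    return q, p

q, chain = 0.0, []
for _ in range(20_000):
    p = random.gauss(0.0, 1.0)                    # momentum refresh
    h0 = U(q) + 0.5 * p * p
    q_new, p_new = leapfrog(q, p, eps=0.3, steps=10)
    h1 = U(q_new) + 0.5 * p_new * p_new
    if random.random() < math.exp(h0 - h1):       # Metropolis accept on dH
        q = q_new
    chain.append(q)

var = sum(c * c for c in chain) / len(chain)      # should approach 1
```

The R-algorithm mentioned in the abstract skips this accept/reject step, which is precisely why it carries residual step-size errors that the hybrid Monte Carlo test eliminates.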

  7. An experimental and Monte Carlo investigation of the energy dependence of alanine/EPR dosimetry: I. Clinical x-ray beams

    International Nuclear Information System (INIS)

    Zeng, G G; McEwen, M R; Rogers, D W O; Klassen, N V

    2004-01-01

    The energy dependence of alanine/EPR dosimetry, in terms of absorbed dose to water, for clinical 6, 10 and 25 MV x-rays and 60Co γ-rays was investigated by measurements and Monte Carlo (MC) calculations. The dose rates were traceable to the NRC primary standard for absorbed dose, a sealed water calorimeter. The electron paramagnetic resonance (EPR) spectra of irradiated pellets were measured using a Bruker EMX 081 EPR spectrometer. The DOSRZnrc Monte Carlo code of the EGSnrc system was used to simulate the experimental conditions, with input spectra of x-rays and γ-rays calculated with the BEAM code. Within the experimental uncertainty of 0.5%, the alanine EPR response to absorbed dose to water for x-rays did not depend on beam quality from 6 MV to 25 MV, but on average it was about 0.6% lower than the response to 60Co gamma rays. Combining experimental data with Monte Carlo calculations, it is found that the alanine/EPR response per unit absorbed dose to alanine is the same for clinical x-rays and 60Co gamma rays within the uncertainty of 0.6%. Monte Carlo simulations showed that neither the presence of the PMMA holder nor varying the dosimeter thickness between 1 mm and 5 mm has a significant effect on the energy dependence of alanine/EPR dosimetry, within the calculation uncertainty of 0.3%.

  8. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant to applications in correlated systems. It gives a clear overview of variational wave functions and features a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from its foundations to a detailed description of its algorithms. Further topics discussed include optimization techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes; the book concludes with recent developments in continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  9. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods.

  10. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

    We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until they are either absorbed or leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first-collision source. Another set of particles is then created from this first-collision source and tracked to determine a second-collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to send more particles there. It has the same kinds of applications: problems where streaming is dominant rather than collision-dominated problems.
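The collision-by-collision iteration described above amounts to a deterministic Neumann series. A drastically simplified scalar version (an infinite homogeneous medium with scattering ratio c, no spatial tracking, all structure invented for illustration) shows the weighting idea:

```python
# Deterministic collision-generation sketch: instead of sampling
# survival with random numbers, carry each collision generation as a
# weight multiplied by the scattering ratio c. For an infinite medium
# the collision density series sums analytically to 1 / (1 - c).
def collided_generations(c, tol=1e-10):
    total, w = 0.0, 1.0          # w = weight of the current collision generation
    while w > tol:
        total += w               # tally this generation's collision density
        w *= c                   # scattered weight feeding the next generation
    return total
```

Truncating the series at a weight tolerance is the analogue of the controllable approximation the abstract mentions: the error is bounded by the discarded tail of the geometric series.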

  11. Uncertainty evaluation of the kerma in the air, related to the active volume in the ionization chamber of concentric cylinders, by Monte Carlo simulation; Avaliacao de incerteza no kerma no ar, em relacao ao volume ativo da camara de ionizacao de cilindros concentricos, por simulacao de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lo Bianco, A.S.; Oliveira, H.P.S.; Peixoto, J.G.P., E-mail: abianco@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. Nacional de Metrologia das Radiacoes Ionizantes (LNMRI)

    2009-07-01

    To establish the primary standard of the quantity air kerma for x-rays between 10 and 50 keV, the National Metrology Laboratory of Ionizing Radiations (LNMRI) must evaluate all the measurement uncertainties associated with the Victoreen chamber. Accordingly, the uncertainty in air kerma resulting from inaccuracy in the active volume of the chamber was evaluated using Monte Carlo calculation as a tool, through the PENELOPE software.

  12. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    Science.gov (United States)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less on 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  13. On Monte Carlo estimation of radiation damage in light water reactor systems

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2010-01-01

There has been a growing need in recent years for the development of methodologies to calculate damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss and highlight the main issues associated with the calculation of radiation damage factors utilizing the Monte Carlo method. Among these issues are: particle tracking and tallying in complex geometries, dpa calculation methodology, coupled fuel depletion, and uncertainty propagation. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and burnup, are assessed for radiation damage calculations, and its capabilities are demonstrated and compared to those of the MCNP code for dpa calculations of a typical LWR configuration involving the core vessel and the downcomer. (author)

  14. Systematic uncertainties in the Monte Carlo calculation of ion chamber replacement correction factors

    Energy Technology Data Exchange (ETDEWEB)

Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O. [Ottawa Carleton Institute of Physics, Carleton University, Campus Ottawa, Ottawa, Ontario K1S 5B6 (Canada)

    2009-05-15

In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl, or p_cav·p_dis) for ion chambers by Monte Carlo calculation. By "direct" we meant that a stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a ⁶⁰Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values upon the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes a cavity size in the Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty on the calculated P_repl values. For phantom materials of effective atomic number similar to that of air, such as water and graphite, this systematic uncertainty is at most 0.2% for most commonly used chambers in either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need to do a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.

  15. Dosimetric effect of the statistical noise of the CT image in Monte Carlo simulation of radiotherapy treatments; Efecto dosimetrico del ruido estadistico de la imagen TC en la simulacion Monte Carlo de tratamientos de radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Laliena Bielsa, V.; Jimenez Albericio, F. J.; Gandia Martinez, A.; Font Gomez, J. A.; Mengual Gil, M. A.; Andres Redondo, M. M.

    2013-07-01

This source of uncertainty is not exclusive to the Monte Carlo method; it is present in any algorithm that applies heterogeneity corrections. Although the uncertainty described above is expected to be small, the objective of this work is to quantify it as a function of the CT study. (Author)

  16. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    Science.gov (United States)

    Farrance, Ian; Frenkel, Robert

    2014-01-01

The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship.

  17. Uncertainty in measurement: a review of Monte Carlo simulation using Microsoft Excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    Science.gov (United States)

    Farrance, Ian; Frenkel, Robert

    2014-02-01

The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship.
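The spreadsheet-style MCS procedure described in these two records can be sketched outside Excel as well. Below is a minimal Python version; the measurand y = a·x + b and all numerical values are hypothetical, chosen only to illustrate propagation through a functional relationship that includes empirically derived "constants":

```python
import random
import statistics

def mcs_uncertainty(func, means, sds, n=100_000, seed=1):
    """Propagate input uncertainties through an arbitrary functional
    relationship by Monte Carlo simulation, exactly as one would do with
    columns of random numbers in a spreadsheet."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # Draw each input from its prescribed (here: Gaussian) distribution.
        out.append(func(*[rng.gauss(m, s) for m, s in zip(means, sds)]))
    return statistics.mean(out), statistics.stdev(out)

# Hypothetical measurand y = a*x + b, with uncertainty both in the measured
# input x and in the empirically derived "constants" a and b.
mean_y, u_y = mcs_uncertainty(lambda x, a, b: a * x + b,
                              means=[10.0, 2.0, 0.5],   # x, a, b
                              sds=[0.1, 0.05, 0.02])
print(round(mean_y, 2), round(u_y, 3))
```

For this linear example the simulated standard uncertainty can be checked against the GUM quadrature formula, √((a·u_x)² + (x·u_a)² + u_b²); for a nonlinear relationship the simulation needs no such formula, which is the point of the MCS approach.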

  18. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
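The bias Gelbard describes enters through the normalization step: the eigenvalue is estimated as a ratio of batch means, and the expectation of a ratio is not the ratio of the expectations. A toy Python sketch (not actual neutronics; the unit-mean uniform samples are stand-ins for fission production and source tallies) shows the bias shrinking as the batch size grows:

```python
import random
import statistics

rng = random.Random(0)

def toy_keff(batch_size):
    """Toy stand-in for one eigenvalue batch: k is estimated as a ratio of
    two batch means (fission production normalized by the estimated source),
    here with independent unit-mean samples so the true ratio is exactly 1."""
    prod = [rng.uniform(0.5, 1.5) for _ in range(batch_size)]
    src = [rng.uniform(0.5, 1.5) for _ in range(batch_size)]
    return statistics.mean(prod) / statistics.mean(src)

# Average many independent replicas: the small-batch estimator is biased
# high even though every tally entering it is unbiased.
bias_small = statistics.mean(toy_keff(5) for _ in range(20_000)) - 1.0
bias_large = statistics.mean(toy_keff(500) for _ in range(20_000)) - 1.0
print(round(bias_small, 3), round(bias_large, 3))
```

The bias of such a ratio estimator decays roughly as 1/(batch size), which is why eigenvalue calculations with very small batches can be systematically off even after many replicas.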

  19. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  20. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  1. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  2. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  3. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  4. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    Science.gov (United States)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
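A minimal sketch of the Hybrid/Hamiltonian Monte Carlo update mentioned in this record, applied to a one-dimensional standard-normal target rather than a PDF fit; the leapfrog step size and trajectory length are arbitrary illustrative choices:

```python
import math
import random

rng = random.Random(42)

def hmc(neg_logp, grad, q0, n_samples=5_000, step=0.2, n_leap=10):
    """Minimal 1-D Hybrid/Hamiltonian Monte Carlo: leapfrog integration of
    the Hamiltonian dynamics followed by a Metropolis accept/reject step."""
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                      # fresh momentum
        q_new, p_new = q, p - 0.5 * step * grad(q)   # initial half kick
        for i in range(n_leap):                      # leapfrog trajectory
            q_new += step * p_new
            if i < n_leap - 1:
                p_new -= step * grad(q_new)
        p_new -= 0.5 * step * grad(q_new)            # final half kick
        # Accept with probability exp(H_old - H_new).
        h_old = neg_logp(q) + 0.5 * p * p
        h_new = neg_logp(q_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            q = q_new
        samples.append(q)
    return samples

# Standard-normal target: -log p(q) = q**2 / 2 up to a constant.
draws = hmc(lambda q: 0.5 * q * q, lambda q: q, q0=3.0)
print(round(sum(draws) / len(draws), 2))
```

The appeal for high-dimensional problems such as global PDF fits is that the gradient-guided trajectories keep the acceptance rate high where a random-walk proposal would collapse.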

  5. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

This is a short paper on the prospects for general-purpose Monte Carlo software. The content covers the cluster sampling method, the zero-variance technique, self-improving methods, and vectorized Monte Carlo methods

  6. Uncertainty versus variability in Monte Carlo simulations of human exposure through food pathways

    International Nuclear Information System (INIS)

    McKone, T.E.

    1994-01-01

An important issue in both the risk characterization and subsequent risk management of contaminated soil is how precisely we can characterize the distribution among individuals of potential doses associated with chemical contaminants in soil and whether this level of precision favors the use of population distributions of exposure over the use of single scenario representations. For lipophilic contaminants, such as dioxins, furans, polychlorinated biphenyls, pesticides, and for metals such as lead and mercury, exposures through food have been demonstrated to be dominant contributors to total dose within non-occupationally exposed populations. However, overall uncertainties in estimating potential doses through food chains are much larger than uncertainties associated with other exposure pathways. A general model is described here for estimating the ratio of potential dose to contaminant concentration in soil for homegrown foods contaminated by lipophilic, nonionic organic chemicals. This model includes parameters describing homegrown food consumption rates, exposure duration, biotransfer factors, and partition factors. For the parameters needed in this model, the mean and variance are often the only moments of the parameter distribution available. Parameters are divided into three categories, uncertain parameters, variable parameters, and mixed uncertain/variable parameters. Using soils contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a stepwise Monte Carlo analysis is used to develop a histogram that apportions variance in the outcome (ratio of potential dose by food pathways to soil concentration) to variance in each of the three input categories. The results represent potential doses in households consuming homegrown foods
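The stepwise apportionment of output variance can be sketched as follows. The three factor categories and their ln-space spreads below are hypothetical stand-ins for the food-pathway parameters, not values from the study; working on the log scale makes the category variances add exactly, so the simulated shares can be checked:

```python
import random
import statistics

rng = random.Random(7)

# Log-scale spreads of three hypothetical input categories contributing
# multiplicatively (lognormal factors) to the dose-to-concentration ratio.
params = {
    "uncertain": 0.4,   # e.g. biotransfer factors (ln-space sigma)
    "variable": 0.6,    # e.g. homegrown food consumption rates
    "mixed": 0.3,       # e.g. exposure duration
}

def output_variance(active, n=50_000):
    """Variance of the log model output when only the categories in
    `active` are sampled and the rest are held at their medians."""
    out = []
    for _ in range(n):
        total = 0.0
        for name, sigma in params.items():
            if name in active:
                total += rng.gauss(0.0, sigma)
        out.append(total)
    return statistics.variance(out)

# Stepwise analysis: sample one category at a time and compare its
# contribution with the all-categories-sampled variance.
total_var = output_variance(set(params))
shares = {name: output_variance({name}) / total_var for name in params}
print({name: round(share, 2) for name, share in shares.items()})
```

The resulting histogram of shares is the log-scale analogue of the study's apportionment of outcome variance to the uncertain, variable, and mixed input categories.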

  7. Monte Carlo tree search strategies (Strategije drevesnega preiskovanja Monte Carlo)

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

Following the breakthrough at the game of Go, Monte Carlo tree search (MCTS) methods have triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...

  8. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  9. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

Analysis of flood trends is vital since flooding threatens human life and livelihoods in financial, environmental and security terms. Annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution obtained via Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space encountered in the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which in turn yields better predictions of maximum river flow in Sabah.
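A compact illustration of random-walk Metropolis-Hastings applied to extreme-value data; for simplicity this sketch fixes the GEV shape at 0 (the Gumbel case) with unit scale and infers only the location parameter from synthetic "annual maxima" (all values hypothetical, not the Sabah data):

```python
import math
import random

rng = random.Random(3)

# Synthetic annual maxima from a Gumbel distribution (GEV with shape 0),
# location 30, unit scale, via inverse-CDF sampling.
TRUE_LOC = 30.0
data = [TRUE_LOC - math.log(-math.log(rng.random())) for _ in range(200)]

def log_post(loc):
    """Gumbel log-likelihood (unit scale) with a flat prior on location."""
    return sum(-(x - loc) - math.exp(-(x - loc)) for x in data)

# Random-walk Metropolis-Hastings on the location parameter.
loc = 25.0                      # deliberately poor starting point
lp = log_post(loc)
chain = []
for _ in range(10_000):
    prop = loc + rng.gauss(0.0, 0.2)
    lp_prop = log_post(prop)
    if math.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
        loc, lp = prop, lp_prop
    chain.append(loc)

posterior_mean = sum(chain[2_000:]) / len(chain[2_000:])
print(round(posterior_mean, 1))
```

Discarding the first 2 000 draws as burn-in, the posterior mean recovers the true location; a full GEV analysis would add the scale and shape parameters to the state vector.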

  10. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy κ_eff sensitivity to nuclear data demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. Multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
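The stochastic sampling idea behind NUSS can be illustrated with a deliberately tiny model: a one-group infinite-medium multiplication factor whose "nuclear data" are perturbed and the model rerun many times. The cross sections and relative uncertainties below are invented for illustration, not NUSS or evaluated data:

```python
import random
import statistics

rng = random.Random(11)

# One-group infinite-medium model with invented nominal data and assumed
# relative standard uncertainties (NOT actual evaluated nuclear data).
NU, SIG_F, SIG_A = 2.43, 0.050, 0.101
REL_U = {"nu": 0.005, "sig_f": 0.010, "sig_a": 0.010}

def k_inf(nu, sf, sa):
    """Multiplication factor of the one-group infinite-medium model."""
    return nu * sf / sa

# Stochastic sampling: perturb all data simultaneously according to their
# presumed distributions, rerun the model, and read off the output spread.
ks = []
for _ in range(100_000):
    ks.append(k_inf(NU * (1.0 + rng.gauss(0.0, REL_U["nu"])),
                    SIG_F * (1.0 + rng.gauss(0.0, REL_U["sig_f"])),
                    SIG_A * (1.0 + rng.gauss(0.0, REL_U["sig_a"]))))

k_mean = statistics.mean(ks)
rel_sd = statistics.stdev(ks) / k_mean
print(round(k_mean, 3), round(100.0 * rel_sd, 2), "%")
```

For this near-linear toy model the output relative spread approaches the quadrature sum of the input relative uncertainties, about √(0.5² + 1.0² + 1.0²) ≈ 1.5%; with a real transport code each "model call" is a full MCNPX run with a sampled ACE library.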

  11. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy κ_eff sensitivity to nuclear data demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. Multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  12. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. An electromagnetic forward model based on the full solution of Maxwell's equations was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the non-saline soil scenario, the layer-thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
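The adaptive ingredient of such an MCMC scheme can be sketched in isolation: a random-walk Metropolis sampler that tunes its proposal scale toward a target acceptance rate with a vanishing adaptation step. The standard-normal target below is a stand-in for the EMI posterior, not the forward model of the record:

```python
import math
import random

rng = random.Random(5)

def log_target(x):
    # Stand-in 1-D posterior (standard normal); the record's actual target
    # is the EMI inverse problem, which is not reproduced here.
    return -0.5 * x * x

x, scale = 0.0, 8.0            # deliberately poor initial proposal scale
lp = log_target(x)
chain = []
for i in range(1, 20_001):
    prop = x + rng.gauss(0.0, scale)
    lp_prop = log_target(prop)
    accepted = math.log(rng.random()) < lp_prop - lp
    if accepted:
        x, lp = prop, lp_prop
    # Adapt the proposal scale toward the ~44% acceptance rate that is
    # near-optimal for 1-D random-walk Metropolis; the adaptation step
    # decays with i so the adaptation dies out and the chain stays valid.
    scale *= math.exp(((1.0 if accepted else 0.0) - 0.44) / i ** 0.6)
    chain.append(x)

posterior_mean = sum(chain[10_000:]) / 10_000
print(round(scale, 1), round(posterior_mean, 2))
```

Starting from a hopeless proposal scale, the sampler tunes itself into an efficient regime automatically, which is the practical advantage of adaptive MCMC over hand-tuned proposals in nonlinear inversion.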

  13. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
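The collision-by-collision tracking described above can be condensed into a toy analog simulation: one-group neutrons in a bare slab, each history followed until absorption or leakage and scored with a collision estimator for k_eff. The cross sections and geometry are invented for illustration, not a real configuration:

```python
import math
import random

rng = random.Random(123)

# One-group toy slab problem (made-up macroscopic cross sections, 1/cm).
H = 5.0                                   # slab spans [-H, H] cm
SIG_T, SIG_A, SIG_F, NU = 0.30, 0.12, 0.05, 2.5

def history():
    """Trace one neutron from the midplane, collision by collision, until
    absorption or leakage; return its collision-estimator score for k_eff."""
    x, score = 0.0, 0.0
    while True:
        mu = rng.uniform(-1.0, 1.0)                  # isotropic direction cosine
        x += -math.log(rng.random()) / SIG_T * mu    # flight to next collision
        if abs(x) > H:
            return score                             # leaked out of the slab
        score += NU * SIG_F / SIG_T                  # expected fission yield here
        if rng.random() < SIG_A / SIG_T:
            return score                             # absorbed: history ends
        # otherwise scattered: continue with a fresh isotropic direction

n = 20_000
k_eff = sum(history() for _ in range(n)) / n
k_inf = NU * SIG_F / SIG_A                           # no-leakage limit
print(round(k_eff, 3), "vs k_inf", round(k_inf, 3))
```

The batch average falls below the infinite-medium value νΣ_f/Σ_a because some histories leak; production codes add the fission-source iteration, statistical weights, and batch-wise error estimation discussed in the record.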

  14. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by the one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
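A toy multilevel Monte Carlo sketch for a nested expectation, in the spirit of the record's inner/outer integral: the target here is E[(inner sample mean)²] for a trivial Gaussian model, not the expected information gain. Level l uses 2^l inner samples, the coarse term reuses half of them, and the cheap coarse levels receive most of the outer samples:

```python
import random
import statistics

rng = random.Random(9)

def level_diff(l, n_outer):
    """Mean of P_l - P_(l-1) over n_outer outer samples, where P_l squares
    an inner sample mean built from 2**l inner draws and the coarse term
    reuses the first half of the same draws (P_(-1) is taken as 0)."""
    diffs = []
    for _ in range(n_outer):
        x = rng.gauss(0.0, 1.0)                           # outer variable
        ys = [x + rng.gauss(0.0, 1.0) for _ in range(2 ** l)]
        fine = (sum(ys) / len(ys)) ** 2
        if l == 0:
            diffs.append(fine)
        else:
            half = ys[: 2 ** (l - 1)]
            diffs.append(fine - (sum(half) / len(half)) ** 2)
    return statistics.mean(diffs)

# Telescoping sum E[P_4] = E[P_0] + corrections: most outer samples go to
# the cheap coarse levels, few to the expensive fine ones.
levels = [(0, 200_000), (1, 100_000), (2, 50_000), (3, 25_000), (4, 12_000)]
mlmc = sum(level_diff(l, n) for l, n in levels)
print(round(mlmc, 2))
```

For this toy model the exact value of E[P_4] is E[X²] + 2⁻⁴ = 1.0625, so the estimate should land close to that; a direct double-loop estimator of the same quantity would spend the full 2⁴ inner draws on every outer sample, which is the cost MLMC avoids.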

  15. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by the one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.

  16. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification

    Science.gov (United States)

    Hanson, John M.; Beard, Bernard B.

    2010-01-01

    This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
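
    In the simplest zero-failure case, the order-statistics argument sketched above reduces to a one-line sample-size formula: if all N Monte Carlo samples meet the requirement, a compliance probability of at least p is demonstrated at confidence C provided p^N <= 1 - C. A minimal sketch (the function name is mine, not from the paper):

```python
import math

def samples_for_max_order_statistic(p_req, confidence):
    """Smallest N such that observing zero failures in N Monte Carlo samples
    demonstrates compliance probability >= p_req at the given confidence,
    i.e. the smallest N with  p_req**N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_req))
```

For example, demonstrating the classical 99.73% requirement at 90% confidence takes 852 consecutive successful samples; allowing a fixed number of failures generalizes this to binomial tail probabilities on higher order statistics.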

  17. Review and comparison of effective delayed neutron fraction calculation methods with Monte Carlo codes

    International Nuclear Information System (INIS)

    Bécares, V.; Pérez-Martín, S.; Vázquez-Antolín, M.; Villamarín, D.; Martín-Fuertes, F.; González-Romero, E.M.; Merino, I.

    2014-01-01

    Highlights: • Review of several Monte Carlo effective delayed neutron fraction calculation methods. • These methods have been implemented with the Monte Carlo code MCNPX. • They have been benchmarked against some critical and subcritical systems. • Several nuclear data libraries have been used. - Abstract: The calculation of the effective delayed neutron fraction, β eff , with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for β eff without the need of explicitly determining the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of β eff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of the uncertainty in nuclear data on the calculated value of β eff.
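
    For concreteness, one widely used variant of this family of techniques (often called the prompt-k method; whether it coincides with one of the two variants analyzed in the paper is not stated here) estimates β eff from two ordinary criticality runs, one transporting all neutrons and one with delayed neutron production switched off:

```latex
\beta_{\mathrm{eff}} \approx 1 - \frac{k_p}{k}
```

where k is the eigenvalue including both prompt and delayed neutrons and k_p is the prompt-only eigenvalue, so no explicit adjoint flux calculation is required.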

  18. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  19. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)

  20. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming

    2009-01-01

    in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method

  1. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
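
    The local Markov-chain Monte Carlo algorithm of Metropolis et al. referred to above is simple enough to sketch directly: propose a small random displacement of one disk and accept it iff no overlap results (every non-overlapping hard-disk configuration has equal weight, so no Boltzmann factor appears). A minimal sketch in a periodic unit box; the parameter values are illustrative only:

```python
import numpy as np

def metropolis_hard_disks(n_steps=20000, n=16, sigma=0.1, delta=0.05, seed=1):
    """Local Metropolis moves for n equal hard disks of radius sigma in a
    periodic unit box.  A move is accepted iff it creates no overlap;
    otherwise the old configuration is kept."""
    rng = np.random.default_rng(seed)
    # start from a safe square lattice (spacing 1/side > disk diameter)
    side = int(np.ceil(np.sqrt(n)))
    pos = np.array([((i % side + 0.5) / side, (i // side + 0.5) / side)
                    for i in range(n)])

    def overlaps(k, trial):
        d = np.abs(pos - trial)          # periodic minimum-image distances
        d = np.minimum(d, 1.0 - d)
        r2 = (d ** 2).sum(axis=1)
        r2[k] = np.inf                   # ignore the moved disk itself
        return np.any(r2 < (2 * sigma) ** 2)

    for _ in range(n_steps):
        k = rng.integers(n)
        trial = (pos[k] + rng.uniform(-delta, delta, 2)) % 1.0
        if not overlaps(k, trial):
            pos[k] = trial
    return pos
```

Each accepted move plays the role of one step of the random walk inside the feasible region of non-overlapping configurations, which is the region the paper relates to sampling from a polytope.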

  2. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)

  3. Cluster Monte Carlo method for nuclear criticality safety calculation

    International Nuclear Information System (INIS)

    Pei Lucheng

    1984-01-01

    One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to calculating nuclear criticality safety. In such problems the source iteration cost is to be reduced as much as possible, or no source iteration is needed at all; they all belong to the class of fair source game problems, among which the optimal source game requires no source iteration whatsoever. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further.

  4. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Brown, F.

    2007-01-01

    Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k eff ) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
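
    The essence of Wielandt acceleration can be illustrated with a deterministic matrix analogue (this is not the MCNP5 implementation, which shifts part of the fission source within each cycle, but it exhibits the same dominance-ratio reduction): iterating with (k_w I - A)^{-1} instead of A converges with ratio (k_w - k_1)/(k_w - k_2) rather than k_2/k_1, which is far smaller when the shift k_w is chosen slightly above the fundamental eigenvalue. A sketch:

```python
import numpy as np

def power_iteration(A, shift=None, iters=100, tol=1e-12):
    """Plain power iteration on A, or Wielandt-shifted iteration when
    `shift` is given: x <- (shift*I - A)^{-1} x converges to the
    eigenvalue of A closest to `shift` with a reduced dominance ratio.
    Returns (eigenvalue estimate, iterations used)."""
    n = A.shape[0]
    x = np.arange(1, n + 1, dtype=float)
    x = x / np.linalg.norm(x)
    k = np.inf
    for i in range(iters):
        if shift is None:
            y = A @ x
        else:
            y = np.linalg.solve(shift * np.eye(n) - A, x)
        x_new = y / np.linalg.norm(y)
        k_new = x_new @ A @ x_new     # Rayleigh quotient estimate for A
        if abs(k_new - k) < tol:
            return k_new, i + 1
        x, k = x_new, k_new
    return k, iters
```

For a symmetric matrix with eigenvalues 1.0 and 0.95 (dominance ratio 0.95), a shift of k_w = 1.05 maps the spectrum to 20 and 10, so the effective ratio drops to 0.5 and convergence takes an order of magnitude fewer iterations.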

  5. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost

  6. Applications of the Monte Carlo method in radiation protection

    International Nuclear Information System (INIS)

    Kulkarni, R.N.; Prasad, M.A.

    1999-01-01

    This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection; an exhaustive review has not been attempted. The special advantage of the Monte Carlo method is first brought out. The fundamentals of the Monte Carlo method are next explained in brief, with special reference to two applications in radiation protection. Some current applications are briefly reported at the end as examples: medical radiation physics, microdosimetry, calculation of thermoluminescence intensity, and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)

  7. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

    A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to finite temperatures.

  8. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

    The paper reports on the capabilities of Monte Carlo scintillation light transport code Optix, which is an extended version of previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.

  9. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  10. Novel imaging and quality assurance techniques for ion beam therapy a Monte Carlo study

    CERN Document Server

    Rinaldi, I; Jäkel, O; Mairani, A; Parodi, K

    2010-01-01

    Ion beams exhibit a finite and well defined range in matter together with an “inverted” depth-dose profile, the so-called Bragg peak. These favourable physical properties may enable superior tumour-dose conformality for high precision radiation therapy. On the other hand, they introduce the issue of sensitivity to range uncertainties in ion beam therapy. Although these uncertainties are typically taken into account when planning the treatment, correct delivery of the intended ion beam range has to be assured to prevent undesired underdosage of the tumour or overdosage of critical structures outside the target volume. Therefore, it is necessary to define dedicated Quality Assurance procedures to enable in-vivo range verification before or during therapeutic irradiation. For these purposes, Monte Carlo transport codes are very useful tools to support the development of novel imaging modalities for ion beam therapy. In the present work, we present calculations performed with the FLUKA Monte Carlo code and pr...

  11. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    Science.gov (United States)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  12. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

    A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point-source/point-detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)

  13. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of correlated and uncorrelated components. The uncorrelated component is described by a series of Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method to solve the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to the one that was developed by the author of this paper to calculate the neutron noise caused by cross section perturbations. The newly-developed Monte Carlo algorithm is benchmarked against the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly-developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those obtained with the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed.

  14. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  15. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
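
    The linearization of the two-body part of the Hamiltonian mentioned above rests on the Hubbard-Stratonovich identity, which trades a quadratic (two-body) operator in the propagator for a one-body operator coupled to an auxiliary field. In its simplest scalar form (a sketch, for a single operator Ô and coupling λ > 0):

```latex
e^{\frac{1}{2}\lambda \hat{O}^{2}}
  = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}
    d\sigma \, e^{-\frac{1}{2}\sigma^{2} + \sqrt{\lambda}\,\sigma \hat{O}}
```

Applying this identity at each imaginary-time slice turns the many-body propagator into an integral over auxiliary-field configurations of one-body propagators, which the Monte Carlo sampling then evaluates stochastically.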

  16. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  17. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case. (in French)

  18. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    Science.gov (United States)

    Höök, L. J.; Johnson, T.; Hellsten, T.

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a statistical convergence rate close to O(N^{-1}), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for time steps up to 2^14.
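
    The scheme described above (Euler-Maruyama realizations driven by quasi-random numbers) can be sketched for a scalar Ornstein-Uhlenbeck process standing in for slowing-down dynamics (the SDE, parameter values, and function name are illustrative assumptions, not the paper's model). One scrambled-Sobol dimension is used per time step, mapped to normal increments through the inverse CDF:

```python
import numpy as np
from scipy.stats import norm, qmc

def ou_mean_rqmc(theta=1.0, sigma=0.5, x0=1.0, T=1.0, n_steps=64,
                 n_paths=2 ** 14, seed=7):
    """Estimate E[X_T] for the Ornstein-Uhlenbeck SDE
        dX = -theta * X dt + sigma dW,   X_0 = x0,
    with an Euler-Maruyama scheme whose Brownian increments come from a
    scrambled Sobol sequence (randomized quasi-Monte Carlo)."""
    dt = T / n_steps
    sob = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
    u = np.clip(sob.random(n_paths), 1e-12, 1 - 1e-12)   # (n_paths, n_steps)
    dw = norm.ppf(u) * np.sqrt(dt)                       # normal increments
    x = np.full(n_paths, x0)
    for i in range(n_steps):
        x = x + (-theta * x) * dt + sigma * dw[:, i]
    return x.mean()          # exact value is x0 * exp(-theta * T)
```

Scrambling (randomization) keeps the estimator unbiased and allows error estimation from independent replicates, while retaining the faster-than-N^{-1/2} convergence of the underlying low-discrepancy points.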

  19. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications

  20. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^{-3}) using a single-level version of the adaptive algorithm to O((TOL^{-1} log(TOL))^2).
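
    The uniform-discretization method of Giles that this work generalizes can be sketched compactly: write E[g(X_T)] as a telescoping sum over levels of refinement and estimate each correction with fine and coarse paths coupled through shared Brownian increments. A minimal sketch for geometric Brownian motion with fixed per-level sample counts (the adaptive, path-dependent time stepping of the paper is not reproduced here):

```python
import numpy as np

def mlmc_gbm_mean(s0=1.0, r=0.05, vol=0.2, T=1.0, L=4,
                  n_samples=(40000, 20000, 10000, 5000, 2500), seed=3):
    """Multilevel forward Euler Monte Carlo estimate of E[S_T] for
    geometric Brownian motion  dS = r S dt + vol S dW.  Level l uses
    2**l uniform time steps; coarse paths reuse the fine Brownian
    increments (summed in pairs), which makes the level corrections small."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        n = n_samples[l]
        nf = 2 ** l                          # fine steps on level l
        dt_f = T / nf
        dw = rng.standard_normal((n, nf)) * np.sqrt(dt_f)
        sf = np.full(n, s0)
        for i in range(nf):                  # fine Euler path
            sf = sf * (1.0 + r * dt_f + vol * dw[:, i])
        if l == 0:
            est += sf.mean()                 # base level: plain estimator
        else:
            dt_c = T / (nf // 2)
            dw_c = dw[:, 0::2] + dw[:, 1::2]     # coupled coarse increments
            sc = np.full(n, s0)
            for i in range(nf // 2):         # coarse Euler path
                sc = sc * (1.0 + r * dt_c + vol * dw_c[:, i])
            est += (sf - sc).mean()          # level correction
    return est              # exact value is s0 * exp(r * T)
```

Because Var[fine - coarse] shrinks with the level's time step, most samples can be spent on the cheap coarse levels; this is the source of the cost reduction from O(TOL^{-3}) toward O(TOL^{-2}) up to logarithmic factors.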

  1. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  2. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer programs that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)

  3. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  4. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  5. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a statistical sampling method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.

  6. Characterization of decommissioned reactor internals: Monte Carlo analysis technique

    International Nuclear Information System (INIS)

    Reid, B.D.; Love, E.F.; Luksic, A.T.

    1993-03-01

    This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty

  7. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted into a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  8. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically, and how to make it available widely to the medical physics community.

  9. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, and finite volume solvers.
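
    The multilevel idea can be sketched in a few lines: the coarsest, cheapest model level absorbs most of the sampling effort, while coupled level differences are averaged with far fewer samples. The following is a minimal illustration with a hypothetical scalar quantity of interest carrying a 2^-level discretization bias, not the authors' pore-scale solver:

```python
import random

def qoi(x, level):
    """Toy quantity of interest: exact value x**2 plus a hypothetical
    discretization bias proportional to 2**-level (stand-in for a solver)."""
    return x * x + 2.0 ** -level * x

def mlmc(max_level, n0=40000, seed=1):
    """Multilevel Monte Carlo estimate of E[qoi] at the finest level."""
    rng = random.Random(seed)
    # Level 0: plain Monte Carlo on the coarsest, cheapest model.
    est = sum(qoi(rng.random(), 0) for _ in range(n0)) / n0
    # Correction levels: average P_l - P_{l-1} evaluated on the SAME sample,
    # so each difference has small variance and needs far fewer realizations.
    for lev in range(1, max_level + 1):
        n = max(n0 // 4 ** lev, 100)
        diffs = 0.0
        for _ in range(n):
            x = rng.random()
            diffs += qoi(x, lev) - qoi(x, lev - 1)
        est += diffs / n
    return est
```

    The sample counts `n0 // 4**lev` mimic the optimal multilevel allocation when the level-difference variance decays geometrically; a production code would estimate those variances on the fly.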

  10. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the latter chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  11. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  12. Monte Carlo calculation of correction factors for radionuclide neutron source emission rate measurement by manganese bath method

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang

    2014-01-01

    The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)

  13. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with relative errors smaller than 5%. At higher accuracy levels this factor increases, though the effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
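
    The monotonicity idea admits a short sketch. Assuming a hypothetical limit-state function g that is decreasing in every input (not the authors' finite element model), once a point is known to fail, every componentwise-larger point must also fail, so the expensive model call can be skipped:

```python
import random

def dynamic_bounds_mc(g, dim, n, rng):
    """Monte Carlo with dynamic bounds for a monotonically DECREASING
    limit state g (failure when g < 0): if g(u) < 0, then any v >= u
    componentwise also fails; if g(u) >= 0, any v <= u is also safe."""
    fail_pts, safe_pts = [], []
    failures = calls = 0
    for _ in range(n):
        u = [rng.random() for _ in range(dim)]
        if any(all(u[i] >= f[i] for i in range(dim)) for f in fail_pts):
            failures += 1          # dominated by a known failure point: skip g
        elif any(all(u[i] <= s[i] for i in range(dim)) for s in safe_pts):
            pass                   # dominated by a known safe point: skip g
        else:
            calls += 1             # only here is the expensive model evaluated
            if g(u) < 0:
                failures += 1
                fail_pts.append(u)
            else:
                safe_pts.append(u)
    return failures / n, calls
```

    The estimate is identical to ordinary Monte Carlo; only the number of model evaluations shrinks, which is the cost reduction the abstract quantifies.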

  14. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
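
    The MLEM update at the core of such a reconstruction is compact enough to write down. A minimal sketch with a tiny hypothetical system matrix follows; in the paper the matrix comes from GATE Monte Carlo simulations, here it is just a toy:

```python
def mlem(A, y, n_iter, x0=None):
    """MLEM iterations for y ≈ A x with nonnegative x: the multiplicative
    update x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A x)_i, where
    s_j = sum_i A_ij is the sensitivity of voxel j."""
    m, n = len(A), len(A[0])
    x = list(x0) if x0 else [1.0] * n
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x
```

    The update preserves nonnegativity automatically, which is one reason MLEM suppresses the negative-lobe artifacts of correlation decoding.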

  15. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp 713-739.

  16. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is discussed. One of the unique features of the benchmark exercise is the possibility to upload results from participants at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)
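
    The 10 to 100 billion histories figure reflects the 1/sqrt(N) convergence of any Monte Carlo tally: halving the standard deviation costs four times the histories. A minimal sketch with a hypothetical per-history score (not the benchmark's actual tallies):

```python
import math
import random

def tally(n, rng):
    """Mean and standard error of a toy per-history score, a hypothetical
    stand-in for the power deposited in one fuel zone per history."""
    scores = [rng.expovariate(1.0) for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, math.sqrt(var / n)   # standard error shrinks as 1/sqrt(n)
```

    With thousands of small zones each needing percent-level precision, this scaling is what drives the history counts into the tens of billions.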

  17. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Martin, William R.; Petrovic, Bojan

    2011-01-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is discussed. One of the unique features of the benchmark exercise is the possibility to upload results from participants at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)

  18. Dosimetric effect of statistics noise of the TC image in the simulation Monte Carlo of radiotherapy treatments

    International Nuclear Information System (INIS)

    Laliena Bielsa, V.; Jimenez Albericio, F. J.; Gandia Martinez, A.; Font Gomez, J. A.; Mengual Gil, M. A.; Andres Redondo, M. M.

    2013-01-01

    This source of uncertainty is not exclusive to the Monte Carlo method; it will be present in any algorithm that takes the correction for heterogeneity into account. Although we expect the uncertainty described above to be small, the objective of this work is to quantify it as a function of the CT study. (Author)

  19. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    International Nuclear Information System (INIS)

    Höök, L J; Johnson, T; Hellsten, T

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a weak statistical convergence rate close to O(N^-1), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for up to 2^14 time steps. (paper)
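
    The deterministic point sets behind quasi-Monte Carlo are easy to construct. A minimal one-dimensional sketch using the base-2 van der Corput radical-inverse sequence (an illustration only, not the thermalization solver; the randomized variant in the paper additionally scrambles such sequences to recover error estimates):

```python
import random

def van_der_corput(i, base=2):
    """i-th radical-inverse point: reflect the base-b digits of i about
    the radix point to obtain a low-discrepancy point in [0, 1)."""
    v, denom = 0.0, 1.0
    while i > 0:
        i, d = divmod(i, base)
        denom *= base
        v += d / denom
    return v

def integrate(points):
    """Sample-mean estimate of the integral of x**2 over [0, 1] (exact: 1/3)."""
    return sum(x * x for x in points) / len(points)

# Low-discrepancy points fill [0, 1) far more evenly than pseudo-random ones,
# so the integration error decays close to O(1/N) instead of O(1/sqrt(N)).
qmc_est = integrate([van_der_corput(i) for i in range(1, 4097)])
rng = random.Random(0)
mc_est = integrate([rng.random() for _ in range(4096)])
```
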

  20. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
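
    The Metropolis sampling scheme mentioned above is compact enough to sketch. Here it samples a simple continuous density rather than shell-model configurations, a hypothetical stand-in; the key point is that only density ratios are needed, so the intractable total state count never has to be computed:

```python
import math
import random

def metropolis(log_density, x0, steps, step_size, rng):
    """Random-walk Metropolis: propose x' = x + U(-s, s) and accept with
    probability min(1, p(x')/p(x)). Normalization of p cancels in the ratio."""
    x, logp = x0, log_density(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.uniform(-step_size, step_size)
        logq = log_density(xp)
        if math.log(rng.random()) < logq - logp:
            x, logp = xp, logq   # accept the proposed move
        chain.append(x)          # a rejected move repeats the current state
    return chain
```
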

  1. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  2. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multi-group Monte Carlo code for particle transport, MORSE, is modified for high-performance computing on the Monte Carlo machine Monte-4; the method and the results are described. Monte-4 was specially developed to realize high-performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance by vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  3. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold; above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  4. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    International Nuclear Information System (INIS)

    Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Verdu, Gumersindo; Rodenas, Jose; Ortiz, J.; Pereira, Claubia

    2013-01-01

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of HPGe detectors using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and the loss of intrinsic properties in aging detectors. Some parameters, such as the active volume or the Ge dead layer thickness, are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained, and the efficiency curve can be calculated from net peak areas, taking into account the certified activity of the source. In order to avoid errors in the net area calculation, the simulated PHD is treated with the GammaVision software. In addition, it is proposed to use the Noether-Wilks formula to perform an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low-energy range of the efficiency curve. In the high-energy range (from 300 to 1800 keV), the main contribution to the absolute uncertainty is due to variations in the active volume. (author)

  5. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Verdu, Gumersindo; Rodenas, Jose, E-mail: sergalbe@upv.es [Universitat Politecnica de Valencia, Valencia, (Spain). Instituto de Seguridad Industrial, Radiofisica y Medioambiental (ISIRYM); Ortiz, J. [Universitat Politecnica de Valencia, Valencia, (Spain). Servicio de Radiaciones. Lab. de Radiactividad Ambiental; Pereira, Claubia [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2013-07-01

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of HPGe detectors using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and the loss of intrinsic properties in aging detectors. Some parameters, such as the active volume or the Ge dead layer thickness, are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained, and the efficiency curve can be calculated from net peak areas, taking into account the certified activity of the source. In order to avoid errors in the net area calculation, the simulated PHD is treated with the GammaVision software. In addition, it is proposed to use the Noether-Wilks formula to perform an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low-energy range of the efficiency curve. In the high-energy range (from 300 to 1800 keV), the main contribution to the absolute uncertainty is due to variations in the active volume. (author)

  6. Modified Monte Carlo procedure for particle transport problems

    International Nuclear Information System (INIS)

    Matthes, W.

    1978-01-01

    The simulation of photon transport in the atmosphere with the Monte Carlo method forms part of the EURASEP programme. The specifications of the problems posed were such that the direct application of the analogue Monte Carlo method was not feasible. For this reason the standard Monte Carlo procedure was modified by introducing additional, properly weighted branchings at each collision and transport process in a photon history. This modified Monte Carlo procedure leads to a clear and logical separation of the essential parts of a problem and offers large flexibility for variance reducing techniques. More complex problems, as foreseen in the EURASEP programme (e.g. clouds in the atmosphere, a rough ocean surface and the chlorophyll distribution in the ocean), can be handled by recoding some subroutines. This collision- and transport-splitting procedure can, of course, be performed differently in different space and energy regions. It is applied here only to a homogeneous problem

  7. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies with the motivation of reducing costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.

  8. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the most simple and accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed.
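
    The low-probability problem is exactly what importance sampling addresses. A minimal sketch estimating the normal tail probability P(X > 4) ≈ 3.2e-5, an event that analog sampling essentially never hits at practical sample sizes:

```python
import math
import random

def importance_mc(n, rng, shift=4.0):
    """Sample from the shifted density N(shift, 1) so the rare region is hit
    about half the time, then reweight each hit by the likelihood ratio of
    N(0, 1) to N(shift, 1); the estimator of P(X > 4) remains unbiased."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > 4.0:
            # likelihood ratio phi(x) / phi(x - shift) = exp(-shift*x + shift^2/2)
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n
```

    With the same 10^5 samples, analog Monte Carlo would expect only about three hits and a relative error near 60%, while the shifted estimator reaches percent-level accuracy.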

  9. Data Analysis Recipes: Using Markov Chain Monte Carlo

    Science.gov (United States)

    Hogg, David W.; Foreman-Mackey, Daniel

    2018-05-01

    Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.

  10. Suppression of the initial transient in Monte Carlo criticality simulations

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests representative of criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies from these tests are selected and used to improve industrial Monte Carlo criticality calculations. (author)

  11. Monte Carlo criticality analysis for dissolvers with neutron poison

    International Nuclear Information System (INIS)

    Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.

    1987-01-01

    Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined using the maximum cross-section approach. A set of related effective multiplication factors (K eff ) is calculated by the Monte Carlo method for the three cases. The numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)

  12. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  13. Monte Carlo dose distributions for radiosurgery

    International Nuclear Information System (INIS)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.

    2001-01-01

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, Dose-Volume Histograms have been calculated for the target and adjacent organs at risk. (orig.)

  14. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  15. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
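The angular biasing mentioned above can be sketched in one dimension: sample the direction cosine from a density tilted toward the capsule (here a linear density q(mu) = (1 + a*mu)/2 on [-1, 1], an assumed form for illustration only) and multiply each tally by the ratio of the analog to the biased density, which keeps the estimate unbiased:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_biased_mu(u, a):
    # Inverse CDF of the biased angular density q(mu) = (1 + a*mu)/2 on [-1, 1]
    return (-0.5 + np.sqrt(0.25 - a * (0.5 - a / 4.0 - u))) / (a / 2.0)

a = 0.8                            # biasing strength toward mu = +1 (the capsule)
n = 200_000
mu = sample_biased_mu(rng.random(n), a)
w = 1.0 / (1.0 + a * mu)           # weight = analog density (1/2) / biased density

# Tally: probability that an isotropically emitted particle has mu > 0.9;
# the analog answer is (1 - 0.9)/2 = 0.05
est = np.mean(w * (mu > 0.9))
exact = 0.05
```

More samples land near mu = 1 than in an analog run, yet the weighted estimate still converges to the analog expectation; that is the mechanism behind the efficiency gain the abstract reports.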

  16. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, George B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  17. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes; Quasi-Monte Carlo Methoden fuer Optimierungsmodelle der Energiewirtschaft mit Preis- und Last-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Leoevey, H.; Roemisch, W. [Humboldt-Univ., Berlin (Germany)

    2015-07-01

    We discuss progress in quasi-Monte Carlo methods for the numerical calculation of integrals and expected values, and explain why these methods are more efficient than the classic Monte Carlo methods. Quasi-Monte Carlo methods prove particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi-Monte Carlo methods are therefore very promising for such models.

  18. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  19. PEPSI: a Monte Carlo generator for polarized leptoproduction

    International Nuclear Information System (INIS)

    Mankiewicz, L.

    1992-01-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions) a Monte Carlo program for the polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)

  20. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  1. Use of Monte Carlo modeling approach for evaluating risk and environmental compliance

    International Nuclear Information System (INIS)

    Higley, K.A.; Strenge, D.L.

    1988-09-01

    Evaluating compliance with environmental regulations, specifically those regulations that pertain to human exposure, can be a difficult task. Historically, maximum individual or worst-case exposures have been calculated as a basis for evaluating risk or compliance with such regulations. However, these calculations may significantly overestimate exposure and may not provide a clear understanding of the uncertainty in the analysis. The use of Monte Carlo modeling techniques can provide a better understanding of the potential range of exposures and the likelihood of high (worst-case) exposures. This paper compares the results of standard exposure estimation techniques with the Monte Carlo modeling approach. The authors discuss the potential application of this approach for demonstrating regulatory compliance, along with the strengths and weaknesses of the approach. Suggestions on implementing this method as a routine tool in exposure and risk analyses are also presented. 16 refs., 5 tabs
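A toy version of the comparison made above: a worst-case estimate multiplies extreme parameter values, while a Monte Carlo run samples parameter distributions and reports percentiles of the resulting exposure. The model and all distributions below are invented for illustration, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical exposure model: dose = concentration * intake rate * dose factor
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # contaminant concentration
intake = rng.uniform(1.0, 3.0, size=n)               # intake rate
factor = rng.triangular(0.5, 1.0, 2.0, size=n)       # dose conversion factor
dose = conc * intake * factor

# Worst case: pair an extreme (~3-sigma) concentration with maximal intake and factor
worst_case = np.exp(3 * 0.5) * 3.0 * 2.0
p95 = np.percentile(dose, 95)    # Monte Carlo view: a high but plausible exposure
```

With these inputs the 95th percentile sits far below the worst-case product of extremes, which is exactly the gap between the two estimation techniques that the abstract describes.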

  2. MUSIC -- An Automated Scan for Deviations between Data and Monte Carlo Simulation

    CERN Document Server

    CMS Collaboration

    2008-01-01

    We present a model independent analysis approach, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Due to the minimal theoretical bias this approach is sensitive to a variety of models, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.

  3. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Science.gov (United States)

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  4. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
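The fission matrix idea used above can be sketched as a power iteration on a small matrix F whose entry F[i, j] is the expected number of fission neutrons born in region i per fission neutron started in region j; the dominant eigenvalue approximates k_eff and the eigenvector the converged fission source. The 3x3 matrix below is invented for illustration:

```python
import numpy as np

# Toy 3-region fission matrix (invented values): F[i, j] is the expected number
# of fission neutrons born in region i per fission neutron started in region j.
F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.55, 0.20],
              [0.05, 0.20, 0.60]])

def power_iteration(F, tol=1e-12, max_iter=10_000):
    s = np.ones(F.shape[0]) / F.shape[0]   # initial guess for the fission source
    k = 1.0
    for _ in range(max_iter):
        s_next = F @ s
        k_next = s_next.sum()              # neutron production ratio -> k_eff
        s_next /= k_next                   # renormalize the source each generation
        if abs(k_next - k) < tol:
            return k_next, s_next
        k, s = k_next, s_next
    return k, s

k_eff, source = power_iteration(F)
```

In an accelerated Monte Carlo calculation the matrix entries are themselves tallied from particle histories; the deterministic eigenvector then provides a better-converged source guess than the raw generation-to-generation iteration.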

  5. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  6. Study on random number generator in Monte Carlo code

    International Nuclear Information System (INIS)

    Oya, Kentaro; Kitada, Takanori; Tanaka, Shinichi

    2011-01-01

    The Monte Carlo code uses a sequence of pseudo-random numbers from a random number generator (RNG) to simulate particle histories. A pseudo-random sequence has a period that depends on the generation method, and the period should be long enough not to be exhausted during one Monte Carlo calculation; otherwise the correctness of the results, especially of their standard deviations, is compromised. The linear congruential generator (LCG) is widely used as a Monte Carlo RNG, but its period is not very long in view of the growing number of simulation histories made possible by the remarkable enhancement of computer performance. Recently, many kinds of RNG have been developed, and some of their features surpass those of the LCG. In this study, we investigate an appropriate RNG for a Monte Carlo code as an alternative to the LCG, especially for the case of enormous numbers of histories. It is found that xorshift has desirable features compared with the LCG: a larger period, a comparable speed of random number generation, better randomness, and good applicability to parallel calculation. (author)
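Both generator families discussed above are only a few lines each. A sketch, with Marsaglia's xorshift64 (period 2^64 - 1 for nonzero seeds) alongside a 64-bit LCG using Knuth's MMIX constants; the seed value is arbitrary:

```python
MASK64 = (1 << 64) - 1

def xorshift64(state):
    """One step of Marsaglia's xorshift64; period 2**64 - 1 for nonzero seeds."""
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

def lcg64(state):
    """One step of a 64-bit linear congruential generator (Knuth's MMIX constants)."""
    return (6364136223846793005 * state + 1442695040888963407) & MASK64

# Draw a few uniforms in [0, 1) from the xorshift state sequence
s, xs = 0x123456789ABCDEF, []
for _ in range(5):
    s = xorshift64(s)
    xs.append(s / 2**64)
```

Note the one structural difference relevant to the abstract: zero is a fixed point of xorshift (the seed must be nonzero), while the period of a well-chosen 64-bit LCG is at most 2^64, a sequence that long histories can exhaust.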

  7. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
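The multilevel idea above can be sketched on a toy problem: write E[P_L] as E[P_0] plus a telescoping sum of corrections E[P_l - P_{l-1}], and spend many samples on the cheap coarse level and few on the expensive fine ones. Here the "levels" are truncated Taylor approximations of exp(W) for W ~ N(0, 1), an invented stand-in for the coarse-to-fine grid hierarchy of the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def P(w, level):
    """Level-l approximation of exp(w): Taylor series truncated at order
    2*level + 2 (an invented coarse-to-fine hierarchy for illustration)."""
    return sum(w**k / math.factorial(k) for k in range(2 * level + 3))

levels = 4
N = [100_000 // 4**l for l in range(levels)]   # many coarse samples, few fine ones

estimate = 0.0
for l in range(levels):
    w = rng.standard_normal(N[l])
    if l == 0:
        estimate += P(w, 0).mean()                    # coarse-level mean
    else:
        estimate += (P(w, l) - P(w, l - 1)).mean()    # telescoping correction

exact = math.exp(0.5)                          # E[exp(W)] for W ~ N(0, 1)
```

Because each correction P_l - P_{l-1} is small and cheap to average, the combined estimator reaches fine-level accuracy at a fraction of the cost of sampling the finest level directly, which is the trade-off the GMsFEM/MLMC framework exploits.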

  8. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  9. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  10. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of neutron field for Boron Neutron Capture Therapy and optimization of a filter for a beam tube for several purposes. The time necessary for Monte Carlo calculations - the highest boundary for its intensive utilisation - is being over-passed with faster and cheaper computers. (author)

  11. Improved Monte Carlo - Perturbation Method For Estimation Of Control Rod Worths In A Research Reactor

    International Nuclear Information System (INIS)

    Kalcheva, Silva; Koonen, Edgar

    2008-01-01

    A hybrid method dedicated to improving the experimental technique for estimation of control rod worths in a research reactor is presented. The method uses a combination of the Monte Carlo technique and perturbation theory. The perturbation theory is used to obtain the relation between the relative rod efficiency and the buckling of the reactor with a partially inserted rod. A series of coefficients describing the axial absorption profile is used to correct the buckling for an arbitrary composite rod with a complicated burn-up irradiation history. These coefficients have to be determined by experiment or by some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross sections obtained from detailed Monte Carlo calculations with MCNPX 2.6.F of the axial burn-up profile during the control rod's life. The method is validated against measurements of control rod worths at the BR2 reactor. A comparison with direct Monte Carlo evaluations of control rod worths is also presented. The uncertainties arising from the approximations used in the presented hybrid method are discussed. (authors)

  12. A radiating shock evaluated using Implicit Monte Carlo Diffusion

    International Nuclear Information System (INIS)

    Cleveland, M.; Gentile, N.

    2013-01-01

    Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatially discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark is used to verify our implementation of the coupled system of equations. (authors)

  13. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
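The computation of definite integrals by statistical trials, the book's opening application, fits in a few lines; the integrand and interval below are arbitrary examples, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] by n statistical trials,
    returning the estimate and its one-standard-error uncertainty."""
    x = rng.uniform(a, b, n)
    y = (b - a) * f(x)
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

est, err = mc_integrate(np.sin, 0.0, np.pi, 100_000)   # exact value is 2
```

The reported standard error shrinks like n^(-1/2), the characteristic convergence rate of the method of statistical trials.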

  14. Applicability of quasi-Monte Carlo for lattice systems

    International Nuclear Information System (INIS)

    Ammon, Andreas; Deutsches Elektronen-Synchrotron; Hartung, Tobias; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Mueller-Preussker, Michael

    2013-11-01

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
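The scaling difference can be seen in one dimension by replacing pseudo-random points with a deterministic low-discrepancy sequence (here a van der Corput sequence in base 2, a standard construction not specific to this project):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, rem = divmod(k, base)
            q += rem / denom
        seq[i] = q
    return seq

rng = np.random.default_rng(3)
f = lambda x: x**2          # smooth test integrand; exact integral on [0, 1] is 1/3
n = 4096

mc_err = abs(f(rng.random(n)).mean() - 1 / 3)        # pseudo-random, ~N^(-1/2)
qmc_err = abs(f(van_der_corput(n)).mean() - 1 / 3)   # low-discrepancy, ~N^(-1)
```

For smooth one-dimensional integrands like this, the quasi-Monte Carlo error is typically one to two orders of magnitude below the pseudo-random error at the same sample count.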

  15. Applicability of quasi-Monte Carlo for lattice systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hartung, Tobias [King' s College London (United Kingdom). Dept. of Mathematics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leovey, Hernan; Griewank, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Mathematics; Mueller-Preussker, Michael [Berlin Humboldt-Univ. (Germany). Dept. of Physics

    2013-11-15

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.

  16. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
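Weight-window biasing itself reduces to two rules: split a particle whose weight lies above the window, and play Russian roulette on one below it, preserving the expected total weight either way. A sketch with invented window bounds (the adjoint-driven choice of the window is the hard part automated in the report):

```python
import numpy as np

rng = np.random.default_rng(5)

def apply_weight_window(weight, w_low, w_high):
    """Split above the window, Russian-roulette below it; the expected total
    weight is preserved either way, so the tally stays unbiased."""
    if weight > w_high:
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n                  # split into n lighter copies
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)       # survivors land inside the window
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                                # killed by roulette
    return [weight]                              # already inside the window

# Roulette preserves expected weight: many low-weight particles, one window
totals = [sum(apply_weight_window(0.01, 0.1, 1.0)) for _ in range(200_000)]
avg_weight = sum(totals) / len(totals)           # should be close to 0.01
```

Splitting concentrates simulation effort where the adjoint flux says particles matter; roulette cheaply disposes of unimportant histories without biasing the mean.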

  17. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  18. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  19. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy

    International Nuclear Information System (INIS)

    Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn

    2008-01-01

    The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system. 
Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc

  20. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described.

  1. MUSiC - A general search for deviations from Monte Carlo predictions in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Biallass, Philipp A, E-mail: biallass@cern.c [Physics Institute IIIA, RWTH Aachen, Physikzentrum, 52056 Aachen (Germany)

    2009-06-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  2. MUSiC A General Search for Deviations from Monte Carlo Predictions in CMS

    CERN Document Server

    Biallass, Philipp

    2009-01-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  3. MUSiC - A general search for deviations from Monte Carlo predictions in CMS

    International Nuclear Information System (INIS)

    Biallass, Philipp A

    2009-01-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  4. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time- dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
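
    The recipe described here, feeding a scalar-flux tally into the analytical solution of the depletion equations, can be illustrated for a single absorbing nuclide with no decay chain. The function name and all numerical values below are illustrative assumptions, not data from the paper:

```python
import math

def deplete(n0, sigma_a, flux, dt):
    """Analytical solution of dN/dt = -sigma_a * phi * N over one timestep.

    n0       initial number density (atoms/cm^3)
    sigma_a  microscopic absorption cross section (cm^2)
    flux     scalar flux estimate from the Monte Carlo tally (n/cm^2/s)
    dt       timestep length (s)
    """
    return n0 * math.exp(-sigma_a * flux * dt)

# Toy loop: feed a (hypothetical) flux estimate into the analytical solution
# each timestep; unlike finite-difference updates, the density can never go
# negative, which is the property the abstract highlights.
n = 1.0e22
for _ in range(10):
    n = deplete(n, 1.0e-24, 1.0e14, 3600.0)
print(n)
```

    The exponential form is what eliminates negative densities: any flux estimate, however noisy, still yields a positive density.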

  5. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given level of accuracy. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
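
    The telescoping-sum idea with coupled resolutions can be sketched on a toy problem: estimating E[S(1)] for geometric Brownian motion, where the resolution parameter is the Euler timestep and each coarse level reuses the fine level's Gaussian draws pairwise. All parameter choices are illustrative assumptions:

```python
import random, math

def euler_gbm(steps, draws):
    # Euler discretisation of dS = mu*S dt + sig*S dW on [0, 1],
    # driven by a fixed list of standard-normal draws (the coupling device).
    mu, sig, s = 0.05, 0.2, 1.0
    dt = 1.0 / steps
    for z in draws[:steps]:
        s += mu * s * dt + sig * s * math.sqrt(dt) * z
    return s

def mlmc(levels, samples_per_level, seed=0):
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        fine = 2 ** l
        acc = 0.0
        for _ in range(samples_per_level):
            z_fine = [rng.gauss(0, 1) for _ in range(fine)]
            if l == 0:
                acc += euler_gbm(fine, z_fine)
            else:
                # Coarse level reuses the fine draws: two fine Brownian
                # increments sum to one coarse increment.
                z_coarse = [(z_fine[2*i] + z_fine[2*i+1]) / math.sqrt(2)
                            for i in range(fine // 2)]
                acc += euler_gbm(fine, z_fine) - euler_gbm(fine // 2, z_coarse)
        est += acc / samples_per_level   # telescoping sum over levels
    return est

est = mlmc(levels=4, samples_per_level=2000)
print(est)   # exact answer is exp(0.05), about 1.051
```

    Because the coupled fine/coarse differences have small variance, far fewer samples are needed at the expensive fine levels than a single-level estimator would require.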

  6. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics

  7. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given level of accuracy. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.

  8. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: J.E.Hoogenboom@tudelft.nl [Delft University of Technology (Netherlands); Ivanov, Aleksandar; Sanchez, Victor, E-mail: Aleksandar.Ivanov@kit.edu, E-mail: Victor.Sanchez@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Diop, Cheikh, E-mail: Cheikh.Diop@cea.fr [CEA/DEN/DANS/DM2S/SERMA, Commissariat a l' Energie Atomique, Gif-sur-Yvette (France)

    2011-07-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)
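
    The iteration that such a Python driver script performs can be sketched with trivial stand-in functions in place of the real executables. In the actual scheme the two calls below would be subprocess invocations of a Monte Carlo code (MCNP/TRIPOLI4) and a thermal-hydraulics code (FLICA4/SubChanFlow) with merged master input files; the feedback coefficients here are invented for illustration:

```python
# Hypothetical stand-ins for the two coupled executables.
def run_monte_carlo(fuel_temp):
    # Power falls slightly as fuel heats up (crude Doppler-feedback stand-in).
    return 100.0 - 0.01 * (fuel_temp - 900.0)

def run_thermal_hydraulics(power):
    # Fuel temperature responds linearly to power (stand-in for a TH solve).
    return 600.0 + 3.0 * power

def picard_iterate(temp=800.0, tol=1e-6, max_iter=50):
    """Fixed-point (Picard) iteration between the two codes, as the driver
    script would perform it, stopping when the temperature field converges."""
    for i in range(1, max_iter + 1):
        power = run_monte_carlo(temp)
        new_temp = run_thermal_hydraulics(power)
        if abs(new_temp - temp) < tol:
            return power, new_temp, i
        temp = new_temp
    return power, temp, max_iter

power, temp, iters = picard_iterate()
print(power, temp, iters)
```

    With only original executables available, this outer-loop structure (plus the two input-file conversion programs) is the entire coupling layer.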

  9. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Ivanov, Aleksandar; Sanchez, Victor; Diop, Cheikh

    2011-01-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)

  10. Parallel MCNP Monte Carlo transport calculations with MPI

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1996-01-01

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented message passing in the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
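
    The independent-histories parallelism this record relies on can be sketched without any MPI machinery at all: each hypothetical "rank" runs its own independently seeded stream and the tallies are summed at the end, shown here on a trivial dart-throwing estimate of pi. An actual mpi4py version would replace the loop over ranks with per-process execution and a reduce; this serial sketch is an assumption-free stand-in:

```python
import random, math

def simulate_histories(rank, n_histories):
    # Each "rank" gets an independent stream, as in a real MPI run where
    # the seed would be derived from the process rank.
    rng = random.Random(1234 + rank)
    hits = 0
    for _ in range(n_histories):
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0:
            hits += 1
    return hits

# Master tally: 4 "ranks" x 25000 histories each; because histories are
# independent, the combined estimate is identical in distribution to one
# long serial run, which is why near-linear speedup is possible.
ranks, per_rank = 4, 25_000
total_hits = sum(simulate_histories(r, per_rank) for r in range(ranks))
pi_est = 4.0 * total_hits / (ranks * per_rank)
print(pi_est)
```

    The only communication in such a scheme is the final reduction of tallies, which is why Monte Carlo transport parallelizes so well.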

  11. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)

    2017-04-15

    General-purpose radiation transport Monte Carlo codes have been used for estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)

  12. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
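
    For orientation, here is a single-tolerance rejection-ABC sampler, the building block that the multilevel/sequential version refines across a ladder of decreasing tolerances. The model, prior, summary statistic, and all numerical values are illustrative assumptions, not the paper's setup:

```python
import random, statistics

def abc_rejection(data, eps, n_accept, seed=9):
    """Single-tolerance rejection ABC for the mean of a unit-variance normal;
    multilevel/SMC-ABC would instead work through a decreasing eps sequence."""
    rng = random.Random(seed)
    obs = statistics.mean(data)                  # observed summary statistic
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(-2.0, 2.0)           # draw from a flat prior
        sim = statistics.mean(rng.gauss(theta, 1.0) for _ in range(len(data)))
        if abs(sim - obs) < eps:                 # keep if summaries match
            accepted.append(theta)
    return accepted

rng = random.Random(1)
data = [rng.gauss(0.5, 1.0) for _ in range(50)]  # synthetic observations
post = abc_rejection(data, eps=0.1, n_accept=500)
post_mean = statistics.mean(post)
print(post_mean)
```

    The cost driver is the rejection rate at small eps; that is precisely the cost that the multilevel construction attacks by coupling approximations across tolerance levels.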

  13. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, while reducing computing time by factors on the order of 100. (authors)
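
    The epistemic/aleatoric separation via two series of samples can be illustrated with a toy stand-in for the transport code: two runs that share the same epistemic parameter draw but have independent aleatoric noise are correlated only through the epistemic part, so the covariance of the paired series estimates the epistemic variance even with few histories per run. All distributions below are invented for illustration and are not XSUSA's actual scheme:

```python
import random, statistics

def transport_run(param, histories, rng):
    # Stand-in for one Monte Carlo transport calculation: the true response
    # equals param, plus aleatoric noise shrinking like 1/sqrt(histories).
    return param + rng.gauss(0.0, 1.0 / histories ** 0.5)

rng = random.Random(42)
n_samples, histories = 500, 100               # deliberately few histories
y1, y2 = [], []
for _ in range(n_samples):
    param = rng.gauss(10.0, 0.5)              # one epistemic draw ...
    y1.append(transport_run(param, histories, rng))   # ... two independent
    y2.append(transport_run(param, histories, rng))   # aleatoric replicas
m1, m2 = statistics.mean(y1), statistics.mean(y2)
# Aleatoric noise is independent between replicas, so the covariance of the
# paired series isolates the epistemic variance (true value: 0.5**2 = 0.25).
epistemic_var = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / (n_samples - 1)
print(epistemic_var)
```

    The aleatoric contamination cancels in expectation, so the number of histories per run can be cut drastically without biasing the epistemic estimate.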

  14. A residual Monte Carlo method for discrete thermal radiative diffusion

    International Nuclear Information System (INIS)

    Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.

    2003-01-01

    Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems

  15. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  16. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  17. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  18. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  19. Forward-weighted CADIS method for variance reduction of Monte Carlo calculations of distributions and multiple localized quantities

    International Nuclear Information System (INIS)

    Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.

    2009-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
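
    The weight-window mechanics that such importance functions ultimately feed into can be sketched directly. The bounds, the splitting cap, and the mid-window survival weight below are arbitrary illustrative choices, not necessarily the MCNP/SCALE conventions:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window game: split heavy particles, roulette light ones.

    Returns the list of surviving particle weights; the expected total
    weight equals the input weight, which is what keeps the game unbiased.
    """
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)   # split into n lighter copies
        return [weight / n] * n
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0      # one common survival choice
        if rng.random() < weight / w_survive:
            return [w_survive]                  # survives roulette, boosted
        return []                               # killed
    return [weight]                             # inside the window: untouched

# Empirical check that roulette preserves expected weight for a light particle.
rng = random.Random(1)
trials = 200_000
total = sum(sum(apply_weight_window(0.05, 0.2, 1.0, rng)) for _ in range(trials))
avg = total / trials
print(avg)   # hovers near the input weight 0.05
```

    FW-CADIS's contribution is choosing the space- and energy-dependent window bounds so that the resulting particle density is roughly uniform over the tally regions.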

  20. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.

  1. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-02-01

    This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  2. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  3. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  4. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in

  5. Acceleration of Monte Carlo solution by conjugate gradient method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihisa

    2005-01-01

    The conjugate gradient method (CG) was applied to accelerate Monte Carlo solutions in fixed source problems. The equilibrium model based formulation makes it possible to use the CG scheme as well as an initial guess to maximize computational performance. This method is applicable to arbitrary geometry provided that the neutron source distribution in each subregion can be regarded as flat. Even if that is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual error is estimated by Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo-CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D slab heterogeneous test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by four times in the test problem. Although direct estimation of the efficiency of the method is impossible, mainly because of the strong problem-dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)
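
    For reference, here is the deterministic CG scheme that the Monte Carlo variant builds on, applied to a small symmetric positive definite tridiagonal system reminiscent of a 1-D slab discretization. In the Monte Carlo variant the residual would instead be a noisy tally estimate; this sketch keeps everything exact:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    # Textbook CG for A x = b with A symmetric positive definite.
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual, since x0 = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD tridiagonal system (-1, 2, -1), a rough analogue of a 1-D slab problem.
def matvec(v):
    n = len(v)
    return [2.0 * v[i] - (v[i-1] if i else 0.0) - (v[i+1] if i < n-1 else 0.0)
            for i in range(n)]

x = conjugate_gradient(matvec, [1.0] * 8)
print(x)
```

    When the residual carries statistical noise, the stopping test and the search-direction update both see that noise, which is what forces the Monte Carlo-specific flow diagram the abstract mentions.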

  6. Calculation of Stock Portfolio VaR Using Historical Data and Monte Carlo Simulation Data

    Directory of Open Access Journals (Sweden)

    WAYAN ARTHINI

    2012-09-01

    Full Text Available Value at Risk (VaR) is the maximum potential loss on a portfolio based on the probability at a certain time. In this research, portfolio VaR values are calculated from historical data and Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to determine the proportion of funds in each stock and the portfolio risk and return. The data are then simulated by Monte Carlo simulation: Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. Exact Monte Carlo Simulation has the same returns and standard deviation as the historical data, while the statistics of Expected Monte Carlo Simulation are similar to those of the historical data. The results of this research are portfolio VaR values for time horizons T=1, T=10, and T=22 at a confidence level of 95%, obtained from historical data and from Monte Carlo simulation data with the exact and expected methods. The VaR from both Monte Carlo simulations is greater than the VaR from the historical data.
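
    A minimal sketch of the two VaR routes compared here, a historical quantile versus resimulated returns, on synthetic data. The normal-fit resimulation is one common interpretation of a Monte Carlo VaR, not necessarily the paper's exact "expected" scheme, and the return parameters are invented:

```python
import random, statistics

def var_historical(returns, alpha=0.95):
    # Historical VaR: the loss at the (1 - alpha) lower tail of returns.
    losses = sorted(-r for r in returns)
    return losses[min(int(alpha * len(losses)), len(losses) - 1)]

def var_monte_carlo(returns, alpha=0.95, n_sims=50_000, seed=7):
    # Monte Carlo VaR: resimulate returns from a normal distribution fitted
    # to the historical mean and standard deviation, then take the quantile.
    mu, sd = statistics.mean(returns), statistics.stdev(returns)
    rng = random.Random(seed)
    return var_historical([rng.gauss(mu, sd) for _ in range(n_sims)], alpha)

# Hypothetical daily portfolio returns (drift and volatility are invented).
rng = random.Random(0)
hist = [rng.gauss(0.0005, 0.01) for _ in range(1000)]
vh, vm = var_historical(hist), var_monte_carlo(hist)
print(vh, vm)
```

    For a multi-day horizon T, a common convention scales the one-day VaR by sqrt(T), which is presumably how the T=10 and T=22 figures relate to the daily one.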

  7. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
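
    The forward Monte Carlo route can be sketched for a single repairable component, checked against the analytical two-state Markov solution. This is an unweighted sketch under invented rates; the adjoint-weighted estimators discussed in the paper are not reproduced here:

```python
import random, math

def simulate_unavailability(lam, mu, t_end, trials, seed=3):
    """Forward Monte Carlo estimate of point unavailability at t_end for a
    single repairable component (failure rate lam, repair rate mu)."""
    rng = random.Random(seed)
    down = 0
    for _ in range(trials):
        t, state = 0.0, 0                    # 0 = up, 1 = down
        while True:
            rate = lam if state == 0 else mu
            t += rng.expovariate(rate)       # exponential sojourn time
            if t >= t_end:
                break
            state = 1 - state                # fail or get repaired
        down += state
    return down / trials

# Compare against the analytical two-state Markov solution
# q(t) = lam/(lam+mu) * (1 - exp(-(lam+mu) t)).
lam, mu, t_end = 1.0e-3, 1.0e-2, 500.0
q_mc = simulate_unavailability(lam, mu, t_end, 200_000)
q_exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t_end))
print(q_mc, q_exact)
```

    The paper's adjoint weighting would reduce the variance of exactly this kind of estimate; with the exact adjoint, a single trial would suffice.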

  8. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all of these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)

  9. A Monte Carlo approach to combating delayed completion of ...

    African Journals Online (AJOL)

    The objective of this paper is to unveil the relevance of Monte Carlo critical path analysis in resolving problem of delays in scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was advanced by assigning probability distributions to task times.
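
A minimal version of the approach described above (assigning probability distributions to task times and repeatedly sampling the network) could look like the following; the three-task project and its triangular durations are hypothetical:

```python
import random

def mc_schedule(tasks, preds, n_sims=20_000, seed=2):
    """Monte Carlo critical-path analysis: each task duration is sampled from
    a triangular(low, mode, high) distribution and completion times are
    propagated through the precedence network (tasks listed in topological
    order). Returns the simulated project completion times."""
    random.seed(seed)
    results = []
    for _ in range(n_sims):
        finish = {}
        for name, (low, mode, high) in tasks.items():
            dur = random.triangular(low, high, mode)  # stdlib order: (low, high, mode)
            start = max((finish[p] for p in preds[name]), default=0.0)
            finish[name] = start + dur
        results.append(max(finish.values()))
    return results

# Hypothetical three-task project: A precedes both B and C
tasks = {"A": (2, 3, 5), "B": (1, 2, 4), "C": (3, 4, 6)}
preds = {"A": [], "B": ["A"], "C": ["A"]}
times = mc_schedule(tasks, preds)
deterministic = 3 + 4      # completion using modal durations (critical path A-C)
p_late = sum(t > deterministic for t in times) / len(times)
```

Here p_late is the probability that the stochastic schedule overruns the deterministic estimate, which is the kind of delay risk the abstract targets.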

  10. Monte Carlo numerical study of lattice field theories

    International Nuclear Information System (INIS)

    Gan Cheekwan; Kim Seyong; Ohta, Shigemi

    1997-01-01

    The authors are interested in exact first-principles calculations of quantum field theories. For quantum chromodynamics (QCD) at low energy scales, a nonperturbative method is needed, and the only known such method is the lattice method. The path integral can be evaluated by putting the system in a finite 4-dimensional volume and discretizing the space-time continuum into a finite set of points, the lattice; the continuum limit is taken by making the lattice infinitely fine. Such a finite-dimensional integral can then be evaluated by Monte Carlo numerical estimation of the path integral. The calculation of light hadron masses in quenched lattice QCD with staggered quarks, a 3-dimensional Thirring model calculation, and the development of a self-test Monte Carlo method have been carried out using the RIKEN supercomputer. The motivation of this study, the lattice QCD formulation, the continuum limit, Monte Carlo updates, hadron propagators, light hadron masses, auto-correlation, and source-size dependence are described for lattice QCD. The phase structure of the 3-dimensional Thirring model has been mapped for a small 8³ lattice. The self-test Monte Carlo method is also discussed. (K.I.)

  11. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous-energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme is used as the foundation of cross section generation. The scattering matrix and its Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants, and the super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results show that generating homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications owing to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
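
The track length scheme mentioned above can be illustrated on a toy problem. This is a generic track-length flux tally, not MCMC's implementation: in a purely absorbing 1D slab with a normally incident unit surface source, the tally reproduces the analytic exponentially attenuated flux:

```python
import numpy as np

def track_length_flux(sigma_t, slab_len, n_cells, n_particles=100_000, seed=8):
    """Track-length flux tally sketch for a purely absorbing 1D slab with a
    normally incident unit surface source: each particle flies a distance
    d ~ Exp(sigma_t); the flux in a cell is the mean path length laid down
    in that cell divided by the cell width."""
    rng = np.random.default_rng(seed)
    d = rng.exponential(1.0 / sigma_t, size=n_particles)
    edges = np.linspace(0.0, slab_len, n_cells + 1)
    flux = np.empty(n_cells)
    for i in range(n_cells):
        a, b = edges[i], edges[i + 1]
        flux[i] = (np.clip(d, a, b) - a).mean() / (b - a)
    return edges, flux

edges, flux = track_length_flux(sigma_t=1.0, slab_len=3.0, n_cells=6)
# analytic cell-averaged flux for sigma_t = 1: (e^{-a} - e^{-b}) / (b - a)
exact = (np.exp(-edges[:-1]) - np.exp(-edges[1:])) / np.diff(edges)
```

Each cell's tally agrees with the analytic cell-averaged flux to within statistical noise.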

  12. DETERMINATION OF ASIAN CALL OPTION PRICES USING THE MONTE CARLO-CONTROL VARIATE METHOD

    Directory of Open Access Journals (Sweden)

    NI NYOMAN AYU ARTANADI

    2017-01-01

    Full Text Available An option is a contract between the writer and the holder which entitles the holder to buy or sell an underlying asset at the maturity date for a specified price known as the exercise price. An Asian option is a type of financial derivative whose payoff depends on the average of the asset price over the life of the contract. The aim of the study is to present Monte Carlo-Control Variate as an extension of standard Monte Carlo applied to the calculation of the Asian option price. Standard Monte Carlo with 10,000,000 simulations generates a standard error of 0.06 with the option price converging at Rp.160.00, while Monte Carlo-Control Variate with 100,000 simulations generates a standard error of 0.01 with the option price converging at Rp.152.00. This shows that Monte Carlo-Control Variate reaches a convergent option price faster than standard Monte Carlo.
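
A hedged sketch of the control-variate idea for an arithmetic Asian call follows. It uses the discounted terminal asset price, whose expectation is known exactly, as the control (not necessarily the control used in the paper), and all market parameters are hypothetical:

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=50, n_paths=20_000,
                  control=True, seed=3):
    """Arithmetic-average Asian call by Monte Carlo under geometric Brownian
    motion, optionally with the discounted terminal price as a control
    variate (its expectation is exactly S0 under the risk-neutral measure).
    Returns (price estimate, standard error)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    payoff = np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)
    if not control:
        return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)
    Y = np.exp(-r * T) * S[:, -1]               # control with known mean S0
    b = np.cov(payoff, Y)[0, 1] / np.var(Y, ddof=1)  # estimated optimal coefficient
    adj = payoff - b * (Y - S0)
    return adj.mean(), adj.std(ddof=1) / np.sqrt(n_paths)
```

On the same random paths, the control-variate estimator returns a visibly smaller standard error than the plain estimator, mirroring the abstract's comparison.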

  13. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is then almost mandatory in order to obtain efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. This task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and uniform biases of exponentially distributed phenomena are investigated thoroughly.
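
The effect of an exponential bias on a rare-event estimate can be sketched as follows: to estimate P(X > a) for X ~ Exp(1), sampling is biased toward the tail and each sample is reweighted by the likelihood ratio. The biasing rate 1/a used below is a reasonable illustrative choice, not the paper's optimal rule:

```python
import numpy as np

def rare_tail_prob(a, bias_rate, n=100_000, seed=4):
    """Importance-sampled estimate of P(X > a) for X ~ Exp(1): draw from a
    biased Exp(bias_rate) and reweight each sample by the likelihood ratio
    w(x) = exp(-x) / (bias_rate * exp(-bias_rate * x)). Returns the
    estimate and its standard error."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / bias_rate, size=n)
    w = np.exp(-x) / (bias_rate * np.exp(-bias_rate * x))
    scores = w * (x > a)
    return scores.mean(), scores.std(ddof=1) / np.sqrt(n)

a = 15.0
est, se = rare_tail_prob(a, bias_rate=1.0 / a)   # flatten the density toward the tail
exact = np.exp(-a)                               # true tail probability
```

An analog run (bias_rate=1.0) with the same sample size would almost never score a single tail event, which is exactly the situation the abstract describes.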

  14. Self-learning Monte Carlo (dynamical biasing)

    International Nuclear Information System (INIS)

    Matthes, W.

    1981-01-01

    In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)
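
The weighted game underlying such importance-guided sampling can be sketched on a toy gambler's-ruin problem. Here the importance-guided transition probabilities are fixed by hand (the paper's point is to learn them during the run); the analog probabilities, bias, and target are all hypothetical:

```python
import random

def guided_walk_prob(p_up, bias_up, n_target, n_hist=20_000, seed=6):
    """Weighted (biased) Monte Carlo game: a walker on states 0..n_target
    moves up with analog probability p_up but is *sampled* with probability
    bias_up; each step multiplies the statistical weight by the ratio of
    analog to biased probability, so the mean weight of histories reaching
    n_target before 0 is an unbiased estimate of the analog success
    probability."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_hist):
        s, w = 1, 1.0
        while 0 < s < n_target:
            if random.random() < bias_up:
                w *= p_up / bias_up
                s += 1
            else:
                w *= (1.0 - p_up) / (1.0 - bias_up)
                s -= 1
        if s == n_target:
            total += w
    return total / n_hist

# Down-drifting analog walk (p_up = 0.3); the guidance reverses the drift
est = guided_walk_prob(p_up=0.3, bias_up=0.7, n_target=5)
r = 0.7 / 0.3
exact = (1 - r) / (1 - r ** 5)   # analytic gambler's-ruin value from state 1
```

With the drift reversed, nearly every sampled history reaches the target, while the weights keep the estimator unbiased.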

  15. Flow in Random Microstructures: a Multilevel Monte Carlo Approach

    KAUST Repository

    Icardi, Matteo

    2016-01-06

    In this work we are interested in the fast estimation of effective parameters of random heterogeneous materials using Multilevel Monte Carlo (MLMC). MLMC is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrisation of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random heterogeneous materials. We make use of the key idea of MLMC, based on different discretization levels, extending it in a more general context, making use of a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretisation parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte-Carlo estimators, the statistical samples and each single solver is done efficiently in parallel. The method is tested and applied for pore-scale simulations of random sphere packings.
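
The key MLMC idea above, a telescoping sum over discretization levels with coupled coarse/fine samples, can be shown on a standard toy problem (the mean of a geometric Brownian motion under Euler-Maruyama time stepping, not the paper's random-microstructure PDEs; sample counts and parameters are illustrative):

```python
import numpy as np

def mlmc_gbm_mean(S0, mu, sigma, T, L=4, N0=20_000, seed=5):
    """Toy multilevel Monte Carlo estimate of E[S_T] for the SDE
    dS = mu*S dt + sigma*S dW via Euler-Maruyama. Level l uses 2^l steps;
    coarse and fine paths are coupled by pairwise-summing the fine Brownian
    increments, and the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated level by level."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l in range(L + 1):
        n = max(N0 >> l, 1000)    # fewer samples on finer levels
        steps = 2 ** l
        dt = T / steps
        dW = rng.standard_normal((n, steps)) * np.sqrt(dt)
        Sf = np.full(n, S0, float)            # fine path at level l
        for k in range(steps):
            Sf += mu * Sf * dt + sigma * Sf * dW[:, k]
        if l == 0:
            total += Sf.mean()
        else:
            dWc = dW[:, 0::2] + dW[:, 1::2]   # coupled coarse increments
            Sc = np.full(n, S0, float)
            for k in range(steps // 2):
                Sc += mu * Sc * (2 * dt) + sigma * Sc * dWc[:, k]
            total += (Sf - Sc).mean()
    return total

est = mlmc_gbm_mean(S0=1.0, mu=0.05, sigma=0.2, T=1.0)
```

The exact answer is E[S_T] = S0·exp(mu·T); the multilevel sum reproduces it to within the combined statistical and discretization error.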

  16. RNA folding kinetics using Monte Carlo and Gillespie algorithms.

    Science.gov (United States)

    Clote, Peter; Bayegan, Amir H

    2018-04-01

    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
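
A generic Gillespie (kinetic Monte Carlo) simulation of a small continuous-time Markov chain, not the authors' RNA software, illustrates the dwell-time-and-jump machinery referred to above; the three-state toy chain is hypothetical, and the estimated mean first passage time can be checked against the analytic value:

```python
import random

def gillespie_mfpt(rates, start, target, n_traj=20_000, seed=9):
    """Gillespie estimate of the mean first passage time on a small
    continuous-time Markov chain. rates[i] is a dict {j: k_ij} of
    transition rates out of state i; each step draws an exponential
    dwell time with the total outgoing rate and then picks the next
    state with probability proportional to its rate."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_traj):
        s, t = start, 0.0
        while s != target:
            out = rates[s]
            k_tot = sum(out.values())
            t += random.expovariate(k_tot)     # exponential dwell time
            r = random.uniform(0.0, k_tot)     # next state proportional to rate
            acc = 0.0
            for j, k in out.items():
                acc += k
                if r <= acc:
                    s = j
                    break
        total += t
    return total / n_traj

# Toy 3-state chain 0 <-> 1 -> 2; analytic MFPT(0 -> 2) = 2.0
rates = {0: {1: 2.0}, 1: {0: 1.0, 2: 1.0}}
mfpt = gillespie_mfpt(rates, 0, 2)
```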

  17. Monte Carlo analysis of Musashi TRIGA mark II reactor core

    International Nuclear Information System (INIS)

    Matsumoto, Tetsuo

    1999-01-01

    The analysis of the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. Effective multiplication factors (k eff) for several fuel-loading patterns, including the initial core criticality experiment, the fuel element and control rod reactivity worths, as well as neutron flux measurements, were used in the validation process of the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated k eff overestimated the experimental data by about 1.0%Δk/k for both the initial core and the several fuel-loading arrangements. The calculated reactivity worths of control rods and fuel elements agree well with the measured ones within the uncertainties. The calculated neutron flux distributions were consistent with the experimental ones measured by activation methods at the sample irradiation tubes. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate for simulating the Musashi TRIGA-II reactor core. (author)

  18. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    International Nuclear Information System (INIS)

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Löffler, Frank; Schnetter, Erik

    2012-01-01

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  19. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    Energy Technology Data Exchange (ETDEWEB)

    Abdikamalov, Ernazar; Ott, Christian D.; O' Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  20. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    International Nuclear Information System (INIS)

    Coulot, J

    2003-01-01

    Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, as improving computer performance allows one to obtain improved results in dramatically reduced time. The goal of this book is to provide, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method but mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software packages. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. The book is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitations of each technique involved in dosimetry (for instance activity quantitation).
Nevertheless, there are some minor remarks to

  1. Grain-boundary melting: A Monte Carlo study

    DEFF Research Database (Denmark)

    Besold, Gerhard; Mouritsen, Ole G.

    1994-01-01

    Grain-boundary melting in a lattice-gas model of a bicrystal is studied by Monte Carlo simulation using the grand canonical ensemble. Well below the bulk melting temperature T(m), a disordered liquidlike layer gradually emerges at the grain boundary. Complete interfacial wetting can be observed when the temperature approaches T(m) from below. Monte Carlo data over an extended temperature range indicate a logarithmic divergence w(T) approximately - ln(T(m)-T) of the width of the disordered layer w, in agreement with mean-field theory.

  2. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for calculation of each history. 1 figure, 1 table

  3. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  4. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvements of computational techniques this method is of increasing importance. The basic ideas in application of Monte Carlo methods are briefly outlined. In more detail various possibilities of non-analog games and estimation procedures are presented, problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and own codes of the authors are listed and special applications are described. (author)

  5. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  6. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Tamaki, Hitoshi [Department of Safety Research Technical Support, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kanai, Shigeru [Fuji Research Institute Corporation, Tokyo (Japan)

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there may be a grace time between an initiating event and the resulting serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty, and to obtain an accident occurrence frequency. Firstly, the basic methods of component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced, and then the basic performance, such as precision, convergence, and parallelization of the calculation, is shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red oil explosion in a model of a German reprocessing plant is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show another aspect of its performance, and a new input-data format adapted to component Monte Carlo simulation is proposed. The present paper describes the calculational method, performance, applicability to a real-scale plant, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method is shown that avoids the complex and laborious calculation needed to obtain a strict solution of the accident occurrence frequency, for comparison with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate usage of the TITAN computer program. (author)

  7. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Tamaki, Hitoshi; Kanai, Shigeru

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there may be a grace time between an initiating event and the resulting serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty, and to obtain an accident occurrence frequency. Firstly, the basic methods of component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced, and then the basic performance, such as precision, convergence, and parallelization of the calculation, is shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red oil explosion in a model of a German reprocessing plant is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show another aspect of its performance, and a new input-data format adapted to component Monte Carlo simulation is proposed. The present paper describes the calculational method, performance, applicability to a real-scale plant, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method is shown that avoids the complex and laborious calculation needed to obtain a strict solution of the accident occurrence frequency, for comparison with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate usage of the TITAN computer program. (author)

  8. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    Science.gov (United States)

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity of the influence of the probability distribution function shape (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output result is therefore much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
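
The two-dimensional sampling scheme described above (an outer epistemic loop over uncertain distribution moments, an inner aleatory loop over variability) can be sketched as follows; the lognormal parameter and its uncertain moments are hypothetical stand-ins for a radioecological transfer parameter:

```python
import numpy as np

def second_order_mc(n_outer=200, n_inner=2_000, seed=7):
    """Second-order (two-dimensional) Monte Carlo sketch. The aleatory
    variability of a hypothetical parameter is lognormal; its log-scale
    mean and standard deviation are epistemically uncertain. Each outer
    draw fixes the epistemic moments, the inner loop propagates aleatory
    variability, and one 95th percentile is recorded per outer draw, so
    the spread of the returned array reflects epistemic uncertainty."""
    rng = np.random.default_rng(seed)
    p95 = np.empty(n_outer)
    for i in range(n_outer):
        mu = rng.normal(0.0, 0.2)             # epistemic: uncertain log-mean
        sig = rng.uniform(0.4, 0.6)           # epistemic: uncertain log-sd
        x = rng.lognormal(mu, sig, n_inner)   # aleatory variability
        p95[i] = np.quantile(x, 0.95)
    return p95

p95 = second_order_mc()
lo, hi = np.quantile(p95, [0.05, 0.95])       # epistemic band on the 95th percentile
```

A first-order analysis would collapse the outer loop to a single point, which is why its solution space is so much narrower than the second-order one.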

  9. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)]; and others

    2006-10-15

    Full text: The principle of analysis by the Monte Carlo method consists of choosing random values of the exposure dose rate coefficients and of the activity data within the bounds of their individual frequency distributions of exposure dose rates. Analysis by the Monte Carlo method is useful for carrying out a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in reports. Such concepts can be valuable for defining the key categories of radiation-polluted soil and for setting priorities in the use of resources for improving the report. The relative uncertainty of the radiation-polluted soil categories determined with the help of Monte Carlo analysis can, where available, be applied using the more significant divergence between the average value and a confidence limit when the resources available for preparation are limited, and for preparing possible estimates for the most significant source categories. Use of the notion 'uncertainty' in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of 90 per cent uncertainty in reports. According to radiation safety norms, a background radiation level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subjected to disposal and utilization were selected from the analyzed samples of polluted soil.

  10. Application of the Monte Carlo method in the definition of key categories of the most radioactively polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M; Valibeyova, G.; Jafarov, Y.D; Musaeva, Sh.Z

    2006-01-01

    Full text: The principle of Monte Carlo analysis consists of choosing random values of the exposure dose rate coefficients and activity data within the bounds of their individual frequency distributions of exposure dose rates. Monte Carlo analysis is useful for performing a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in the reports. Such an approach can be valuable for identifying the key categories of radiation-polluted soil and for setting priorities in allocating resources to improve the report. The relative uncertainty of radiation-polluted soil categories determined by Monte Carlo analysis, where available, can be applied using the larger divergence between the mean value and a confidence limit when the resources available for preparation are limited, in order to prepare estimates for the most significant source categories. Using the notion of 'uncertainty' in reports also allows a threshold value to be set for a key source category, if necessary, so that 90 per cent of the uncertainty in the reports is accurately reflected. According to the radiation safety norms, a radiation background level exceeding 33 μR/h is considered dangerous. By means of the Monte Carlo method, the most dangerous sites, and the sites most frequently subject to disposal and utilization, were selected from the analyzed samples of polluted soil.

  11. Transport methods: general. 1. The Analytical Monte Carlo Method for Radiation Transport Calculations

    International Nuclear Information System (INIS)

    Martin, William R.; Brown, Forrest B.

    2001-01-01

    We present an alternative Monte Carlo method for solving the coupled equations of radiation transport and material energy. This method is based on incorporating the analytical solution to the material energy equation directly into the Monte Carlo simulation for the radiation intensity. This method, which we call the Analytical Monte Carlo (AMC) method, differs from the well known Implicit Monte Carlo (IMC) method of Fleck and Cummings because there is no discretization of the material energy equation since it is solved as a by-product of the Monte Carlo simulation of the transport equation. Our method also differs from the method recently proposed by Ahrens and Larsen since they use Monte Carlo to solve both equations, while we are solving only the radiation transport equation with Monte Carlo, albeit with effective sources and cross sections to represent the emission sources. Our method bears some similarity to a method developed and implemented by Carter and Forest nearly three decades ago, but there are substantive differences. We have implemented our method in a simple zero-dimensional Monte Carlo code to test the feasibility of the method, and the preliminary results are very promising, justifying further extension to more realistic geometries. (authors)

  12. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.

  13. Monte Carlo studies of high-transverse-energy hadronic interactions

    International Nuclear Information System (INIS)

    Corcoran, M.D.

    1985-01-01

    A four-jet Monte Carlo calculation has been used to simulate hadron-hadron interactions which deposit high transverse energy into a large-solid-angle calorimeter and limited solid-angle regions of the calorimeter. The calculation uses first-order QCD cross sections to generate two scattered jets and also produces beam and target jets. Field-Feynman fragmentation has been used in the hadronization. The sensitivity of the results to a few features of the Monte Carlo program has been studied. The results are found to be very sensitive to the method used to ensure overall energy conservation after the fragmentation of the four jets is complete. Results are also sensitive to the minimum momentum transfer in the QCD subprocesses and to the distribution of p_T relative to the jet axis and the multiplicities in the fragmentation. With reasonable choices of these features of the Monte Carlo program, good agreement with data at Fermilab/CERN SPS energies is obtained, comparable to the agreement achieved with more sophisticated parton-shower models. With other choices, however, the calculation gives qualitatively different results which are in strong disagreement with the data. These results have important implications for extracting physics conclusions from Monte Carlo calculations. It is not possible to test the validity of a particular model or distinguish between different models unless the Monte Carlo results are unambiguous and different models exhibit clearly different behavior.

  14. Monte Carlo methods and applications in nuclear physics

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  15. Probability of initiation and extinction in the Mercury Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    McKinley, M. S.; Brantley, P. S. [Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, CA 94551 (United States)

    2013-07-01

    A Monte Carlo method for computing the probability of initiation has previously been implemented in Mercury. Recently, a new method based on the probability of extinction has been implemented as well. The methods have similarities from counting progeny to cycling in time, but they also have differences such as population control and statistical uncertainty reporting. The two methods agree very well for several test problems. Since each method has advantages and disadvantages, we currently recommend that both methods are used to compute the probability of criticality. (authors)
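    The relationship between the two quantities can be illustrated with a minimal single-type branching-process model, in which the probability of initiation (POI) is one minus the extinction probability. The fission probability and multiplicity distribution below are invented and do not represent Mercury's physics:

```python
import numpy as np

# Toy model: at each event a neutron is captured or causes fission with a
# simple multiplicity distribution.  POI = 1 - P(extinction).
p_fission = 0.55
nu_dist = {0: 0.05, 1: 0.20, 2: 0.35, 3: 0.30, 4: 0.10}   # fission multiplicity

def pgf(z):
    """Progeny probability generating function h(z)."""
    return (1.0 - p_fission) + p_fission * sum(p * z**k for k, p in nu_dist.items())

# Extinction probability is the smallest fixed point q = h(q) in [0, 1];
# iterating from 0 converges to it (standard branching-process result).
q = 0.0
for _ in range(1000):
    q = pgf(q)
poi_analytic = 1.0 - q

# Cross-check by direct Monte Carlo simulation of neutron chains
rng = np.random.default_rng(1)
ks, ps = zip(*nu_dist.items())

def chain_survives(max_gen=50, cap=1000):
    n = 1
    for _ in range(max_gen):
        if n == 0:
            return False
        if n > cap:                      # population clearly diverging
            return True
        fissions = rng.random(n) < p_fission
        n = int(rng.choice(ks, size=int(fissions.sum()), p=ps).sum())
    return n > 0

trials = 10_000
poi_mc = sum(chain_survives() for _ in range(trials)) / trials
print(f"POI analytic = {poi_analytic:.3f}, POI Monte Carlo = {poi_mc:.3f}")
```

    Counting progeny until chains either die out or clearly diverge mirrors the "counting progeny" aspect mentioned in the abstract; the fixed-point view corresponds to the probability-of-extinction formulation.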

  16. Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy

    Directory of Open Access Journals (Sweden)

    Paro AD

    2016-09-01

    Full Text Available Autumn D Paro,1 Mainul Hossain,2 Thomas J Webster,1,3,4 Ming Su1,4 1Department of Chemical Engineering, Northeastern University, Boston, MA, USA; 2NanoScience Technology Center and School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, Florida, USA; 3Excellence for Advanced Materials Research, King Abdulaziz University, Jeddah, Saudi Arabia; 4Wenzhou Institute of Biomaterials and Engineering, Chinese Academy of Science, Wenzhou Medical University, Zhejiang, People’s Republic of China Abstract: Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. Keywords: nanoparticle, dose enhancement, Monte Carlo simulation, analytical simulation, radiation therapy, tumor cell, X-ray 

  17. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
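    The acceptance rule of such a microcanonical algorithm can be sketched for the 2D Ising model as follows. This is a minimal demon sketch; the lattice size, initial demon energy, and sweep count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# Creutz-style demon dynamics for the 2D Ising model: a "demon" carries a
# non-negative energy reservoir, and a spin flip is accepted whenever the
# demon can pay for (or absorb) the energy change.
L = 32
spins = rng.choice([-1, 1], size=(L, L))
demon = 20  # demon energy in units of the coupling J (a multiple of 4)

def total_energy(s):
    """Ising energy -sum_<ij> s_i s_j with periodic boundaries."""
    return -int((s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))).sum())

def delta_E(s, i, j):
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2 * int(s[i, j]) * int(nn)

E_total0 = total_energy(spins) + demon   # conserved by construction

demon_samples = []
for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        dE = delta_E(spins, i, j)
        if demon - dE >= 0:              # microcanonical acceptance rule
            spins[i, j] *= -1
            demon -= dE
    demon_samples.append(demon)

# In equilibrium P(E_demon) ~ exp(-E_demon/kT); with the energy level
# spacing of 4 here, kT/J = 4 / ln(1 + 4/<E_demon>).
Ed = np.mean(demon_samples[50:])
kT = 4.0 / np.log(1.0 + 4.0 / Ed)
print(f"mean demon energy = {Ed:.2f}, estimated kT/J = {kT:.2f}")
```

    Note that only comparisons and integer arithmetic appear in the inner loop, which is why the method is so fast for discrete systems and so insensitive to random number quality.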

  18. Monte Carlo simulation applied to alpha spectrometry

    International Nuclear Information System (INIS)

    Baccouche, S.; Gharbi, F.; Trabelsi, A.

    2007-01-01

    Alpha particle spectrometry is a widely-used analytical method, in particular when we deal with pure alpha emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of solid angle of detection is a very useful tool to determine with very good accuracy the detection efficiency.
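    The solid-angle part of such a calculation is easy to reproduce: the sketch below estimates the geometric detection efficiency of a circular detector for an on-axis point source by sampling isotropic emission directions, and compares it with the closed-form solid-angle result. The geometry is an assumed example, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo estimate of the geometric detection efficiency of a circular
# detector (radius R) for an on-axis isotropic point source at distance d.
R, d = 1.0, 2.0
N = 1_000_000

cos_t = rng.uniform(-1.0, 1.0, N)        # isotropic: cos(theta) uniform
# a ray hits the detector if it goes forward and tan(theta) <= R/d,
# written here as d*sin(theta) <= R*cos(theta) to avoid division:
hits = (cos_t > 0) & (d * np.sqrt(1.0 - cos_t**2) <= R * cos_t)
eff_mc = hits.mean()

eff_exact = 0.5 * (1.0 - d / np.sqrt(d * d + R * R))   # solid angle / 4*pi
print(f"MC efficiency = {eff_mc:.4f}, exact = {eff_exact:.4f}")
```

    For non-trivial geometries (off-axis sources, extended sources, absorbers) the analytical comparison disappears and the Monte Carlo estimate is the practical route, which is the point made in the abstract.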

  19. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  20. Simulation of transport equations with Monte Carlo

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-09-01

    The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
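    The flexibility gained from weighted particles can be illustrated by playing two different games for the same quantity: analog absorption versus implicit capture with Russian roulette, both unbiased estimators of slab transmission. This is a one-speed, isotropic-scattering toy model with assumed cross sections, not the report's formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two Monte Carlo "games" for the transmission of particles through a slab:
# analog (real absorption) and weighted (implicit capture + roulette).
sigma_t, sigma_s, width = 1.0, 0.3, 2.0   # total/scatter XS, slab width (mfp)
N = 100_000

def play(implicit_capture):
    transmitted = 0.0
    for _ in range(N):
        x, mu, w = 0.0, 1.0, 1.0          # position, direction cosine, weight
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)
            if x >= width:
                transmitted += w          # score the carried weight
                break
            if x < 0:
                break                     # leaked out the front face
            if implicit_capture:
                w *= sigma_s / sigma_t    # absorb only a weight fraction
                if w < 0.01:              # Russian roulette on low weights
                    if rng.random() < 0.5:
                        break
                    w *= 2.0
            elif rng.random() > sigma_s / sigma_t:
                break                     # analog absorption
            mu = rng.uniform(-1.0, 1.0)   # isotropic re-scatter
    return transmitted / N

analog, implicit = play(False), play(True)
print(f"analog: {analog:.4f}, implicit capture: {implicit:.4f}")
```

    Both games estimate the same expectation; the weight manipulations only change the variance, which is exactly the freedom the report exploits when constructing a game for the adjoint equation.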

  1. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    Science.gov (United States)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
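    The rank-1 baseline that the delayed rank-k scheme generalizes can be sketched in a few lines: for a single-row replacement, the determinant ratio is an O(n) dot product, and the inverse is refreshed in O(n²) instead of being recomputed in O(n³). Random matrices stand in for real Slater matrices here:

```python
import numpy as np

rng = np.random.default_rng(11)

# Rank-1 Sherman-Morrison update behind single-electron QMC moves:
# replacing row k of the matrix A multiplies det(A) by r = u . A^{-1}[:, k].
n, k = 50, 7
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)
u = rng.standard_normal(n)              # proposed new row k

ratio = u @ Ainv[:, k]                  # determinant ratio for the move, O(n)
A_new = A.copy()
A_new[k] = u

# A_new = A + e_k delta^T, so Sherman-Morrison gives the new inverse in O(n^2)
delta = u - A[k]
Ainv_new = Ainv - np.outer(Ainv[:, k], delta @ Ainv) / (1.0 + delta @ Ainv[:, k])

print("det ratio (update):", ratio)
print("det ratio (direct):", np.linalg.det(A_new) / np.linalg.det(A))
print("max inverse error :", np.abs(Ainv_new - np.linalg.inv(A_new)).max())
```

    In the delayed scheme, k such accepted rows are accumulated and applied to the inverse as a single rank-k (matrix-matrix) operation, trading many low-intensity rank-1 updates for one high-intensity block update.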

  2. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  3. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time

  4. TU-H-CAMPUS-IeP1-02: Validation of a CT Monte Carlo Software

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, R; Wulff, J; Penchev, P [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); Zink, K [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); University Medical Center Giessen and Marburg, Marburg (Germany)

    2016-06-15

    Purpose: To validate the in-house developed CT Monte Carlo calculation tool GMctdospp against reference simulation sets provided by the AAPM in the new Report 195. Methods: Deposited energy was calculated in four segments (test 1) and two 10 cm long cylinders (test 2) inside a CTDI phantom (following case #4 of AAPM Report 195). The x-ray point source of a given 120 kVp spectrum was collimated to a fan beam with two thicknesses (10 mm, 80 mm) for a static and a rotational setup. In addition, a given chest geometry was used to calculate deposited energy in several organs for a 0° static and a rotational beam (following case #5 of AAPM Report 195). The results of GMctdospp were compared against the mean value of the four quoted Monte Carlo codes (EGSnrc, Geant4, MCNP and Penelope). Results: Calculated values showed no outliers in any of the cases. Differences between GMctdospp and the respective mean value were always of similar magnitude compared to the quoted codes. For case #4 (CTDI phantom) the relative differences were within 1.5 %, on average 0.4 %, and for case #5 (chest phantom) within 2.5 %, on average 0.85 %. Conclusion: The results confirmed an overall uncertainty of the Monte Carlo calculation chain in GMctdospp of <2.5 %, for most cases even better. This can be considered small compared to other sources of uncertainty, e.g. virtual source and patient models. The photon transport implemented in GMctdospp inside a voxel-based patient geometry was successfully verified.

  5. Exact Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H₂, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.

  6. The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2010-01-01

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.

  7. Monte Carlo-based investigation of water-equivalence of solid phantoms at 137Cs energy

    International Nuclear Information System (INIS)

    Vishwakarma, Ramkrushna S.; Palani Selvam, T.; Sahoo, Sridhar; Mishra, Subhalaxmi; Chourasiya, Ghanshyam

    2013-01-01

    Investigation of solid phantom materials such as solid water, virtual water, plastic water, RW1, polystyrene, and polymethylmethacrylate (PMMA) for their equivalence to liquid water at 137Cs energy (photon energy of 662 keV) under full scatter conditions is carried out using the EGSnrc Monte Carlo code system. The EGSnrc code was used to calculate distance-dependent phantom scatter corrections. The study also includes separation of the primary and scattered dose components. Monte Carlo simulations are carried out using up to 5 × 10⁹ primary particle histories to attain statistical uncertainties of less than 0.3% in the estimation of dose. The investigation reveals that the solid water, virtual water, and RW1 phantoms are water equivalent up to 15 cm from the source, whereas plastic water, PMMA, and polystyrene are water equivalent up to 10 cm. At 15 cm from the source, the phantom scatter corrections are 1.035, 1.050, and 0.949 for PMMA, plastic water, and polystyrene, respectively. (author)

  8. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user friendly graphical interface was developed allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed a good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.

  9. No-compromise reptation quantum Monte Carlo

    International Nuclear Information System (INIS)

    Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M

    2007-01-01

    Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)

  10. Exploring cluster Monte Carlo updates with Boltzmann machines.

    Science.gov (United States)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  11. Exploring cluster Monte Carlo updates with Boltzmann machines

    Science.gov (United States)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  12. Monte Carlo simulation of continuous-space crystal growth

    International Nuclear Information System (INIS)

    Dodson, B.W.; Taylor, P.A.

    1986-01-01

    We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems

  13. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous-energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we proposed an equation that can predict the variance of nuclide number densities after burn-up, and we verified this equation using a large number of Monte Carlo burn-up calculations in which only the initial random numbers were changed. We also verified the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone with both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t.
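    The verification strategy of repeating a burn-up calculation with different initial random numbers can be sketched with a toy one-nuclide depletion chain whose reaction rate carries per-step statistical noise. All rates, step sizes, and noise levels are invented for illustration:

```python
import numpy as np

# Toy depletion problem N1 -> N2 in which the one-group removal rate comes
# from a "transport" estimate with statistical noise at every burn-up step.
# Repeating the whole calculation with independent random seeds yields the
# propagated error in the final number density.
sigma = 0.5          # true removal rate of N1 per unit time (arbitrary units)
rel_noise = 0.02     # 2% one-sigma statistical error per step
dt, steps, replicas = 0.1, 20, 500

def burnup(rng):
    n1, n2 = 1.0, 0.0
    for _ in range(steps):
        rate = sigma * (1.0 + rel_noise * rng.standard_normal())
        dn = rate * dt * n1             # explicit Euler depletion step
        n1, n2 = n1 - dn, n2 + dn
    return n1

finals = np.array([burnup(np.random.default_rng(seed)) for seed in range(replicas)])
rel_err = finals.std() / finals.mean()
print(f"mean N1 = {finals.mean():.4f}, propagated rel. error = {rel_err:.4%}")
```

    A single run would only report the per-step statistical error (2% here); the replica spread captures how those errors accumulate through the chain of burn-up steps, which is the quantity underestimated if statistical error alone is evaluated.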

  14. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ⁺ → e⁺νₑν̄γ and π⁺ → e⁺νₑγ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  15. Monte Carlo simulation of neutron counters for safeguards applications

    International Nuclear Information System (INIS)

    Looman, Marc; Peerani, Paolo; Tagziria, Hamid

    2009-01-01

    MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which make up the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and in the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.

  16. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  17. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, improved quasi-static (IQS) method can improve the calculation speed for reactor dynamic simulations. The Monte Carlo IQS method was proposed in this paper, combining the advantages of both the IQS method and MC method. Thus, the Monte Carlo IQS method is beneficial for solving space-time dynamics problems of new concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating adjoint neutron flux, reactor kinetic parameters and shape function were designed and realized. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed, which were used for verification of the Monte Carlo IQS method. (authors)

  18. Lattice gauge theories and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Rebbi, C.

    1981-11-01

    After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions.

  19. Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    2009-01-01

    This document reports the progress and accomplishments achieved in 2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. The project was a theoretical study exploring a novel approach to a persistent problem in Monte Carlo treatments of quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. This project was funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. Its primary objective was to test the feasibility of implementing a novel approach to solving the generic quantum many-body problem, which is one of the most important problems being addressed in theoretical physics today. Instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem'. The sign problem, which will be discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem and is the principal reason that they have not been completely successful in the past. Here, we outline our research on the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method

  20. Monte Carlo sampling on technical parameters in criticality and burn-up-calculations

    International Nuclear Information System (INIS)

    Kirsch, M.; Hannstein, V.; Kilger, R.

    2011-01-01

    The increase in computing power over recent years allows the introduction of Monte Carlo sampling techniques for sensitivity and uncertainty analyses in criticality safety and burn-up calculations. With these techniques it is possible to assess the influence of a variation of the input parameters, within their measured or estimated uncertainties, on the final value of a calculation. The probabilistic result of a statistical analysis can thus complement the traditional method of determining both the nominal (best estimate) and the bounding case of the neutron multiplication factor (k_eff) in criticality safety analyses, e.g. by calculating the uncertainty of k_eff or tolerance limits. Furthermore, the sampling method provides sensitivity information, i.e. it allows determining which of the uncertain input parameters contribute the most to the uncertainty of the system. The application of Monte Carlo sampling methods has become common practice in both industry and research institutes. Within this approach, two main paths are currently under investigation: the variation of nuclear data used in a calculation and the variation of technical parameters such as manufacturing tolerances. This contribution concentrates on the latter case. The newly developed SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is introduced. It defines an interface to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA, which provides the necessary statistical methods for sampling-based analyses. The interfaced codes are programs used to simulate aspects of the nuclear fuel cycle, such as the criticality safety analysis sequence CSAS5 of the SCALE code system, developed by Oak Ridge National Laboratory, or the GRS burn-up system OREST. In the following, first the implementation of SUnCISTT is presented; then, results of its application in an exemplary evaluation of the neutron
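    The general sampling approach described in this record is easy to sketch (this is a toy illustration, not SUnCISTT or SUSA): vary hypothetical technical parameters uniformly within invented manufacturing tolerances, push them through a made-up linear surrogate for k_eff, and read off statistics and a one-sided upper tolerance limit. With 59 random samples, Wilks' formula makes the sample maximum a one-sided 95%/95% tolerance limit.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical linear surrogate for k_eff in two technical parameters
    # (fuel density in g/cm^3, pellet radius in mm); coefficients are invented
    # for illustration only, not taken from any real system.
    def k_eff(density, radius):
        return 0.92 + 0.04 * (density - 10.4) / 10.4 + 0.03 * (radius - 4.1) / 4.1

    # Wilks' formula: with n = 59 samples, the sample maximum is a
    # one-sided 95%/95% upper tolerance limit.
    n = 59
    density = rng.uniform(10.4 - 0.1, 10.4 + 0.1, n)  # tolerance +-0.1 g/cm^3
    radius = rng.uniform(4.1 - 0.05, 4.1 + 0.05, n)   # tolerance +-0.05 mm

    k = k_eff(density, radius)
    print(f"mean k_eff = {k.mean():.5f}, std = {k.std(ddof=1):.5f}")
    print(f"95%/95% upper tolerance limit (Wilks, n=59): {k.max():.5f}")
    ```

    In a real analysis each sample would of course be a full CSAS5 or OREST run rather than a surrogate evaluation; the statistical post-processing is the same.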

  1. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, H

    2000-02-01

    ' and their groups' in their intended purpose. One failure, unfortunately common to many codes (including some leading and generally available codes), is the lack of effort expended in providing a decent statistical and sensitivity analysis package, which would help the user avoid traps such as false convergence. Another failure, this time blameable on us, the users, is our failure to grasp the importance of choosing well, and using sensibly, cross-section data. The impact of such or other incorrect input data on our results is often overlooked. With new developments in computing technology and in variance reduction or acceleration techniques, Monte Carlo calculations can nowadays be performed with very small statistical uncertainties. These are often so low that they become negligible compared to other, sometimes much larger uncertainties such as those due to input data, source definition, geometry, response functions, etc. (abstract truncated)

  2. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy.

  3. Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.

    2006-01-01

    Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition; the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition; and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations.

  4. Artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy

    International Nuclear Information System (INIS)

    Martin, E.; Gschwind, R.; Henriet, J.; Sauget, M.; Makovicka, L.

    2010-01-01

    In order to reduce the computing time needed by Monte Carlo codes in the field of irradiation physics, notably in dosimetry, the authors report the use of artificial neural networks in combination with preliminary Monte Carlo calculations. During the learning phase, Monte Carlo calculations are performed in homogeneous media to allow the building of the neural network. Then, dosimetric calculations (in heterogeneous media, unknown to the network) can be performed by the trained network. Results with an equivalent precision can be obtained within less than one minute on a simple PC, whereas several days are needed with a Monte Carlo calculation.

  5. Herwig: The Evolution of a Monte Carlo Simulation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Monte Carlo event generation has seen significant developments in the last 10 years, starting with preparation for the LHC and then during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focusing on the developments in the Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results of the forthcoming new version of Herwig, Herwig 7.

  6. Monte Carlo tests of the ELIPGRID-PC algorithm

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination, called hot spots, has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for the IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
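    The kind of Monte Carlo check used in this validation is easy to reproduce in miniature (a sketch under simplifying assumptions, not the ELIPGRID algorithm itself): place an axis-aligned elliptical hot spot at a uniformly random position relative to a square sampling grid and count how often a grid node lands inside it.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def detection_probability(a, b, d=1.0, n_trials=200_000):
        """Monte Carlo probability that a square sampling grid of spacing d
        detects an axis-aligned elliptical hot spot with semi-axes a, b < d.
        By periodicity, a centre uniform over one grid cell covers all
        placements; only the cell's four corner nodes can then lie inside."""
        cx = rng.uniform(0.0, d, n_trials)
        cy = rng.uniform(0.0, d, n_trials)
        detected = np.zeros(n_trials, dtype=bool)
        for nx, ny in [(0.0, 0.0), (d, 0.0), (0.0, d), (d, d)]:
            detected |= ((nx - cx) / a) ** 2 + ((ny - cy) / b) ** 2 <= 1.0
        return detected.mean()

    # Sanity check: a circular hot spot of radius r <= d/2 is detected with
    # probability pi * r^2 / d^2 (the disks around the nodes do not overlap).
    p = detection_probability(0.25, 0.25)
    print(p)
    ```

    The real code must also handle tilted ellipses and rectangular (non-square) grids, but the hit-or-miss structure is the same.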

  7. Two proposed convergence criteria for Monte Carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error-magnitude guidelines. Two methods are proposed in this paper to assist in deciding whether N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history-score probability density function (pdf).
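    The VOV check can be sketched in a few lines on synthetic "tally" data (not output from a transport code): a well-behaved tally with a finite fourth moment has a VOV that decays like 1/N, while a heavy-tailed score distribution with a divergent fourth moment keeps the VOV large and erratic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def vov(x):
        """Relative variance of the variance of the sample mean:
        VOV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/N."""
        d = x - x.mean()
        return (d ** 4).sum() / (d ** 2).sum() ** 2 - 1.0 / x.size

    # A well-behaved score distribution (finite fourth moment) versus a
    # heavy-tailed one whose fourth moment diverges, so its VOV never
    # settles down as more histories are run.
    well = rng.exponential(1.0, 100_000)
    heavy = rng.pareto(2.5, 100_000)

    print(vov(well), vov(heavy))
    ```

    In practice (e.g. in MCNP's statistical checks) one looks for the VOV to fall below roughly 0.1 and to keep decreasing as histories accumulate.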

  8. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  9. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n,2n) and (n,3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
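    The second definition is easy to sketch in the simplest possible setting: a one-group infinite homogeneous medium, where the leakage term vanishes and k reduces to the production rate over the absorption rate. The cross-section values below are invented for illustration, and the analog tracking here is far simpler than what a production code does.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # One-group, infinite homogeneous medium; hypothetical data (1/cm):
    sigma_a = 0.12            # absorption
    sigma_s = 0.30            # scattering
    sigma_f = 0.05            # fission (a subset of absorption)
    nu = 2.43                 # average neutrons released per fission
    sigma_t = sigma_a + sigma_s

    def k_estimate(n_histories=100_000):
        production = 0.0
        absorption = 0.0
        for _ in range(n_histories):
            while True:                                   # follow one history
                if rng.random() < sigma_a / sigma_t:      # collision absorbs
                    absorption += 1.0
                    if rng.random() < sigma_f / sigma_a:  # absorption fissions
                        production += nu
                    break
                # otherwise the neutron scatters and the history continues
        # No leakage in an infinite medium, so k = production / absorption,
        # which converges to nu * sigma_f / sigma_a here.
        return production / absorption

    k = k_estimate()
    print(k)
    ```

    For these numbers the analytic value is nu * sigma_f / sigma_a = 1.0125, so the tally can be checked directly.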

  10. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    Science.gov (United States)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

    In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. A comparison of the analytical results with the Monte Carlo ones shows that the results obtained using the albedo coefficient from the ICRP document most closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.

  11. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method

    International Nuclear Information System (INIS)

    Renaud, M; Seuntjens, J; Roberge, D

    2014-01-01

    Purpose: To assess the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPUs). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes under the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE), and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed gains of 937× and 508× over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy.

  12. Crop canopy BRDF simulation and analysis using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.

    2006-01-01

    The authors design the random process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and

  13. Monte Carlo radiation transport: A revolution in science

    International Nuclear Information System (INIS)

    Hendricks, J.

    1993-01-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods, which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  14. Suppression of the initial transient in Monte Carlo criticality simulations

    International Nuclear Information System (INIS)

    Richet, Y.

    2006-12-01

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can severely bias the k-effective estimation, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to assess the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best methodologies observed in these tests are selected and used to improve industrial Monte Carlo criticality calculations. (author)
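    The effect can be illustrated on a synthetic cycle-k-effective sequence (a toy exponential transient plus Gaussian noise, not the Brownian-bridge test of the thesis): the naive mean over all cycles is biased by the arbitrary starting source, while discarding the transient cycles removes most of the bias.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic cycle-k_eff sequence: an exponentially decaying initial
    # transient (from an arbitrary starting source) on top of stationary
    # noise around the true k_eff = 1.0. All parameters are invented.
    n_cycles = 2000
    c = np.arange(n_cycles)
    k_cycle = 1.0 + 0.05 * np.exp(-c / 50.0) + rng.normal(0.0, 0.005, n_cycles)

    naive = k_cycle.mean()           # biased upward by the transient
    skipped = k_cycle[300:].mean()   # transient suppressed by discarding cycles

    print(f"naive = {naive:.5f}, after discarding 300 cycles = {skipped:.5f}")
    ```

    The methodology in this record replaces the fixed 300-cycle cutoff with statistical tests that detect where the sequence becomes stationary.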

  15. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  16. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual

  17. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    Science.gov (United States)

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of the ²²²Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the ²²²Rn decay-product concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
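    The propagation step can be sketched generically (the measurement model, factor values and uncertainties below are invented for illustration, not those of the paper): sample every factor from its distribution, apply the measurement equation to each sample, and read the result's uncertainty off the output distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    # Hypothetical measurement model: activity concentration C = N / (eps * F * V),
    # with each factor drawn from its assumed uncertainty distribution.
    N   = rng.normal(1500.0, 1500.0 ** 0.5, n)  # detector counts (Poisson ~ normal)
    eps = rng.normal(0.25, 0.01, n)             # detection efficiency
    F   = rng.normal(0.85, 0.03, n)             # aerosol collection efficiency
    V   = rng.normal(0.120, 0.002, n)           # sampled air volume (m^3)

    C = N / (eps * F * V)
    print(f"C = {C.mean():.0f} +/- {C.std(ddof=1):.0f} (per m^3, arbitrary scale)")
    ```

    Unlike a first-order (GUM-style) propagation, this approach handles the non-linear division and any non-normal input distributions without approximation.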

  18. Monte Carlo calculated CT numbers for improved heavy ion treatment planning

    Directory of Open Access Journals (Sweden)

    Qamhiyeh Sima

    2014-03-01

    Full Text Available Better knowledge of CT number values and their uncertainties can be applied to improve heavy ion treatment planning. We developed a novel method to calculate CT numbers for a computed tomography (CT) scanner using the Monte Carlo (MC) code BEAMnrc/EGSnrc. To generate the initial beam shape and spectra we conducted full simulations of the X-ray tube, filters and beam shapers for a Siemens Emotion CT. The simulation output files were analyzed to calculate projections of a phantom with inserts. A simple filtered back-projection (FBP) reconstruction algorithm using a Ram-Lak filter was applied to calculate the pixel values, which represent an attenuation coefficient normalized in such a way as to give zero for water (Hounsfield units, HU). Measured and Monte Carlo calculated CT numbers were compared. The average deviation between measured and simulated CT numbers was 4 ± 4 HU and the standard deviation σ was 49 ± 4 HU. The simulation also correctly predicted the behaviour of H-materials compared to Gammex tissue substitutes. We believe the developed approach represents a useful new tool for evaluating the effect of CT scanner and phantom parameters on CT number values.

  19. PEPSI - a Monte Carlo generator for polarized leptoproduction

    International Nuclear Information System (INIS)

    Mankiewicz, L.

    1992-01-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the Lepto 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons. (orig.)

  20. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
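    The second stage, solving a linear system by Monte Carlo, can be sketched with a classical random-walk estimator of the Neumann series (assumed details: a Jacobi splitting and a tiny diagonally dominant system standing in for the Crank-Nicolson matrix; the paper's actual scheme may differ).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Small diagonally dominant system standing in for a Crank-Nicolson matrix;
    # Jacobi splitting A x = b  ->  x = H x + f, with spectral radius(H) < 1
    # so the Neumann series x = f + H f + H^2 f + ... converges.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    Dinv = np.diag(1.0 / np.diag(A))
    H = np.eye(3) - Dinv @ A
    f = Dinv @ b

    # Transition probabilities proportional to |H|; the importance weight
    # H/p is then sign(H) times the row sum, so each walk scores the series.
    row_sums = np.abs(H).sum(axis=1)
    P = np.abs(H) / row_sums[:, None]

    def mc_solve_component(i, n_walks=4000, max_steps=25):
        total = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, f[i]
            for _ in range(max_steps):
                nxt = rng.choice(3, p=P[state])
                weight *= np.sign(H[state, nxt]) * row_sums[state]
                state = nxt
                score += weight * f[state]   # adds the (H^k f)_i term
            total += score
        return total / n_walks

    x_mc = np.array([mc_solve_component(i) for i in range(3)])
    x_exact = np.linalg.solve(A, b)
    print(x_mc, x_exact)
    ```

    The appeal of such estimators is that each solution component can be computed independently and in parallel, without storing or factorizing the full matrix.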

  1. NUEN-618 Class Project: Actually Implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Vega, R. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunner, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-14

    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time, which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations as well as the current workhorse method, known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, as will be shown in this section as well.

  2. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the works of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model considering first parameter uncertainty only and then parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicated that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  3. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans

    2015-01-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
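    The percentile-ratio step behind the CSAF is straightforward to reproduce on a stand-in population distribution (a lognormal with invented parameters, not the paper's PBK output): divide upper percentiles of the simulated population by the median.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Stand-in population distribution of a metabolite-formation quantity
    # (lognormal, parameters invented for illustration); the CSAF is an
    # upper percentile divided by the 50th percentile (median).
    rates = rng.lognormal(mean=0.0, sigma=0.7, size=500_000)

    p50, p90, p99 = np.percentile(rates, [50, 90, 99])
    csaf_90 = p90 / p50
    csaf_99 = p99 / p50
    print(f"CSAF(90th) = {csaf_90:.2f}, CSAF(99th) = {csaf_99:.2f}")
    ```

    Comparing such ratios with the default kinetic uncertainty factor of 3.16 is exactly the kind of check the abstract describes, here on made-up numbers.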

  4. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
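
    The percentile-ratio construction of the CSAF described above can be sketched as follows; the lognormal population distribution is a stand-in assumption for illustration, not the paper's PBK output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the PBK output: a simulated population
# distribution of 1'-sulfooxymethyleugenol formation (arbitrary units);
# the lognormal shape and spread are assumptions for illustration.
formation = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# CSAF for intraspecies kinetics: an upper percentile of the predicted
# population distribution divided by its median (50th percentile).
median = np.percentile(formation, 50)
csaf_90 = np.percentile(formation, 90) / median
csaf_99 = np.percentile(formation, 99) / median
```

As in the study, the CSAF grows with the fraction of the population to be covered, so the 99th-percentile factor exceeds the 90th-percentile one.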

  5. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Next, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We first validate and optimize the perturbed depletion scheme on the depletion of a PWR-like fuel cell. This technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
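
    The correlated-sampling idea above (reusing one set of Monte Carlo histories and reweighting each by the ratio of perturbed to reference probability densities) can be sketched on a toy free-flight problem; the exponential flight model and cross-section values are illustrative, not the TRIPOLI-4 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference "transport": exponential free-flight lengths with total
# cross section sigma0 (an illustrative stand-in for full histories).
sigma0, sigma1 = 1.0, 1.1          # reference and perturbed cross sections
x = rng.exponential(1.0 / sigma0, size=200_000)

# Correlated sampling: reuse the reference histories, reweighting each
# by the ratio of perturbed to reference flight-length pdfs,
#   w = [sigma1 * exp(-sigma1 x)] / [sigma0 * exp(-sigma0 x)].
w = (sigma1 / sigma0) * np.exp(-(sigma1 - sigma0) * x)

mean_ref = x.mean()                # estimates 1/sigma0
mean_pert = (w * x).mean()         # estimates 1/sigma1, no new histories
```

Because both estimates share the same random histories, their difference (the perturbation effect) is obtained with far lower variance than two independent simulations would give.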

  6. On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples, to quantify the systematic errors and the statistical uncertainties on the estimate of Omega(0) derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z=0.83. We identify...... the scatter around the relation between cluster X-ray luminosity and temperature to be a source of systematic error, of the order of Delta(syst)Omega(0) = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Omega(0) is 0.66. The uncertainties on the shape...

  7. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al. and validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding-window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement between calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding-window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a slight reduction of the high-dose volume due to set-up errors; adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for set-up errors can be essential for correct identification of the value and position of the hot spot.

  8. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al. and validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding-window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement between calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding-window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a slight reduction of the high-dose volume due to set-up errors; adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for set-up errors can be essential for correct identification of the value and position of the hot spot.
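
    The fluence-convolution step itself (blurring the planned fluence with the set-up error distribution) can be sketched in one dimension; the grid and field size below are illustrative stand-ins, with only the 2 mm Gaussian sigma taken from the study.

```python
import numpy as np

# Illustrative 1-D fluence profile on a 1 mm grid: a 40 mm open "field".
# An odd-length grid keeps the kernel centre on a grid point.
x = np.linspace(-50.0, 50.0, 101)                # positions in mm
fluence = np.where(np.abs(x) <= 20.0, 1.0, 0.0)

# Gaussian set-up error kernel with sigma = 2 mm, as in the study above.
sigma = 2.0
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                           # unit area on the grid

# Fluence convolution: the expected fluence under random set-up errors.
blurred = np.convolve(fluence, kernel, mode="same")
```

The convolution preserves the total fluence while rounding off the field edges, which is exactly the penumbral broadening that random set-up errors produce on average.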

  9. Distribution network design under demand uncertainty using genetic algorithm and Monte Carlo simulation approach: a case study in pharmaceutical industry

    Science.gov (United States)

    Izadi, Arman; Kimiagari, Ali Mohammad

    2014-05-01

    Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To account for the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, yielding a 14% reduction in total supply chain costs. Moreover, it imposes the least cost variation caused by fluctuations in customer demands (such as epidemic disease outbreaks in some areas of the country) on the logistical system. This research was carried out in one of the largest pharmaceutical distribution firms in Iran.
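
    The scenario-based robustness comparison can be sketched as follows; the demand statistics, candidate structures and cost figures are all made up for illustration, and the genetic-algorithm search is replaced by a direct comparison of two candidates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo demand scenarios for three customer zones
# (units per period); means and spreads are illustrative assumptions.
scenarios = rng.normal(loc=[100.0, 80.0, 120.0],
                       scale=[20.0, 15.0, 30.0],
                       size=(10_000, 3)).clip(min=0.0)

# Two candidate network structures: fixed cost of the opened
# distribution centres + per-unit distribution cost for each zone.
candidates = {
    "two_DCs":   (500.0, np.array([1.0, 1.2, 1.1])),
    "three_DCs": (800.0, np.array([0.8, 0.9, 0.7])),
}

results = {}
for name, (fixed, unit) in candidates.items():
    total = fixed + scenarios @ unit                  # cost per scenario
    results[name] = (total.mean(), total.std() / total.mean())

# The structure with the lowest coefficient of variation of costs is
# taken as the most robust choice, as in the study above.
robust = min(results, key=lambda n: results[n][1])
```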

  10. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  11. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.

  12. Geometry and Dynamics for Markov Chain Monte Carlo

    Science.gov (United States)

    Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2018-03-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains that explore probability densities. The method emerges from physics and geometry, and these links have been studied extensively by a series of authors over the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field, which we believe will become increasingly relevant to applied scientists.
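
    The Hamiltonian Monte Carlo recipe the review covers (momentum resampling, leapfrog integration of the dynamics, Metropolis correction) can be sketched minimally for a one-dimensional standard normal target; the step size and trajectory length below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal, so the potential is U(q) = q^2/2, grad U = q.
def grad_U(q):
    return q

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.normal()                       # resample the momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject with the Hamiltonian H = U(q) + p^2/2.
    dH = (0.5 * q_new**2 + 0.5 * p_new**2) - (0.5 * q**2 + 0.5 * p**2)
    return q_new if rng.random() < np.exp(-dH) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

Because the leapfrog integrator is symplectic and time-reversible, the energy error dH stays small and almost every proposal is accepted, which is the efficiency argument made above.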

  13. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, the limit of macrotasking may be reached, with each test flight, and each split test flight, being a separate task.
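
    The array-compaction idea above (closing the gaps left by completed flights so the remaining flights stay in contiguous array loops) can be sketched on a toy batched slab-transmission problem; the geometry and absorption probability are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batched transport: flights advance through a slab of thickness L
# with unit-mean exponential free paths and are absorbed with
# probability p_abs at each collision. All flights are processed
# together in array operations; finished flights are compacted away.
n_flights, L, p_abs = 100_000, 5.0, 0.3
x = np.zeros(n_flights)              # current flight positions
transmitted = 0

while x.size:
    x = x + rng.exponential(1.0, size=x.size)   # next collision site
    escaped = x >= L
    transmitted += int(escaped.sum())
    x = x[~escaped]                             # compact out escaped flights
    x = x[rng.random(x.size) >= p_abs]          # compact out absorbed flights

transmission = transmitted / n_flights          # ~ exp(-p_abs * L) here
```

Each pass through the loop works on a dense array with no per-flight branching, which is the property that makes such algorithms vectorize well.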

  14. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  15. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.

  16. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef

    2015-01-07

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
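
    As a minimal illustration of the multilevel idea that MIMC generalises (estimating corrections between coupled discretisation levels instead of a single fine-level quantity), here is a sketch of a plain MLMC estimator on a toy Brownian-integral problem; the problem and level choices are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate E[Y^2] with Y = integral_0^1 W_t dt for
# Brownian motion W (true value 1/3), discretised by a left-rule
# quadrature on 2^(l+2) time steps at level l.
def level_correction(l, n_samples):
    """E[P_l - P_{l-1}] using the SAME Brownian increments per sample."""
    n_fine = 2 ** (l + 2)
    dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=(n_samples, n_fine))
    W = np.cumsum(dW, axis=1) - dW              # W at left endpoints
    P_fine = W.mean(axis=1) ** 2                # left-rule quadrature
    if l == 0:
        return P_fine.mean()
    W_coarse = W[:, ::2]                        # same path, coarser grid
    P_coarse = W_coarse.mean(axis=1) ** 2
    return (P_fine - P_coarse).mean()

# Telescoping sum over levels: coarse estimate plus level corrections.
estimate = sum(level_correction(l, 40_000) for l in range(5))
```

The corrections shrink rapidly with level because fine and coarse quadratures share each Brownian path, so most of the sampling effort can be spent on the cheap coarse levels; MIMC applies the same construction with mixed differences over several indices at once.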

  17. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and the experiment, and the reason for the difference is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens: the relative difference between the results of the analog Monte Carlo simulation and the experiment changes from 19% to 0.19%. (authors)
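
    The Rossi-α analysis itself reduces to fitting a decaying exponential plus a flat accidental background to the time-correlation histogram. The sketch below does this on synthetic data with made-up constants; it is not the Geant4 simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Rossi-alpha histogram: correlated-count excess decaying as
# exp(-alpha t) on a flat accidental background, plus Gaussian noise.
alpha_true = 100.0                        # prompt-neutron decay constant, 1/s
t = np.linspace(0.0, 0.1, 200)            # time-gate centres, s
counts = 10.0 + 100.0 * np.exp(-alpha_true * t) + rng.normal(0.0, 1.0, t.size)

# Accidentals level from the flat tail, then a log-linear fit of the
# background-subtracted excess in the exponential-dominated window.
background = counts[t > 0.08].mean()
fit_win = t < 0.02
slope, _ = np.polyfit(t[fit_win], np.log(counts[fit_win] - background), 1)
alpha_fit = -slope
```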

  18. Quasi-Monte Carlo methods for lattice systems. A first look

    International Nuclear Information System (INIS)

    Jansen, K.; Cyprus Univ., Nicosia; Leovey, H.; Griewank, A.; Nube, A.; Humboldt-Universitaet, Berlin; Mueller-Preussker, M.

    2013-02-01

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.

  19. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N{sup -1/2}, where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N{sup -1}. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
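
    The N^(-1/2) versus N^(-1) scaling can be illustrated on a toy integral, using a base-2 van der Corput sequence as a simple one-dimensional low-discrepancy point set; the integrand is an arbitrary smooth example, not a lattice observable.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            k, digit = divmod(k, base)   # peel off digits of the index
            denom *= base
            f += digit / denom           # mirror them about the radix point
        seq[i] = f
    return seq

rng = np.random.default_rng(0)
n = 4096
f = lambda x: x * x                      # integrand; true integral is 1/3

err_mc = abs(f(rng.random(n)).mean() - 1.0 / 3.0)      # ~ N^(-1/2)
err_qmc = abs(f(van_der_corput(n)).mean() - 1.0 / 3.0)  # ~ N^(-1)
```

For this sample size the quasi-random error is typically one to two orders of magnitude below the pseudo-random one, consistent with the improved scaling described above.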

  20. Monte Carlo calculations of thermodynamic properties of deuterium under high pressures

    International Nuclear Information System (INIS)

    Levashov, P R; Filinov, V S; BoTan, A; Fortov, V E; Bonitz, M

    2008-01-01

    Two different numerical approaches have been applied to calculations of shock Hugoniots and a compression isentrope of deuterium: direct path integral Monte Carlo and reactive Monte Carlo. The results show good agreement between the two methods at intermediate pressures, which indicates that dissociation effects are correctly accounted for in the direct path integral Monte Carlo method. Experimental data on both shock and quasi-isentropic compression of deuterium are well described by the calculations. Thus the dissociation of deuterium molecules in these experiments, together with the interparticle interaction, plays a significant role.

  1. Monte Carlo simulated dynamical magnetization of single-chain magnets

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Liu, Bang-Gui, E-mail: bgliu@iphy.ac.cn

    2015-03-15

    Here, a dynamical Monte Carlo (DMC) method is used to study the temperature-dependent dynamical magnetization of the Mn{sub 2}Ni system, a typical example of single-chain magnets with strong magnetic anisotropy. Simulated magnetization curves are in good agreement with experimental results at typical temperatures and sweeping rates, and simulated coercive fields as functions of temperature are also consistent with experimental curves. Further analysis indicates that the magnetization reversal is determined by both thermally activated effects and quantum spin tunneling. These results can help explore basic properties and applications of such important magnetic systems. - Highlights: • Monte Carlo simulated magnetization curves are in good agreement with experimental results. • Simulated coercive fields as functions of temperature are consistent with experimental results. • The magnetization reversal is understood in terms of the Monte Carlo simulations.
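
    A minimal Monte Carlo sketch of field-swept hysteresis in a coupled spin chain is given below; the chain length, coupling, temperature and sweep schedule are arbitrary choices rather than the Mn{sub 2}Ni parameters, and single-spin Metropolis dynamics stands in for the full DMC scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ising-like spin chain with ferromagnetic coupling J under a swept
# field H; all parameters are illustrative.
N, J, T = 100, 1.0, 0.5
spins = -np.ones(N)                     # start saturated against the sweep

def sweep(H, n_sweeps=2):
    """A few Metropolis sweeps at field H (periodic boundaries)."""
    for _ in range(n_sweeps * N):
        i = rng.integers(N)
        dE = 2.0 * spins[i] * (J * (spins[i - 1] + spins[(i + 1) % N]) + H)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1

def run_branch(field_values):
    ms = []
    for H in field_values:
        sweep(H)
        ms.append(spins.mean())
    return np.array(ms)

fields = np.linspace(-3.0, 3.0, 61)
m_up = run_branch(fields)               # sweep -3 -> +3
m_down = run_branch(fields[::-1])       # sweep +3 -> -3

i0 = np.argmin(np.abs(fields))          # index of H = 0 on both branches
coercive_gap = m_down[i0] - m_up[i0]    # loop opening at zero field
```

Because reversal requires a thermally activated nucleation event, the magnetization lags the field and the two branches stay apart at H = 0, which is the finite-sweep-rate coercivity discussed above.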

  2. LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events

    CERN Document Server

    Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V

    2008-01-01

    In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase intended mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.

  3. Exponentially-convergent Monte Carlo via finite-element trial spaces

    International Nuclear Information System (INIS)

    Morel, Jim E.; Tooley, Jared P.; Blamer, Brandon J.

    2011-01-01

    Exponentially-Convergent Monte Carlo (ECMC) methods, also known as adaptive Monte Carlo and residual Monte Carlo methods, were the subject of intense research over a decade ago, but they never became practical for solving realistic problems. We believe that the failure of previous efforts may be related to the choice of trial spaces, which were global and thus highly oscillatory. As an alternative, we consider finite-element trial spaces, which have the ability to treat fully realistic problems. As a first step towards more general methods, we apply piecewise-linear trial spaces to the spatially-continuous two-stream transport equation. Using this approach, we achieve exponential convergence and computationally demonstrate several fundamental properties of finite-element based ECMC methods. Finally, our results indicate that the finite-element approach clearly deserves further investigation. (author)

  4. Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation

    International Nuclear Information System (INIS)

    Perell, R. L.

    2002-01-01

    The basic methods for solution of the transport equation that are in practical use today are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method typically consumes less computation time, the Monte Carlo method is often preferred for detailed and general description of three-dimensional geometries, and for calculations using cross sections that are point-wise energy dependent. For analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from solution of the direct and the adjoint transport equations. Algorithms to calculate sensitivities to cross sections with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons.

  5. Simplified monte carlo simulation for Beijing spectrometer

    International Nuclear Information System (INIS)

    Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei

    1986-01-01

    A Monte Carlo method based on representing detector performance by parametrized functions, and on transforming the values of kinematical variables into "measured" ones by means of smearing, has been used to write a Monte Carlo simulation of the performance of the Beijing Spectrometer (BES) in FORTRAN, named BESMC. It can be used to investigate the multiplicity, the particle types, and the distribution of four-momenta of the final states of electron-positron collisions, as well as the response of the BES to these final states. Thus, it provides a means to examine whether the overall design of the BES is reasonable and to decide the physics topics of the BES.
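
    The smearing idea described above reduces, in its simplest form, to degrading each generated quantity with an assumed detector resolution; the momentum range and the 2% Gaussian resolution below are arbitrary illustrative numbers, not BES parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" track momenta from the event generator (illustrative range).
p_true = rng.uniform(0.5, 2.0, size=50_000)        # GeV/c
resolution = 0.02                                  # assumed 2% relative sigma

# Smearing: transform true values into "measured" ones by adding
# Gaussian noise scaled to the detector's relative resolution.
p_meas = p_true * (1.0 + resolution * rng.normal(size=p_true.size))
```

Histogramming p_meas instead of p_true then reproduces, at the level of distributions, what a full detector simulation would produce much more expensively.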

  6. Monte Carlo simulation of gas Cerenkov detectors

    International Nuclear Information System (INIS)

    Mack, J.M.; Jain, M.; Jordan, T.M.

    1984-01-01

    Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.

  7. Proton therapy analysis using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Noshad, Houshyar [Center for Theoretical Physics and Mathematics, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of)]. E-mail: hnoshad@aeoi.org.ir; Givechi, Nasim [Islamic Azad University, Science and Research Branch, Tehran (Iran, Islamic Republic of)

    2005-10-01

    The range and straggling data obtained from the Transport of Ions in Matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue using the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy, as well as the dose deposited in the tissue, was computed. The good agreement between our results and the corresponding experimental values demonstrates the reliability of our Monte Carlo method.
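
    The stopping-depth part of such a calculation can be sketched by sampling each proton's range with Gaussian straggling; the roughly 31 mm mean range assumed for 60 MeV protons in tissue and the 0.3 mm straggling width are rough textbook-scale assumptions, not TRIM output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a stopping depth for each proton: mean range plus Gaussian
# range straggling (both values are illustrative assumptions).
n = 100_000
mean_range, straggle = 31.0, 0.3                 # mm
stop_depth = rng.normal(mean_range, straggle, n)

# Histogram of stopping points: sharply peaked near the mean range,
# with a steep distal falloff set by the straggling width.
depth_edges = np.arange(0.0, 35.0, 0.5)
stops, _ = np.histogram(stop_depth, bins=depth_edges)
peak_depth = depth_edges[np.argmax(stops)]
```

The concentration of stopping points near the end of range is the geometric origin of the Bragg peak exploited in proton therapy.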

  8. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    Science.gov (United States)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and
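
    The kernel-based optimization described above can be sketched in miniature: the dose is modelled as a weighted sum of pre-computed per-beamlet dose kernels, and the non-negative weights are fitted to a target dose. The random kernels, sizes and projected-gradient scheme below are illustrative stand-ins, not the MMCTP algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plan optimization: dose = kernels @ weights, with weights kept
# non-negative; kernels and target are random stand-ins.
n_vox, n_beamlets = 200, 20
kernels = rng.random((n_vox, n_beamlets))      # dose per unit beamlet weight
w_true = rng.random(n_beamlets)
target = kernels @ w_true                      # an achievable target dose

# Projected gradient descent on ||kernels @ w - target||^2, w >= 0.
w = np.zeros(n_beamlets)
lr = 1.0 / np.linalg.norm(kernels, 2) ** 2     # step size from spectral norm
for _ in range(2000):
    grad = kernels.T @ (kernels @ w - target)
    w = np.maximum(w - lr * grad, 0.0)         # project onto w >= 0

residual = np.linalg.norm(kernels @ w - target) / np.linalg.norm(target)
```

Real systems replace the least-squares objective with dose-volume constraints, but the structure (precomputed Monte Carlo kernels, iterative weight updates) is the same.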

  9. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work determines the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Finally, simulations of the detector and a point source were performed to obtain the correction factors at 5 cm, 15 cm and 25 cm, together with simulations of the detector-phantom arrangement for validation of the method and the final calculation of the efficiency. These show that, in implementing the Monte Carlo method, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the measurement will actually be made. Efficiency curves and the minimum detectable activity for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)
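
The distance sensitivity described above can be illustrated with a toy on-axis solid-angle model; the detector radius and intrinsic efficiency below are assumed values for illustration, not the identiFINDER's simulated parameters:

```python
import math

def point_source_efficiency(distance_cm, radius_cm=2.5, eff_intrinsic=0.3):
    """Toy on-axis geometric model (not the paper's full Monte Carlo):
    absolute efficiency = intrinsic efficiency x fractional solid angle
    of a disc detector of the given radius, seen from a point source.
    The detector radius and intrinsic efficiency are assumed values."""
    frac_solid_angle = 0.5 * (1.0 - distance_cm / math.hypot(distance_cm, radius_cm))
    return eff_intrinsic * frac_solid_angle

# efficiency falls rapidly with distance, which is why simulating at a
# distance other than the measurement distance biases the result
effs = {d: point_source_efficiency(d) for d in (5.0, 15.0, 25.0)}
```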

  10. A contribution to the Monte Carlo method in the reactor theory

    International Nuclear Information System (INIS)

    Lieberoth, J.

    1976-01-01

    The report contributes to the further development of the Monte Carlo method for solving the neutron transport problem. The necessary fundamentals, mainly of a statistical nature, are collected and partly derived, such as the statistical weight, the use of random numbers and the Monte Carlo integration method. Special emphasis is placed on the so-called team method, which helps to reduce the statistical error of Monte Carlo estimates, and on the path method, which can be used to calculate the neutron fluxes at pre-defined local points

  11. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to avoid re-sampling the collision distance at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method replaces rejection sampling in the Woodcock method with a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy
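
The weighting scheme the abstract describes can be sketched for the simplest case, a homogeneous purely absorbing slab; the cross sections and weight cutoff here are illustrative, not taken from the paper:

```python
import math, random

def woodcock_weighted(sigma, sigma_maj, slab_len, n=200_000, rng=random.Random(1)):
    """Estimate transmission through a homogeneous purely absorbing slab
    using weighted delta-tracking: at each tentative (virtual) collision,
    instead of accepting a real collision with probability sigma/sigma_maj,
    the particle survives with its weight multiplied by the non-collision
    probability."""
    transmitted = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            # flight distance sampled with the majorant cross section
            x += -math.log(1.0 - rng.random()) / sigma_maj
            if x >= slab_len:                 # escaped: score the carried weight
                transmitted += w
                break
            w *= 1.0 - sigma / sigma_maj      # tentative collision: reduce weight
            if w < 1e-6:                      # crude cutoff (production codes roulette)
                break
    return transmitted / n

est = woodcock_weighted(sigma=1.0, sigma_maj=2.0, slab_len=2.0)
# the analytic transmission is exp(-sigma * slab_len) = exp(-2)
```

Because each survivor carries the expected survival weight rather than a 0/1 rejection outcome, the estimator keeps the Woodcock boundary-free tracking while avoiding wasted samples where acceptance is rare.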

  12. The Monte Carlo approach to the economics of a DEMO-like power plant

    Energy Technology Data Exchange (ETDEWEB)

    Bustreo, Chiara, E-mail: chiara.bustreo@igi.cnr.it; Bolzonella, Tommaso; Zollino, Giuseppe

    2015-10-15

    Highlights: • A steady state DEMO-like power plant is modelled with the FRESCO code. • The Monte Carlo method is used to assess the probability distribution of the COE. • Uncertainties on technical and economic aspects make the COE vary over a large range. • The COE can be nearly 2/3 to nearly 4 times the cost derived deterministically. - Abstract: An early assessment of the economics of a fusion power plant is a key step to ensure the technology's viability in a future global energy system. The FRESCO code is here used to generate the technical, physical and economic model of a steady state DEMO-like power plant whose features are taken from the current European research activities on the DEMO design definition. The Monte Carlo method is used to perform stochastic analyses in order to assess the weight, on the cost of electricity, of uncertainties in technical and economic aspects. This study demonstrates that a stochastic approach offers a much better perspective over the spectrum of values that could be expected for the cost of electricity from fusion. Specifically, this analysis shows that the cost of electricity of the DEMO-like power plant studied could vary over quite a large range, from nearly 2/3 to nearly 4 times the cost derived through a deterministic approach in which reference values for all the stochastic parameters are taken from the literature.
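
The stochastic approach can be sketched as follows; the distributions and the 1 GWe plant figures below are invented for illustration and are not the FRESCO model's actual inputs:

```python
import random, statistics

def coe_distribution(n=20_000, rng=random.Random(11)):
    """Sketch of the stochastic approach described in the abstract, with
    invented illustrative distributions (not FRESCO's inputs): each draw
    samples cost drivers for a 1 GWe plant and computes a levelised cost
    of electricity, yielding a spread rather than a single point value."""
    samples = []
    for _ in range(n):
        capital = rng.triangular(4e9, 12e9, 7e9)        # overnight capital cost [$]
        annuity = 0.08                                  # crude fixed charge rate [1/yr]
        availability = rng.uniform(0.5, 0.85)           # plant availability factor
        energy_mwh = availability * 1_000.0 * 8760.0    # annual output of 1 GWe [MWh]
        samples.append(capital * annuity / energy_mwh)  # COE [$/MWh]
    return min(samples), statistics.median(samples), max(samples)

lo, med, hi = coe_distribution()
# even this toy model spans a factor of several between low and high COE
```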

  13. A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis

    International Nuclear Information System (INIS)

    Braun, G.E.

    1986-01-01

    Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO3, Fe2O3 and CaF2 with an α-SiO2 (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
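
The counting-statistics propagation can be sketched on a toy two-phase problem (the paper solves a full linear system for n phases; the direct fraction formula below is a deliberate simplification):

```python
import math, random, statistics

def phase_fraction_uncertainty(counts, n_trials=5_000, rng=random.Random(42)):
    """Toy Monte Carlo error propagation in the spirit of the paper (which
    solves a full linear system): measured peak counts carry Poisson noise
    (std = sqrt(N)); each trial perturbs the counts, recomputes a phase
    fraction, and the spread of the results estimates the propagated error."""
    fractions = []
    for _ in range(n_trials):
        perturbed = [rng.gauss(c, math.sqrt(c)) for c in counts]
        fractions.append(perturbed[0] / sum(perturbed))
    return statistics.fmean(fractions), statistics.stdev(fractions)

mean_w, err_w = phase_fraction_uncertainty([10_000, 30_000])
```

The same resampling loop applies unchanged when the fraction formula is replaced by the solution of the full linear system, which is what makes the approach general.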

  14. Financial risk analysis using the Monte Carlo simulation method (case study: PT. Phase Delta Control); Analisis Risiko Finansial Dengan Metode Simulasi Monte Carlo (Studi Kasus: PT. Phase Delta Control)

    Directory of Open Access Journals (Sweden)

    Atikah Aghdhi Pratiwi

    2016-10-01

    Full Text Available Fundamentally, the purpose of a company is to make a profit and enrich its owners. This is manifested in the development and achievement of good performance, from both a financial and an operational perspective. In reality, however, not all companies achieve good performance; one reason is exposure to risk, which can threaten both the achievement of a company's objectives and its existence. Companies therefore need a picture of the possible conditions and financial projections in future periods as affected by risk. One suitable method is Monte Carlo simulation. The research was conducted at PT. Phase Delta Control using historical data on production/sales volume, production cost and selling price. The historical data feed a Monte Carlo simulation in which random numbers describe the probability of each risk variable, reflecting reality. The main result is the estimated profitability of PT. Phase Delta Control in a given period. The profit estimate remains an uncertain variable because of the underlying uncertainty
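
A minimal sketch of the simulation loop, with invented distributions standing in for the fits to PT. Phase Delta Control's historical data:

```python
import random, statistics

def simulate_profit(n=20_000, rng=random.Random(7)):
    """Sketch of the paper's approach with invented distributions (the
    real study fits them to PT. Phase Delta Control's historical data):
    volume, price and unit cost are sampled per trial, and the resulting
    profits form a distribution rather than a point estimate."""
    profits = []
    for _ in range(n):
        volume = rng.triangular(8_000, 12_000, 10_000)  # units sold
        price = rng.gauss(15.0, 1.0)                    # selling price per unit
        cost = rng.gauss(11.0, 0.8)                     # production cost per unit
        profits.append(volume * (price - cost))
    # report the mean and the 5th percentile (a simple downside-risk measure)
    return statistics.fmean(profits), statistics.quantiles(profits, n=20)[0]

mean_profit, profit_p5 = simulate_profit()
```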

  15. Three-dimensional Monte Carlo calculations of the neutron and γ-ray fluences in the TFTR diagnostic basement and comparisons with measurements

    International Nuclear Information System (INIS)

    Liew, S.L.; Ku, L.P.; Kolibal, J.G.

    1985-10-01

    Realistic calculations of the neutron and γ-ray fluences in the TFTR diagnostic basement have been carried out with three-dimensional Monte Carlo models. Comparisons with measurements show that the results are well within the experimental uncertainties

  16. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error
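
Although the abstract is truncated, the multilevel telescoping estimator that CMLMC builds on can be sketched on a standard demo problem (geometric Brownian motion with Euler steps); CMLMC itself additionally adapts tolerances and per-level sample sizes:

```python
import math, random

def mlmc_level(l, n_samples, s0, r, sig, t_end, rng):
    """Monte Carlo average of the level-l term of the MLMC telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], for the terminal value of a
    geometric Brownian motion discretised with 2**l Euler steps. Coarse and
    fine paths share the same Brownian increments, so their difference has
    small variance."""
    total = 0.0
    n_fine = 2 ** l
    h = t_end / n_fine
    for _ in range(n_samples):
        s_fine = s_coarse = s0
        if l == 0:
            g = rng.gauss(0.0, math.sqrt(h))
            s_fine += s_fine * (r * h + sig * g)
            total += s_fine
        else:
            for _ in range(n_fine // 2):        # one coarse step = two fine steps
                g1 = rng.gauss(0.0, math.sqrt(h))
                g2 = rng.gauss(0.0, math.sqrt(h))
                s_fine += s_fine * (r * h + sig * g1)
                s_fine += s_fine * (r * h + sig * g2)
                s_coarse += s_coarse * (r * 2.0 * h + sig * (g1 + g2))
            total += s_fine - s_coarse
    return total / n_samples

# telescoping sum over 5 levels; sample counts shrink on higher levels
est = sum(mlmc_level(l, 20_000 >> l, 1.0, 0.05, 0.2, 1.0, random.Random(l))
          for l in range(5))
# converges to E[S_T] = exp(r * T) = exp(0.05)
```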

  17. Direct Monte Carlo simulation of nanoscale mixed gas bearings

    Directory of Open Access Journals (Sweden)

    Kyaw Sett Myo

    2015-06-01

    Full Text Available Sealed hard drives filled with a helium gas mixture have recently been proposed in place of current hard drives, to achieve higher reliability and smaller position error. It is therefore important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo method. Based on direct simulation Monte Carlo simulations, the physical properties of these gas mixtures, such as mean free path and dynamic viscosity, are obtained and compared with those predicted by theoretical models; the two are found to be comparable. Using these gas mixture properties, the bearing pressure distributions are calculated under different fractions of helium with conventional molecular gas lubrication models. The outcomes reveal that the molecular gas lubrication results agree relatively well with those of direct simulation Monte Carlo simulations, especially for the pure air, helium or argon cases. For gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from direct simulation Monte Carlo simulation.
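
The theoretical mean free paths that the DSMC results are compared against follow from hard-sphere kinetic theory; the molecular diameters below are rough textbook values, not the article's:

```python
import math

def mean_free_path(diameter_m, temp_k=300.0, pressure_pa=101_325.0):
    """Hard-sphere kinetic-theory mean free path,
    lambda = k_B * T / (sqrt(2) * pi * d**2 * p).
    The molecular diameters used below are rough textbook values."""
    k_b = 1.380649e-23  # Boltzmann constant [J/K]
    return k_b * temp_k / (math.sqrt(2.0) * math.pi * diameter_m ** 2 * pressure_pa)

lam_air = mean_free_path(3.7e-10)  # effective diameter of an "air" molecule [m]
lam_he = mean_free_path(2.2e-10)   # helium [m]
# helium's smaller diameter gives it a mean free path roughly 2.8x that of
# air, which is why rarefaction effects differ in helium-filled drives
```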

  18. Monte Carlo: in the beginning and some great expectations

    International Nuclear Information System (INIS)

    Metropolis, N.

    1985-01-01

    The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences

  19. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions. Moreover, these processes can match a broader range of second-order properties. Examples illustrate the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes

  20. Reliability analysis of PWR thermohydraulic design by the Monte Carlo method

    International Nuclear Information System (INIS)

    Silva Junior, H.C. da; Berthoud, J.S.; Carajilescov, P.

    1977-01-01

    The operating power level of a PWR is limited by the occurrence of DNB. Without affecting the safety and performance of the reactor, it is possible to admit failure of a certain number of core channels. The thermohydraulic design, however, is affected by a great number of uncertainties of deterministic or statistical nature. In the present work, the Monte Carlo method is applied to yield the probability that the number F of channels undergoing boiling crisis will not exceed a previously given number F*. This probability is obtained as a function of the reactor power level. (Author) [pt
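
The core Monte Carlo idea can be sketched with an assumed, illustrative per-channel failure probability (in the paper this arises from sampling the uncertain thermohydraulic parameters):

```python
import random

def prob_failures_exceed(p_dnb, n_channels, f_star, trials=5_000, rng=random.Random(3)):
    """Sketch of the paper's idea with an assumed, illustrative per-channel
    probability of boiling crisis (in the real analysis it follows from the
    sampled thermohydraulic uncertainties): each trial counts the failed
    channels, and the fraction of trials with more than f_star failures
    estimates the complement of the reported probability."""
    exceed = 0
    for _ in range(trials):
        failures = sum(1 for _ in range(n_channels) if rng.random() < p_dnb)
        if failures > f_star:
            exceed += 1
    return exceed / trials

risk = prob_failures_exceed(p_dnb=0.005, n_channels=200, f_star=3)
```

Repeating this for p_dnb values corresponding to different power levels gives the probability-versus-power curve the abstract describes.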

  1. A Monte Carlo burnup code linking MCNP and REBUS

    International Nuclear Information System (INIS)

    Hanan, N.A.; Olson, A.P.; Pond, R.B.; Matos, J.E.

    1998-01-01

    The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult analyses. The linked code will permit the use of the full capabilities of REBUS, which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented. (author)

  2. A Monte Carlo burnup code linking MCNP and REBUS

    International Nuclear Information System (INIS)

    Hanan, N. A.

    1998-01-01

    The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult burnup analyses. The linked code will permit the use of the full capabilities of REBUS which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented

  3. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. 
Finally, we develop a technique for estimating radiation momentum deposition during the
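
The cost problem motivating DDMC, that analog histories in diffusive media consist of many small steps, can be illustrated with a toy 1D scattering rod (this sketch shows only the motivation, not the DDMC discretization itself):

```python
import math, random

def mean_flights(sigma_s, slab_len, n_hist=5_000, rng=random.Random(4)):
    """Toy 1D illustration of the cost problem DDMC addresses: in a purely
    scattering rod, an analog Monte Carlo particle makes many short flights
    before escaping, and the count grows with the scattering cross section."""
    total = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                 # enter at the left face, moving right
        while 0.0 <= x <= slab_len:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_s)  # next flight
            total += 1
            mu = 1.0 if rng.random() < 0.5 else -1.0  # isotropic (here +/-1) scatter
    return total / n_hist

cheap = mean_flights(sigma_s=10.0, slab_len=1.0)
costly = mean_flights(sigma_s=100.0, slab_len=1.0)
# the optically thicker rod needs far more flights per history; DDMC
# replaces these many short steps with a few discrete diffusion steps
```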

  4. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes the vectorization techniques for Monte Carlo codes in Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine, which contains heavily nested IF tests, runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that supercomputers can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)

  5. Review of quantum Monte Carlo methods and results for Coulombic systems

    International Nuclear Information System (INIS)

    Ceperley, D.

    1983-01-01

    The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen

  6. Transport appraisal and Monte Carlo simulation by use of the CBA-DK model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2011-01-01

    This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based both on a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge; these two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty (epistemic uncertainty). After a short introduction to the deterministic calculation, resulting in some evaluation criteria, a more comprehensive evaluation of the stochastic calculation is made, especially the risk analysis part of CBA-DK, with considerations about which probability distributions should be used

  7. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided from using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models

  8. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  9. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
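
The outline's opening example, estimating π, takes only a few lines; a minimal sketch:

```python
import random

def estimate_pi(n=100_000, rng=random.Random(0)):
    """The classic introductory example: the fraction of uniform random
    points in the unit square that land inside the quarter circle
    estimates pi/4."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

pi_hat = estimate_pi()
```

The estimate's error shrinks like 1/√n, which is exactly what the Law of Large Numbers and Central Limit Theorem items in the outline explain.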

  10. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    Science.gov (United States)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.

  11. Monte Carlos of the new generation: status and progress

    International Nuclear Information System (INIS)

    Frixione, Stefano

    2005-01-01

    Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron

  12. Quantum Monte Carlo for vibrating molecules

    International Nuclear Information System (INIS)

    Brown, W.R.; Lawrence Berkeley National Lab., CA

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. To stabilize the statistical error estimates for C3, the Monte Carlo data were collected into blocks. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies

  13. Comparison of first order analysis and Monte Carlo methods in evaluating groundwater model uncertainty: a case study from an iron ore mine in the Pilbara Region of Western Australia

    Science.gov (United States)

    Firmani, G.; Matta, J.

    2012-04-01

    The expansion of mining in the Pilbara region of Western Australia is resulting in the need to develop better water strategies to make below-water-table resources accessible, manage surplus water and deal with water demands for processing ore and construction. In all these instances, understanding the local and regional hydrogeology is fundamental to allow sustainable mining while minimising the impacts to the environment. An understanding of the uncertainties of the hydrogeology is necessary to quantify the risks and make objective decisions rather than relying on subjective judgements. The aim of this paper is to review some of the methods proposed in the published literature and find approaches that can be practically implemented to estimate model uncertainties. In particular, this paper adopts two general probabilistic approaches that address parametric uncertainty estimation and its propagation in predictive scenarios: first order analysis and Monte Carlo simulation. An example application of the two techniques is also presented for the dewatering strategy of a large below-water-table open cut iron ore mine in the Pilbara region of Western Australia. This study demonstrates the weakness of the deterministic approach, as the coefficients of variation of some model parameters were greater than 1.0, and suggests a review of the model calibration method and conceptualisation. The uncertainty propagation into predictive scenarios was calculated treating parameters with a coefficient of variation higher than 0.25 as deterministic, owing to the computational difficulty of achieving an accurate result with the Monte Carlo method. The conclusion of this case study was that first order analysis appears to be a successful and simple tool when the coefficients of variation of calibrated parameters are less than 0.25.
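
The two approaches can be contrasted on a toy model; the inverse-conductivity prediction and all numbers below are illustrative, not the mine's groundwater model:

```python
import math, random, statistics

def compare_uncertainty(mean_k=1e-4, cv=0.2, n=50_000, rng=random.Random(6)):
    """Toy comparison of the paper's two approaches on an illustrative
    model (not the mine's groundwater model): a prediction h(k) = c / k
    with an uncertain, lognormal parameter k. First-order analysis
    propagates the variance through the derivative; Monte Carlo samples
    k directly and takes the spread of the predictions."""
    c = 1.0e-3
    sigma_k = cv * mean_k
    # first-order (delta-method) estimate: std(h) ~ |dh/dk| * std(k)
    first_order = (c / mean_k ** 2) * sigma_k
    # Monte Carlo propagation with a lognormal k matching the mean and cv
    s2 = math.log(1.0 + cv ** 2)
    mu = math.log(mean_k) - 0.5 * s2
    draws = (c / rng.lognormvariate(mu, math.sqrt(s2)) for _ in range(n))
    monte_carlo = statistics.stdev(draws)
    return first_order, monte_carlo

fo, mc = compare_uncertainty()
# at cv = 0.2 (< 0.25) the two agree within a few percent, echoing the
# paper's conclusion that first-order analysis suffices for small cv
```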

  14. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices over the two separate periods 2002-2005 and 2006-2009.
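
The Hill estimator under study can be sketched and checked on a sample with a known tail exponent:

```python
import math, random

def hill_estimator(data, k):
    """Standard Hill estimator of the tail exponent alpha, computed from
    the k largest observations (the estimator whose finite-sample bias the
    paper studies by Monte Carlo)."""
    order = sorted(data, reverse=True)[: k + 1]
    logs = [math.log(x) for x in order]
    return k / sum(logs[i] - logs[k] for i in range(k))

# sanity check on a Pareto sample with known tail exponent alpha = 2
rng = random.Random(5)
sample = [rng.paretovariate(2.0) for _ in range(100_000)]
alpha_hat = hill_estimator(sample, k=2_000)
```

On exact Pareto tails the estimator is well behaved; the paper's point is that on small samples from α-stable laws the choice of k and the finite-sample bias become serious.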

  15. A new method to assess the statistical convergence of monte carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.

    1991-01-01

    Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
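
The 1/√N error-reduction guideline mentioned above can be demonstrated with toy history scores:

```python
import math, random, statistics

def relative_error(scores):
    """Estimated relative error R = s / (xbar * sqrt(N)) of a tally mean,
    the convergence figure of merit discussed above: a well-behaved score
    distribution shows R shrinking like 1/sqrt(N)."""
    n = len(scores)
    return statistics.stdev(scores) / (statistics.fmean(scores) * math.sqrt(n))

# toy history scores (exponential, i.e. well-behaved); a heavily skewed
# score PDF of the kind the summary studies would make these estimates
# unreliable at small N
rng = random.Random(8)
scores = [rng.expovariate(1.0) for _ in range(40_000)]
r_10k = relative_error(scores[:10_000])
r_40k = relative_error(scores)
# quadrupling the number of histories roughly halves the relative error
```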

  16. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron history one by one, the computing time becomes very long when enough neutrons are used to reach high precision. Methods that reduce the computing time are therefore required. Monte Carlo is well suited to parallel calculation, since each neutron history is simulated independently. The parallelization of Monte Carlo codes, however, has traditionally relied on multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to program the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and its parallel performance is assessed

  17. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has long been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and doses using random numbers. Recently, owing to faster computers, it has become possible to treat a patient more precisely. However, longer simulation times are still needed to reduce the statistical uncertainty and improve accuracy. When particles are generated from the cobalt source in a simulation, many of them are cut off, so an accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual source on all 201 channels and compared measurements with simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than those with the original source model, and there was no statistically significant difference in the simulated results.

  18. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has long been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and doses using random numbers. Recently, owing to faster computers, it has become possible to treat a patient more precisely. However, longer simulation times are still needed to reduce the statistical uncertainty and improve accuracy. When particles are generated from the cobalt source in a simulation, many of them are cut off, so an accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual source on all 201 channels and compared measurements with simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than those with the original source model, and there was no statistically significant difference in the simulated results

  19. Uncertainty propagation using the Monte Carlo method in the measurement of airborne particle size distribution with a scanning mobility particle sizer

    Science.gov (United States)

    Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.

    2018-05-01

    A scanning mobility particle sizer (SMPS) is a high resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates in scanning mode. The sources of uncertainty are described and then modeled either through experiments or knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, it appears that the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets of monosize polystyrene latex spheres (PSL) with nominal diameters of 100 nm, 200 nm and 450 nm. With the new approach, the median diameters and associated standard uncertainties of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and their associated standard uncertainties, are also computed. These results are then compared with the results obtained by the SMPS embedded software.
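
    The propagation step described above can be sketched generically in the spirit of the GUM Supplements: draw every uncertain input from its PDF, push each draw through the measurement model, and read the estimate, standard uncertainty and coverage interval off the output sample. The model and numbers below are an illustrative stand-in, not the full SMPS inversion.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 200_000   # Monte Carlo trials

# Hypothetical reduced measurement model: diameter d = c / (Z * mu),
# with two uncertain inputs drawn from their assigned PDFs.
Z = rng.normal(1.0, 0.02, M)    # mobility-like input, 2% standard uncertainty
mu = rng.normal(2.0, 0.05, M)   # viscosity-like input
c = 200.0                       # fixed model constant
d = c / (Z * mu)

d_hat = np.median(d)                     # reported estimate
u_d = d.std(ddof=1)                      # standard uncertainty from the MC spread
lo, hi = np.percentile(d, [2.5, 97.5])   # 95% coverage interval
```

    Competing physical models can be handled the same way, by drawing a model index as one more input and letting the output spread absorb the model disagreement.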

  20. Speed-up of ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo simulations by using an auxiliary potential energy surface

    International Nuclear Information System (INIS)

    Nakayama, Akira; Taketsugu, Tetsuya; Shiga, Motoyuki

    2009-01-01

    Efficiency of the ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo methods is enhanced by employing an auxiliary potential energy surface that is used to update the system configuration via a molecular dynamics scheme. As a simple illustration of the method, a dual-level approach is introduced in which potential energy gradients are evaluated by computationally less expensive ab initio electronic structure methods. (author)

  1. The specific bias in dynamic Monte Carlo simulations of nuclear reactors

    International Nuclear Information System (INIS)

    Yamamoto, T.; Endo, H.; Ishizu, T.; Tatewaki, I.

    2013-01-01

    During the development of a Monte-Carlo-based dynamic code system, we encountered two major Monte-Carlo-specific problems. One is a breakdown due to 'false super-criticality': statistical error can produce an accidentally large eigenvalue even though the reactor is not actually critical. The other problem, the main topic of this paper, is that the statistical error in the power level computed from Monte Carlo reactivity estimates is not symmetric about its mean but always positively biased. The bias therefore accumulates as the calculation proceeds and results in an over-estimation of the final power level. It should be noted that the bias is not eliminated by refining the time step as long as the variance is not zero. A preliminary investigation of this matter using the one-group-precursor point kinetics equations concluded that the bias in the power level is approximately proportional to the product of the variance of the Monte Carlo calculation and the elapsed time. This conclusion was verified with numerical experiments. The result is important for quantifying the required precision of Monte-Carlo-based reactivity calculations. (authors)
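
    The positive bias described above follows from Jensen's inequality: the power is an exponential of accumulated reactivity, so zero-mean noise in each reactivity estimate still inflates the mean power. The toy model below (illustrative numbers, not the authors' point-kinetics system) reproduces the bias and its predicted growth with variance times elapsed time.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, n_steps = 0.1, 200     # time step and number of steps (elapsed time t = 20)
sigma = 0.5                # std of the noise in each Monte Carlo reactivity estimate
n_runs = 20_000            # independent dynamic calculations

# True reactivity is zero (exactly critical); each step uses a noisy estimate.
noise = rng.normal(0.0, sigma, size=(n_runs, n_steps))
log_power = np.cumsum(noise * dt, axis=1)            # log P(t) = sum_k rho_k * dt
mean_final_power = np.exp(log_power[:, -1]).mean()   # would be 1.0 without noise

# Jensen's inequality: E[P(t)] = exp(accumulated variance / 2) > 1
predicted_bias = np.exp(n_steps * (sigma * dt) ** 2 / 2)
```

    Note that refining dt while keeping the per-estimate variance fixed does not remove the bias, in line with the record's observation.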

  2. Monte Carlo method to characterize radioactive waste drums

    International Nuclear Information System (INIS)

    Lima, Josenilson B.; Dellamano, Jose C.; Potiens Junior, Ademar J.

    2013-01-01

    Non-destructive methods for characterizing radioactive waste drums by gamma spectrometry have been developed in the Waste Management Department (GRR) at the Nuclear and Energy Research Institute (IPEN). This study was conducted as part of the radioactive waste characterization program, in order to meet the specifications and acceptance criteria for final disposal imposed by regulatory control. One of the main difficulties in the detector calibration process is obtaining the counting efficiencies, a problem that can be solved with mathematical techniques. The aim of this work was to develop a methodology to characterize drums using gamma spectrometry and the Monte Carlo method. Monte Carlo is a widely used mathematical technique that simulates the radiation transport in the medium, thus yielding the efficiency calibration of the detector. The equipment used in this work is a heavily shielded hyperpure germanium (HPGe) detector coupled with an electronic setup composed of a high-voltage source, an amplifier and a multiport multichannel analyzer, together with the MCNP software for Monte Carlo simulation. The development of this methodology will allow the characterization of solid radioactive wastes packed in drums and stored at GRR. (author)
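
    The efficiency idea can be illustrated with a far simpler stand-in for a full MCNP drum model: a purely geometric Monte Carlo estimate of the fraction of photons emitted isotropically from a point source that reach a circular detector face, checked against the exact solid-angle formula. Dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

R, D = 3.0, 30.0           # detector radius and source-detector distance (cm), illustrative
N = 2_000_000
cos_t = rng.uniform(-1.0, 1.0, N)        # isotropic emission: uniform polar cosine
cos_safe = np.maximum(cos_t, 1e-12)      # avoid division by zero for grazing directions
# a photon moving toward the detector (cos_t > 0) hits the face if its
# radial offset D * tan(theta) stays within R
hits = (cos_t > 0) & (D * np.sqrt(1.0 - cos_t ** 2) / cos_safe <= R)
eff_mc = hits.mean()

# exact geometric efficiency: half the subtended solid angle fraction
eff_exact = 0.5 * (1.0 - D / np.sqrt(D ** 2 + R ** 2))
```

    A real calibration adds attenuation in the waste matrix and detector response, which is exactly what the MCNP transport model supplies.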

  3. Improved diffusion coefficients generated from Monte Carlo codes

    International Nuclear Information System (INIS)

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-01-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters, including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the choice of homogenization: whether to flux-weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2 norm error of 3.6%. This error is reduced significantly, to 0.27%, when fine-group diffusion coefficients are flux-weighted and a correction is applied to the diffusion approximation. The noticeable tilting in reconstructed fluxes and pin powers was thereby reduced. (authors)
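
    The homogenization choice above can be made concrete with a two-group numerical example (made-up numbers): collapsing the transport cross section and then forming D = 1/(3Σtr) does not give the same few-group diffusion coefficient as flux-weighting the fine-group diffusion coefficients directly.

```python
import numpy as np

flux = np.array([0.7, 0.3])     # fine-group fluxes (normalized weights)
sig_tr = np.array([0.2, 1.0])   # fine-group transport cross sections (1/cm)

# Option 1: flux-weight sigma_tr, then form D = 1 / (3 * sigma_tr_collapsed)
sig_tr_c = (flux * sig_tr).sum() / flux.sum()
D_from_sigma = 1.0 / (3.0 * sig_tr_c)

# Option 2: flux-weight the fine-group diffusion coefficients D_g = 1 / (3 * sigma_tr,g)
D_fine = 1.0 / (3.0 * sig_tr)
D_from_D = (flux * D_fine).sum() / flux.sum()
```

    Because 1/x is convex, the two collapses disagree whenever the fine-group cross sections vary; which one better reproduces reference pin powers is the question the record answers empirically.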

  4. Monte Carlo calculations of electron transport on microcomputers

    International Nuclear Information System (INIS)

    Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.

    1990-01-01

    In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC in order to reduce the cost of Monte Carlo calculations on microcomputers. The Eltran2 program was then transferred to an IBM-compatible computer; Turbo BASIC and Microsoft Quick BASIC have been used on the IBM-compatible Tandy 4000SX. The paper reports the running speeds of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh SE or Macintosh Plus; higher values indicate proportionally faster running times. Since Eltran2 is a one-dimensional program, it calculates the energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program, called Eltran3, to compute more accurately the case of a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice that of Eltran2 for a similar case

  5. Modelling of the X , Y , Z positioning errors and uncertainty evaluation for the LNE’s mAFM using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ceria, Paul; Ducourtieux, Sebastien; Boukellal, Younes; Feltin, Nicolas; Allard, Alexandre; Fischer, Nicolas

    2017-01-01

    In order to evaluate the uncertainty budget of the LNE’s mAFM, a reference instrument dedicated to the calibration of nanoscale dimensional standards, a numerical model has been developed to evaluate the measurement uncertainty of the metrology loop involved in the XYZ positioning of the tip relative to the sample. The objective of this model is to overcome difficulties experienced when trying to evaluate some uncertainty components which cannot be experimentally determined and more specifically, the one linked to the geometry of the metrology loop. The model is based on object-oriented programming and developed under Matlab. It integrates one hundred parameters that allow the control of the geometry of the metrology loop without using analytical formulae. The created objects, mainly the reference and the mobile prism and their mirrors, the interferometers and their laser beams, can be moved and deformed freely to take into account several error sources. The Monte Carlo method is then used to determine the positioning uncertainty of the instrument by randomly drawing the parameters according to their associated tolerances and their probability density functions (PDFs). The whole process follows Supplement 2 to ‘The Guide to the Expression of the Uncertainty in Measurement’ (GUM). Some advanced statistical tools like Morris design and Sobol indices are also used to provide a sensitivity analysis by identifying the most influential parameters and quantifying their contribution to the XYZ positioning uncertainty. The approach validated in the paper shows that the actual positioning uncertainty is about 6 nm. As the final objective is to reach 1 nm, we engage in a discussion to estimate the most effective way to reduce the uncertainty. (paper)
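
    The core loop of such a virtual experiment is short: draw every geometric parameter from the PDF assigned to its tolerance, evaluate the metrology-loop model, and take the spread of the simulated position as the uncertainty. The reduced model below keeps only three illustrative error sources (the LNE model has about one hundred parameters), so the names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M = 100_000

d_true = 1e-6                       # 1 um commanded X displacement (m)
theta = rng.normal(0.0, 1e-4, M)    # mirror tilt drawn from its tolerance PDF (rad)
phi = rng.normal(0.0, 2e-4, M)      # laser-beam misalignment (rad)
noise = rng.normal(0.0, 0.3e-9, M)  # interferometer noise (m)

# cosine errors shorten the measured displacement; noise adds random spread
x_meas = d_true * np.cos(theta) * np.cos(phi) + noise
u_x = x_meas.std(ddof=1)            # standard positioning uncertainty (m)
```

    On top of such a model, Morris screening and Sobol indices rank which tolerances dominate `u_x`, which is how the record argues where to invest to move from 6 nm toward 1 nm.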

  6. Modelling of the X,Y,Z positioning errors and uncertainty evaluation for the LNE’s mAFM using the Monte Carlo method

    Science.gov (United States)

    Ceria, Paul; Ducourtieux, Sebastien; Boukellal, Younes; Allard, Alexandre; Fischer, Nicolas; Feltin, Nicolas

    2017-03-01

    In order to evaluate the uncertainty budget of the LNE’s mAFM, a reference instrument dedicated to the calibration of nanoscale dimensional standards, a numerical model has been developed to evaluate the measurement uncertainty of the metrology loop involved in the XYZ positioning of the tip relative to the sample. The objective of this model is to overcome difficulties experienced when trying to evaluate some uncertainty components which cannot be experimentally determined and more specifically, the one linked to the geometry of the metrology loop. The model is based on object-oriented programming and developed under Matlab. It integrates one hundred parameters that allow the control of the geometry of the metrology loop without using analytical formulae. The created objects, mainly the reference and the mobile prism and their mirrors, the interferometers and their laser beams, can be moved and deformed freely to take into account several error sources. The Monte Carlo method is then used to determine the positioning uncertainty of the instrument by randomly drawing the parameters according to their associated tolerances and their probability density functions (PDFs). The whole process follows Supplement 2 to ‘The Guide to the Expression of the Uncertainty in Measurement’ (GUM). Some advanced statistical tools like Morris design and Sobol indices are also used to provide a sensitivity analysis by identifying the most influential parameters and quantifying their contribution to the XYZ positioning uncertainty. The approach validated in the paper shows that the actual positioning uncertainty is about 6 nm. As the final objective is to reach 1 nm, we engage in a discussion to estimate the most effective way to reduce the uncertainty.

  7. The impact of spatial variability of hydrogeological parameters - Monte Carlo calculations using SITE-94 data

    International Nuclear Information System (INIS)

    Pereira, A.; Broed, R.

    2002-03-01

    In this report, several issues related to the probabilistic methodology for performance assessments of repositories for high-level nuclear waste and spent fuel are addressed. Random Monte Carlo sampling is used to make uncertainty analyses for the migration of four nuclides and a decay chain in the geosphere. The nuclides studied are cesium, chlorine, iodine and carbon, and radium from a decay chain. A procedure is developed to take advantage of the information contained in the hydrogeological data obtained from a three-dimensional discrete fracture model as the input data for one-dimensional transport models for use in Monte Carlo calculations. This procedure retains the original correlations between parameters representing different physical entities, namely, between the groundwater flow rate and the hydrodynamic dispersion in fractured rock, in contrast with the approach commonly used that assumes that all parameters supplied for the Monte Carlo calculations are independent of each other. A small program is developed to allow the above-mentioned procedure to be used if the available three-dimensional data are scarce for Monte Carlo calculations. The program allows random sampling of data from the 3-D data distribution in the hydrogeological calculations. The impact of correlations between the groundwater flow and the hydrodynamic dispersion on the uncertainty associated with the output distribution of the radionuclides' peak releases is studied. It is shown that for the SITE-94 data, this impact can be disregarded. A global sensitivity analysis is also performed on the peak releases of the radionuclides studied. The results of these sensitivity analyses, using several known statistical methods, show discrepancies that are attributed to the limitations of these methods. The reason for the difficulties is to be found in the complexity of the models needed for the predictions of radionuclide migration, models that deliver results covering variation of several
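
    The correlation-preserving procedure described above amounts to resampling whole realizations of the 3-D output rather than each parameter on its own. The sketch below uses a synthetic stand-in for the hydrogeological data (the correlation structure is invented for illustration) to show the difference.

```python
import numpy as np

rng = np.random.default_rng(12)

# Stand-in for 500 realizations of 3-D model output in which the
# hydrodynamic dispersion D is physically correlated with the flow rate q.
q = rng.lognormal(0.0, 1.0, 500)
D = 0.5 * q * np.exp(rng.normal(0.0, 0.2, 500))

# Row-wise (joint) resampling keeps the q-D correlation for the 1-D transport runs...
idx = rng.integers(0, 500, 10_000)
corr_joint = np.corrcoef(q[idx], D[idx])[0, 1]

# ...whereas sampling each parameter independently destroys it.
corr_indep = np.corrcoef(q[rng.integers(0, 500, 10_000)],
                         D[rng.integers(0, 500, 10_000)])[0, 1]
```

    For the SITE-94 data the report finds the impact of this correlation on the peak-release uncertainty to be negligible, but that is a finding to be checked, not assumed.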

  8. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo

  9. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions for radionuclides has been performed using current dose assessment models and knowledge of the uncertainties in model parameter values. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental iodine-131 transported via the pasture-cow-milk-child pathway. The results indicate that when site-specific information is unavailable, the imprecision inherent in the predictions for this pathway is potentially large. (3 graphs, 25 references, 5 tables)
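
    Pathway models of this kind are typically products of uncertain transfer factors, so a Monte Carlo sketch makes the "potentially large" imprecision tangible: with lognormal factors (a common choice for environmental transfer parameters; all medians and geometric standard deviations below are invented), the output is approximately lognormal with a wide spread.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 100_000

def lognorm(median, gsd, size):
    # lognormal parameterized by median and geometric standard deviation
    return rng.lognormal(np.log(median), np.log(gsd), size)

# Hypothetical multiplicative pathway model:
# dose = deposition * interception * feed-to-milk transfer * (intake * dose factor)
dose = (lognorm(1.0, 1.5, M)     # deposition on pasture
        * lognorm(0.5, 2.0, M)   # interception/weathering factor
        * lognorm(0.01, 2.5, M)  # feed-to-milk transfer coefficient
        * lognorm(0.7, 1.3, M))  # milk intake times dose conversion factor

gsd_out = np.exp(np.log(dose).std(ddof=1))              # geometric std of the prediction
ratio_95_median = np.percentile(dose, 97.5) / np.median(dose)
```

    A 97.5th percentile an order of magnitude above the median is precisely the situation where site-specific parameter data pay off.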

  10. Monte Carlo studies of ZEPLIN III

    CERN Document Server

    Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J

    2002-01-01

    A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.

  11. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-06

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
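
    The first-order differences that MIMC generalizes are easiest to see in a plain MLMC sketch: couple a fine and a coarse Euler discretization on the same Brownian increments and sum the level-wise correction means. The example (illustrative numbers) estimates E[S_T] for a geometric Brownian motion, where the exact answer exp(rT) is known.

```python
import numpy as np

rng = np.random.default_rng(6)
S0, r, sig, T = 1.0, 0.05, 0.2, 1.0    # GBM parameters; E[S_T] = S0 * exp(r*T)

def level_samples(level, n_samples):
    # Coupled fine/coarse Euler paths for one MLMC correction term.
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for k in range(nf):
        Sf = Sf * (1.0 + r * dt + sig * dW[:, k])
    if level == 0:
        return Sf
    Sc = np.full(n_samples, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]    # coarse path reuses the fine increments
    for k in range(nf // 2):
        Sc = Sc * (1.0 + r * 2 * dt + sig * dWc[:, k])
    return Sf - Sc                      # first-order (MLMC) difference

# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
samples_per_level = (200_000, 50_000, 12_000, 3_000, 800)
est = sum(level_samples(l, n).mean() for l, n in enumerate(samples_per_level))
```

    MIMC replaces the single level index with a multi-index and the first-order differences with high-order mixed differences over a Total Degree index set; the telescoping structure is the same.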

  12. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul

    2016-01-01

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).

  13. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
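
    The interplay of growing history counts and a shrinking relaxation factor can be sketched with a toy stand-in for the coupled iteration: a "Monte Carlo solve" that returns the true power shape plus statistical noise, relaxed with alpha_i = 1/i so the iterates form a running average (this stand-in illustrates stochastic approximation generally, not the authors' specific schedule).

```python
import numpy as np

rng = np.random.default_rng(7)

true_power = np.array([0.8, 1.1, 1.3, 1.1, 0.7])
true_power = true_power / true_power.sum()      # normalized axial power shape

def mc_power(n_histories):
    # multinomial sampling mimics tally noise of a Monte Carlo power shape
    return rng.multinomial(n_histories, true_power) / n_histories

n = 1_000
relaxed = mc_power(n)
for i in range(2, 11):          # ten iterations in total
    n = int(n * 1.5)            # growing number of histories per iteration
    alpha = 1.0 / i             # decreasing relaxation factor
    relaxed = (1.0 - alpha) * relaxed + alpha * mc_power(n)

err = np.abs(relaxed - true_power).max()
```

    With alpha_i = 1/i the relaxed power equally weights all past iterates, so early cheap iterations are not wasted while later, better-converged ones still dominate the thermal-hydraulics feedback.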

  14. Optimized iteration in coupled Monte-Carlo - Thermal-hydraulics calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Dufek, J.

    2013-01-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal. (authors)

  15. Profit Forecast Model Using Monte Carlo Simulation in Excel

    Directory of Open Access Journals (Sweden)

    Petru BALOGH

    2014-01-01

    Full Text Available Profit forecasting is very important for any company. The purpose of this study is to provide a method to estimate the profit and the probability of obtaining an expected profit. Monte Carlo methods are stochastic techniques, meaning they are based on the use of random numbers and probability statistics to investigate problems. Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities with which they will occur for any choice of action. Our example of Monte Carlo simulation in Excel is a simplified profit forecast model, and each step of the analysis is described in detail. The input data for the case presented (the number of leads per month, the percentage of leads that result in sales, the cost of a single lead, the profit per sale and the fixed cost) allow the profit and the associated probabilities of achieving it to be estimated.
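
    The same model translates directly from Excel to a few lines of code. The input distributions and figures below are invented for illustration, mirroring only the structure of the article's inputs (leads, conversion rate, lead cost, profit per sale, fixed cost).

```python
import numpy as np

rng = np.random.default_rng(8)
M = 100_000   # simulated months

leads = rng.poisson(1_200, M)                 # leads per month
conversion = rng.beta(20, 380, M)             # ~5% of leads result in sales
lead_cost = rng.normal(4.0, 0.5, M)           # cost of a single lead
profit_per_sale = rng.normal(150.0, 20.0, M)  # profit per sale
fixed_cost = 3_000.0

profit = leads * conversion * profit_per_sale - leads * lead_cost - fixed_cost

expected = profit.mean()                      # expected monthly profit
p_target = (profit >= 1_000.0).mean()         # probability of reaching a target profit
```

    The full histogram of `profit` is the Monte Carlo analogue of the range of outcomes the article reads off its Excel simulation.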

  16. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  17. Adaptive anisotropic diffusion filtering of Monte Carlo dose distributions

    International Nuclear Information System (INIS)

    Miao Binhe; Jeraj, Robert; Bao Shanglian; Mackie, Thomas R

    2003-01-01

    The Monte Carlo method is the most accurate method for radiotherapy dose calculations, if used correctly. However, any Monte Carlo dose calculation is burdened with statistical noise. In this paper, denoising of Monte Carlo dose distributions with a three-dimensional adaptive anisotropic diffusion method was investigated. The standard anisotropic diffusion method was extended by changing the filtering parameters adaptively according to the local statistical noise. Smoothing of dose distributions with different noise levels is shown for an inhomogeneous phantom, a conventional treatment case and an IMRT treatment case. The resultant dose distributions were analysed using several evaluation criteria. It is shown that the adaptive anisotropic diffusion method can reduce statistical noise significantly (by a factor of two to five, corresponding to a reduction of the simulation time by a factor of up to 20), while preserving important gradients of the dose distribution. The choice of the free parameters of the method was found to be fairly robust
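
    The edge-preserving behavior is the essence of anisotropic diffusion. The 2-D sketch below uses the standard Perona-Malik scheme with a noise-scaled edge threshold (a simple stand-in for the paper's locally adaptive parameter choice, not its actual 3-D algorithm) on a toy "dose distribution" with a sharp field edge.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=None, lam=0.2):
    # Perona-Malik diffusion; kappa defaults to a crude noise-scaled edge
    # threshold estimated from the median absolute vertical gradient.
    u = img.astype(float).copy()
    if kappa is None:
        kappa = 3.0 * np.median(np.abs(np.diff(u, axis=0))) + 1e-12
    for _ in range(n_iter):
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        # small (noise-like) differences diffuse; large (edge-like) ones do not
        u = u + lam * sum(d * np.exp(-(d / kappa) ** 2) for d in (dN, dS, dE, dW))
    return u

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 64)
# toy dose: gentle ramp with a sharp field edge at x = 0.5, plus noise
clean = np.outer(np.ones(64), np.where(x < 0.5, 1.0, 0.2) * (1 + 0.2 * x))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = anisotropic_diffusion(noisy)

rms_before = np.sqrt(((noisy - clean) ** 2).mean())
rms_after = np.sqrt(((denoised - clean) ** 2).mean())
```

    Noise well below the threshold is smoothed away while the field edge, far above it, is left almost untouched, which is the property the record requires of a dose denoiser.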

  18. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
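
    A minimal 1-D sketch of a functional expansion tally: each sampled "fission site" x in [-1, 1] scores the Legendre polynomials P_n(x), and the averaged scores, scaled by (2n+1)/2, estimate the expansion coefficients of the source density. The toy density f(x) = (3/4)(1 - x^2) has exact coefficients a0 = 1/2, a2 = -1/2 and zero otherwise, so the estimates can be checked.

```python
import numpy as np

rng = np.random.default_rng(10)

N = 200_000
# sample the toy source density f(x) = (3/4)(1 - x^2) on [-1, 1] by rejection
x = np.empty(0)
while x.size < N:
    cand = rng.uniform(-1.0, 1.0, N)
    keep = rng.random(N) < (1.0 - cand ** 2)     # acceptance ratio proportional to f
    x = np.concatenate([x, cand[keep]])[:N]

max_order = 4
# FET estimate: a_n = (2n+1)/2 * E_f[P_n(X)], with E_f[.] replaced by a sample mean
coeffs = [(2 * n + 1) / 2 * np.polynomial.legendre.Legendre.basis(n)(x).mean()
          for n in range(max_order + 1)]
```

    The n = 0 coefficient is exactly the normalization (the "flat mode" histogram tally mentioned above); higher modes carry the spatial shape from which new fission sites can be sampled.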

  19. MONK - a general purpose Monte Carlo neutronics program

    International Nuclear Information System (INIS)

    Sherriffs, V.S.W.

    1978-01-01

    MONK is a Monte Carlo neutronics code written principally for criticality calculations relevant to the transport, storage, and processing of fissile material. The code exploits the ability of the Monte Carlo method to represent complex shapes with very great accuracy. The nuclear data used is derived from the UK Nuclear Data File processed to the required format by a subsidiary program POND. A general description is given of the MONK code together with the subsidiary program SCAN which produces diagrams of the system specified. Details of the data input required by MONK and SCAN are also given. (author)

  20. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations that need many replicates to obtain good statistical results can easily be executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken when generating the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work was achieved thanks to a generic object-oriented toolbox called DistMe, which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for the statuses of random number generators. (authors)
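
    The key precaution named above, giving every parallel replication its own independent pseudo-random stream, can be sketched with NumPy's SeedSequence spawning (a modern stand-in for DistMe's XML bookkeeping of generator statuses), which yields non-overlapping streams by construction.

```python
import numpy as np

n_replications = 8
streams = [np.random.default_rng(s)
           for s in np.random.SeedSequence(42).spawn(n_replications)]

def estimate_pi(rng, n=100_000):
    # each replication estimates pi by rejection sampling with its own stream
    pts = rng.random((n, 2))
    return 4.0 * ((pts ** 2).sum(axis=1) < 1.0).mean()

estimates = [estimate_pi(rng) for rng in streams]
pooled = float(np.mean(estimates))    # Multiple Replications In Parallel: pool the results
```

    Reusing one seed across replications would make the "independent" runs identical and silently wreck the pooled statistics, which is exactly the failure mode the paper's stream management guards against.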

  1. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose-rates on/over the surface of 10 cylindrical radio-models belonging to the Metrology Station of Radio-Geological Survey of CNNC. Methods: The dose-rates on/over the surface of the 10 cylindrical radio-models were modeled using the Monte Carlo code MCNP and, for comparison, measured with a high-pressure gas ionization chamber dose-rate meter. The dose-rate values modeled with the MCNP code were compared with those obtained by the authors in the present experimental measurement, and with those obtained by other workers previously. Some factors causing the discrepancy between the data obtained by the authors using the MCNP code and the data obtained using other methods are discussed in this paper. Results: The dose-rates on/over the surface of the 10 cylindrical radio-models obtained using the MCNP code were in good agreement with those obtained by other workers using the theoretical method: the discrepancy was within ±5% in general, and the maximum discrepancy was less than 10%. Conclusions: Provided that each input needed by the Monte Carlo code is correct, the dose-rates on/over the surface of cylindrical radio-models modeled using the Monte Carlo code are correct, with an uncertainty of 3%

  2. Methodology of Continuous-Energy Adjoint Monte Carlo for Neutron, Photon, and Coupled Neutron-Photon Transport

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2003-01-01

    Adjoint Monte Carlo may be a useful alternative to regular Monte Carlo calculations in cases where a small detector inhibits an efficient Monte Carlo calculation as only very few particle histories will cross the detector. However, in general purpose Monte Carlo codes, normally only the multigroup form of adjoint Monte Carlo is implemented. In this article the general methodology for continuous-energy adjoint Monte Carlo neutron transport is reviewed and extended for photon and coupled neutron-photon transport. In the latter cases the discrete photons generated by annihilation or by neutron capture or inelastic scattering prevent a direct application of the general methodology. Two successive reaction events must be combined in the selection process to accommodate the adjoint analog of a reaction resulting in a photon with a discrete energy. Numerical examples illustrate the application of the theory for some simplified problems

  3. Monte Carlo simulations in skin radiotherapy

    International Nuclear Information System (INIS)

    Sarvari, A.; Jeraj, R.; Kron, T.

    2000-01-01

    The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used in skin radiotherapy. In the applicator, a radioactive source is positioned close to the skin. Without a filter, the resultant dose distribution would be highly non-uniform, whereas high uniformity is usually required. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. A 192Ir high-dose-rate photon source was used. All necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially non-uniform dose distributions became uniform to within a percent when the filter calculated by this procedure was applied. (author)
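
    The iterative flattening step can be illustrated with a toy model. This is an assumed sketch, not the authors' MCNP4B procedure: the inverse-square dose profile and exponential attenuation coefficient `mu` are stand-ins for the transport quantities the paper computes by Monte Carlo.

```python
import numpy as np

# Toy iterative filter design: adjust a thickness profile until the
# transmitted dose is flat. Assumed model: inverse-square dose from a point
# source at height h, attenuated exponentially by the local filter thickness.

mu, h = 0.5, 2.0
x = np.linspace(-3, 3, 61)                   # positions on the skin surface
raw = h**2 / (x**2 + h**2)                   # unfiltered (normalized) dose profile
thickness = np.zeros_like(x)

for _ in range(5):                           # a few iterations suffice
    dose = raw * np.exp(-mu * thickness)
    target = dose.min()                      # flatten down to the lowest dose
    thickness += np.log(dose / target) / mu  # add material where dose is high

dose = raw * np.exp(-mu * thickness)
uniformity = float(dose.max() / dose.min() - 1.0)   # 0 means perfectly flat
```

    In the paper the dose response to a filter change is itself a Monte Carlo calculation rather than a closed-form attenuation law, which is why an efficient iteration scheme matters there.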

  4. PEPSI — a Monte Carlo generator for polarized leptoproduction

    Science.gov (United States)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by the electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in αS. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  5. Modern analysis of ion channeling data by Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nowicki, Lech [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]. E-mail: lech.nowicki@fuw.edu.pl; Turos, Andrzej [Institute of Electronic Materials Technology, Wolczynska 133, 01-919 Warsaw (Poland); Ratajczak, Renata [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Stonert, Anna [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Garrido, Frederico [Centre de Spectrometrie Nucleaire et Spectrometrie de Masse, CNRS-IN2P3-Universite Paris-Sud, 91405 Orsay (France)

    2005-10-15

    Basic scheme of ion channeling spectra Monte Carlo simulation is reformulated in terms of statistical sampling. The McChasy simulation code is described and two examples of the code applications are presented. These are: calculation of projectile flux in uranium dioxide crystal and defect analysis for ion implanted InGaAsP/InP superlattice. Virtues and pitfalls of defect analysis using Monte Carlo simulations are discussed.

  6. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
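
    The core idea, replacing exact-equality checks with a statistical test, can be shown in miniature. This is an assumed illustration of the technique, not the authors' test suite: a z-test asks whether a stochastic estimate deviates from a known reference by more than its quoted uncertainty allows.

```python
import numpy as np
from math import erf, sqrt

# Hypothesis testing of a Monte Carlo result: compare the estimate to a
# reference via a two-sided z-test instead of demanding exact equality.

def z_test_pvalue(estimate, stderr, reference):
    z = (estimate - reference) / stderr
    return 1.0 - erf(abs(z) / sqrt(2.0))     # two-sided p-value

rng = np.random.default_rng(0)
samples = rng.random(200_000) ** 2           # MC estimate of the integral of x^2 on [0,1]
estimate = samples.mean()
stderr = samples.std(ddof=1) / np.sqrt(len(samples))

p_correct = z_test_pvalue(estimate, stderr, 1.0 / 3.0)  # correct reference: large p
p_buggy = z_test_pvalue(estimate, stderr, 0.5)          # simulated bug: tiny p
```

    A test harness would fail the run when the p-value drops below a chosen significance level, which catches both programming bugs and silent numerical problems while tolerating ordinary statistical scatter.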

  7. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  8. Monte Carlo codes use in neutron therapy; Application de codes Monte Carlo en neutrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D. [Hopital Pasteur, 06 - Nice (France); Pignol, J.P. [Hopital du Hasenrain, 68 - Mulhouse (France); Cuendet, P. [CEA Centre d' Etudes de Saclay, 91 - Gif-sur-Yvette (France). Direction des Reacteurs Nucleaires; Fares, G.; Hachem, A. [Faculte des Sciences, 06 - Nice (France); Iborra, N. [Centre Antoine-Lacassagne, 06 - Nice (France)

    1998-04-01

    Monte Carlo calculation codes make it possible to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutron irradiations. (authors)

  9. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  10. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing or importance sampling calculations, of a theory on the application of Monte Carlo for the calculation of functional dependences introduced by Frolov and Chentsov; it is also a generalization which avoids non-convergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de
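
    The "scan the parameter, keep the minimum-variance result" idea can be sketched on a standard rare-event example. The problem (a Gaussian tail probability with an exponentially tilted sampling density) is an assumption chosen for clarity, not the paper's formulation.

```python
import numpy as np

# Parametric importance sampling: estimate P(X > 3) for X ~ N(0,1) by sampling
# from N(theta, 1), scan the tilt parameter theta, and keep the estimate with
# the smallest sample variance.

rng = np.random.default_rng(1)
n = 50_000
thetas = np.linspace(0.0, 5.0, 11)
results = []

for theta in thetas:
    y = rng.normal(theta, 1.0, n)               # sample from the biased density
    w = np.exp(-theta * y + 0.5 * theta**2)     # likelihood ratio N(0,1)/N(theta,1)
    f = w * (y > 3.0)                           # weighted indicator of the event
    results.append((f.var(ddof=1), f.mean(), theta))

var_best, estimate, theta_best = min(results)   # adopt the minimum-variance result
exact = 0.0013498980316300933                   # P(X > 3) from the normal CDF
```

    The variance is minimized for a tilt near the threshold (theta around 3), and the minimum-variance estimator is orders of magnitude more efficient than crude sampling (theta = 0), for which the event is seen only about once per 740 histories.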

  11. Benchmarking time-dependent neutron problems with Monte Carlo codes

    International Nuclear Information System (INIS)

    Couet, B.; Loomis, W.A.

    1990-01-01

    Many nuclear logging tools measure the time dependence of a neutron flux in a geological formation to infer important properties of the formation. The complex geometry of the tool and the borehole within the formation does not permit an exact deterministic modelling of the neutron flux behaviour. While this exact simulation is possible with Monte Carlo methods the computation time does not facilitate quick turnaround of results useful for design and diagnostic purposes. Nonetheless a simple model based on the diffusion-decay equation for the flux of neutrons of a single energy group can be useful in this situation. A combination approach where a Monte Carlo calculation benchmarks a deterministic model in terms of the diffusion constants of the neutrons propagating in the media and their flux depletion rates thus offers the possibility of quick calculation with assurance as to accuracy. We exemplify this approach with the Monte Carlo benchmarking of a logging tool problem, showing standoff and bedding response. (author)

  12. SU-E-J-144: Low Activity Studies of Carbon 11 Activation Via GATE Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Elmekawy, A; Ewell, L [Hampton University, Hampton, VA (United States); Butuceanu, C; Qu, L [Hampton University Proton Therapy Institute, Hampton, VA (United States)

    2015-06-15

    Purpose: To investigate the behavior of a Monte Carlo simulation code at low levels of activity (∼1,000 Bq). Such activity levels are expected from phantoms and patients activated via a proton therapy beam. Methods: Three different ranges of a therapeutic proton radiation beam were examined in a Monte Carlo simulation code: 13.5, 17.0 and 21.0 cm. For each range, the decay of a ¹¹C source of equivalent length, and of additional sources of that length plus or minus one cm, was studied in a benchmark PET simulation for activities of 1000, 2000 and 3000 Bq. The ranges were chosen to coincide with a previous activation study, and the activities were chosen to coincide with the approximate level of isotope creation expected in a phantom or patient irradiated by a therapeutic proton beam. The GATE 7.0 simulation was completed on a cluster node running Scientific Linux 6 'Carbon' (Red Hat). The resulting Monte Carlo data were investigated with the ROOT (CERN) analysis tool. The half-life of ¹¹C was extracted via a histogram fit to the number of simulated PET events vs. time. Results: The average slope of the deviation of the extracted carbon half-life from the expected/nominal value vs. activity was generally positive. This was unexpected, as the deviation should, in principle, decrease with increased activity and lower statistical uncertainty. Conclusion: For activity levels on the order of 1,000 Bq, the behavior of a benchmark PET test was somewhat unexpected. It is important to be aware of the limitations of low-activity PET images and low-activity Monte Carlo simulations. This work was funded in part by the Philips corporation.
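
    The extraction step, fitting a half-life to binned decay counts, can be sketched outside GATE/ROOT. This is an assumed miniature of the analysis, not the paper's pipeline: it draws exponential decay times for a low-statistics ¹¹C source and recovers the half-life from a log-linear fit, where low-count bins are exactly where such fits become fragile.

```python
import numpy as np

# Low-statistics half-life extraction: simulate 11C decay times, histogram
# them, and fit log(counts) vs time; t_half = -ln(2) / slope.

T_HALF = 20.364 * 60                 # 11C half-life in seconds
rng = np.random.default_rng(7)
n_decays = 3000                      # low activity: visible statistical noise

times = rng.exponential(T_HALF / np.log(2.0), n_decays)
counts, edges = np.histogram(times, bins=40, range=(0.0, 3.0 * T_HALF))
centers = 0.5 * (edges[:-1] + edges[1:])

mask = counts > 0                    # avoid log(0) in sparse bins
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
t_half_fit = -np.log(2.0) / slope

relative_error = abs(t_half_fit - T_HALF) / T_HALF
```

    With so few counts per bin, the unweighted log-space fit is noticeably noisy and slightly biased, which is one mechanism by which low-activity extractions can deviate in unexpected ways.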

  13. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
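
    As background for the condition being relaxed, detailed balance (pi_i P_ij = pi_j P_ji) can be verified numerically on a small example. The 3-state Metropolis chain below is an assumed illustration, not the paper's construction; non-detailed-balance methods drop the symmetry of the probability flows while keeping pi stationary.

```python
import numpy as np

# Metropolis chain on 3 states with target distribution pi: propose a
# different state uniformly, accept with probability min(1, pi_j / pi_i).

pi = np.array([0.5, 0.3, 0.2])
n = len(pi)
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()                 # remaining mass: rejection

flows = pi[:, None] * P                        # probability flows pi_i * P_ij
db_violation = float(np.abs(flows - flows.T).max())   # zero under detailed balance
stationary_check = float(np.abs(pi @ P - pi).max())   # pi stays stationary
```

    Detailed balance makes every pairwise flow symmetric; non-reversible algorithms instead allow net circulation among states while still satisfying the weaker global balance condition pi P = pi, which is often enough to reduce the random-walk behavior of the sampler.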

  14. Monte Carlo simulations in theoretical physics

    International Nuclear Information System (INIS)

    Billoire, A.

    1991-01-01

    After a presentation of the principle of the Monte Carlo method, the method is applied, first to the calculation of critical exponents in the three-dimensional Ising model, and second to discretized (lattice) quantum chromodynamics, with calculation times given as a function of computer power. 28 refs., 4 tabs
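
    A minimal Metropolis simulation of the Ising model (here in two dimensions, on a small lattice, purely for illustration; the lattice size, temperatures, and sweep counts are assumptions) shows the qualitative behavior such studies quantify: ordering below the critical temperature, disorder above it.

```python
import numpy as np

# Metropolis sketch of the 2D Ising model (J = 1, periodic boundaries):
# flip single spins, accepting energy-raising moves with probability
# exp(-dE / T); measure |magnetization| after equilibration.

def ising_magnetization(L, T, sweeps, seed):
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):                      # one sweep = L*L attempted flips
            i, j = rng.integers(L, size=2)
            nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
               + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2.0 * s[i, j] * nb                 # energy change of flipping s[i,j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        if sweep >= sweeps // 2:                    # measure second half only
            mags.append(abs(s.mean()))
    return float(np.mean(mags))

m_cold = ising_magnetization(L=8, T=1.5, sweeps=200, seed=3)  # below T_c ~ 2.27
m_hot = ising_magnetization(L=8, T=5.0, sweeps=200, seed=3)   # above T_c
```

    Extracting critical exponents requires far larger lattices, finite-size scaling, and careful error analysis, which is where the computer-power considerations mentioned above come in.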

  15. Biases and statistical errors in Monte Carlo burnup calculations: an unbiased stochastic scheme to solve Boltzmann/Bateman coupled equations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C.M.

    2011-01-01

    External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not have benchmark data. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest, such as k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology of coupling transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, this bias has to be quantified and corrected. This is achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the Boltzmann/Bateman coupled equations with Monte Carlo transport codes. Numerical tests are performed with an ad hoc Monte Carlo code on a very simple depletion case and compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation

  16. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
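
    The rejection-free (BKL) algorithm and its physical clock can be shown on the smallest possible example. The two-state system and its rates are assumptions for illustration: each step advances physical time by an exponential increment whose rate is the total escape rate of the current state.

```python
import numpy as np

# Rejection-free kinetic Monte Carlo on a two-state chain with rates k12, k21.
# Every step is accepted; physical time advances by dt ~ Exp(total rate).

rng = np.random.default_rng(5)
k12, k21 = 2.0, 1.0                   # escape rates of states 0 and 1
state, occupancy = 0, np.zeros(2)

for _ in range(200_000):
    rate = k12 if state == 0 else k21  # total escape rate of the current state
    dt = rng.exponential(1.0 / rate)   # physical time spent before the next event
    occupancy[state] += dt
    state = 1 - state                  # with two states the move is deterministic

p0 = occupancy[0] / occupancy.sum()    # time-averaged occupancy of state 0
p0_exact = k21 / (k12 + k21)           # equilibrium value from the rates
```

    Accumulating these exponential increments, rather than counting Monte Carlo steps, is what gives the simulation a physical time axis; the paper's point is that the same time scale is rigorously available for rejection ("standard") algorithms as well.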

  17. Development and application of the automated Monte Carlo biasing procedure in SAS4

    International Nuclear Information System (INIS)

    Tang, J.S.; Broadhead, B.L.

    1995-01-01

    An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. (author)

  18. A Monte Carlo study on event-by-event transverse momentum fluctuation at RHIC

    International Nuclear Information System (INIS)

    Xu Mingmei

    2005-01-01

    The experimental observation on the multiplicity dependence of event-by-event transverse momentum fluctuation in relativistic heavy ion collisions is studied using Monte Carlo simulation. It is found that the Monte Carlo generator HIJING is unable to describe the experimental phenomenon well. A simple Monte Carlo model is proposed, which can recover the data and thus shed some light on the dynamical origin of the multiplicity dependence of event-by-event transverse momentum fluctuation. (authors)

  19. Monte Carlo validation experiments for the gas Cherenkov detectors at the National Ignition Facility and Omega

    Energy Technology Data Exchange (ETDEWEB)

    Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom); Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)

    2013-07-15

    The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are γ bang-time (GBT), defined as the time between the initial laser-plasma interaction and the peak of the fusion reaction history, and burn width, defined as the FWHM of the reaction history. To accurately calculate such parameters, the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.

  20. On Monte Carlo Simulation and Analysis of Electricity Markets

    International Nuclear Information System (INIS)

    Amelin, Mikael

    2004-07-01

    This dissertation is about how Monte Carlo simulation can be used to analyse electricity markets. There is a wide range of applications for simulation; for example, players in the electricity market can use simulation to decide whether or not an investment can be expected to be profitable, and authorities can by means of simulation find out which consequences a certain market design can be expected to have on electricity prices, environmental impact, etc. In the first part of the dissertation, the focus is on which electricity market models are suitable for Monte Carlo simulation. The starting point is a definition of an ideal electricity market. Such an electricity market is on the one hand practical from a mathematical point of view (it is simple to formulate and does not require too complex calculations) and on the other hand a representation of the best possible resource utilisation. The definition of the ideal electricity market is followed by an analysis of how reality differs from the ideal model, what consequences the differences have on the rules of the electricity market and the strategies of the players, and how non-ideal properties can be included in a mathematical model. In particular, questions about environmental impact, forecast uncertainty and grid costs are studied. The second part of the dissertation treats the Monte Carlo technique itself. To reduce the number of samples necessary to obtain accurate results, variance reduction techniques can be used. Here, six different variance reduction techniques are studied and possible applications are pointed out. The conclusions of these studies are turned into a method for efficient simulation of basic electricity markets. The method is applied to some test systems, and the results show that the chosen variance reduction techniques can produce equal or better results using 99% fewer samples compared to when the same system is simulated without any variance reduction technique.
More complex electricity market models
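
    One variance reduction technique of the kind surveyed above, control variates, can be sketched on a toy integrand (the integrand and control are assumptions, not from the dissertation): a correlated quantity with a known mean is used to cancel part of the estimator's noise.

```python
import numpy as np

# Control variates: estimate E[f(X)] for X ~ U(0,1) with f(x) = e^x,
# using g(x) = x (known mean 1/2) as the control variate.

rng = np.random.default_rng(2)
n = 100_000
x = rng.random(n)

f = np.exp(x)                        # target: E[e^X] = e - 1
g = x                                # control with known mean 0.5
beta = np.cov(f, g)[0, 1] / g.var()  # near-optimal coefficient from the samples
fc = f - beta * (g - 0.5)            # controlled estimator, same mean as f

crude_var = f.var(ddof=1)
cv_var = fc.var(ddof=1)
reduction = float(cv_var / crude_var)          # fraction of variance remaining
err = float(abs(fc.mean() - (np.e - 1.0)))     # the estimator stays unbiased
```

    Because variance scales as 1/N, cutting the variance to a few percent of the crude value delivers the same accuracy with correspondingly fewer samples, which is the mechanism behind the 99% sample reduction reported above.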