WorldWideScience

Sample records for monte carlo estimators

  1. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

In this paper we propose a new approach to estimating the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on samples of small length. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and also works well on small data samples. The new estimator gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices, estimating the tail exponent over the two separate periods 2002-2005 and 2006-2009.
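
The Hill estimator discussed in this record is short enough to sketch directly. The snippet below is a minimal NumPy illustration, not the authors' code; the sample size, seed, and tail fraction k are arbitrary choices. It draws an exact Pareto sample with tail exponent α = 1.5 and recovers α from the k largest order statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_estimator(x, k):
    """Hill estimate of the tail exponent alpha from the k largest order statistics."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])      # log-spacings over the (k+1)-th value
    return 1.0 / np.mean(logs)

# Classical Pareto(alpha = 1.5): survival P(X > x) = x^(-1.5) for x >= 1.
alpha_true = 1.5
x = rng.pareto(alpha_true, size=100_000) + 1.0   # numpy's pareto is Lomax; shift to x_m = 1

est = hill_estimator(x, k=2_000)
```

For an exact Pareto sample the estimate is close to the true α; the abstract's point is that under general α-stable data the same estimator becomes biased and strongly sensitive to k.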

  2. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo, and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. The comparison shows that the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency

  3. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.

  4. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.

  5. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. Standard Monte Carlo (SMC) simulation may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy [...] is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated [...]

  6. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

Monte Carlo neutronics analysis requires the capability to estimate tally distributions, such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as probability density function estimation from an observation set. We apply the neural-network-based density estimation method to an observation and sampling weight set produced by Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)

  7. Monte Carlo-Based Tail Exponent Estimator

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Vácha, Lukáš

    2010-01-01

    Roč. 2010, č. 6 (2010), s. 1-26 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045; GA ČR GP402/08/P207 Institutional research plan: CEZ:AV0Z10750506 Keywords : Hill estimator * α-stable distributions * tail exponent estimation Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/barunik-0342493.pdf

  8. Combining four Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Urbatsch, Todd J.

    2011-01-01

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the FOM of the combined estimator is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10 - 20% greater than any of the solo estimators' FOM. The numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions. (author)
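
The record above does not spell out its combination rule, but a standard way to combine several unbiased estimators of the same tally is inverse-variance weighting. The sketch below is a hedged toy illustration, not the authors' method: it assumes three hypothetical, independent unbiased estimators with different spreads (in a real Monte Carlo run the tallies share histories and are correlated, which a proper combination must account for):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical unbiased estimators of the same quantity (true value 2.0),
# with different per-history spreads, loosely mimicking analog / collision /
# track-length tallies. Independence is an assumption of this toy.
true_val = 2.0
n = 50_000
samples = [true_val + rng.normal(0, s, n) for s in (3.0, 1.0, 0.3)]

means = np.array([s.mean() for s in samples])
variances = np.array([s.var(ddof=1) / n for s in samples])   # variance of each mean

w = 1.0 / variances
w /= w.sum()
combined = np.dot(w, means)                       # inverse-variance weighted mean
combined_var = 1.0 / (1.0 / variances).sum()      # never worse than the best solo term
```

The combined variance is always at most that of the best single estimator, matching the abstract's observation that the combined figure of merit is never worse than the best solo estimator's.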

  9. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to be solved increases. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. However, such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows simultaneous estimation of several quantities. Therefore, making the estimation of one of them more accurate could at the same time degrade the variance of the other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the proposed method is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
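
The rare-event difficulty described above, and the idea of biasing the sampling law toward the event of interest, can be shown on a one-line problem. This is a generic importance-sampling sketch (not the paper's Markovian scheme): estimating P(X > 4) for a standard normal X, analog sampling almost never hits the region, while sampling from a shifted density and reweighting by the density ratio recovers the tiny probability accurately:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
a = 4.0
p_true = 0.5 * (1 - erf(a / sqrt(2)))        # P(N(0,1) > 4), about 3.17e-5

n = 100_000
# Analog estimator: only a handful of the n samples ever exceed a.
x = rng.normal(0, 1, n)
p_analog = np.mean(x > a)

# Importance sampling: draw from N(a, 1) and weight by phi(y) / phi_a(y).
y = rng.normal(a, 1, n)
w = np.exp(-0.5 * y**2 + 0.5 * (y - a) ** 2)     # = exp(-a*y + a^2/2)
p_is = np.mean((y > a) * w)
```

With the shifted law, roughly half the samples land in the rare region, so the relative error of p_is is under a percent here, while the analog estimate rests on a few lucky hits.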

  10. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    International Nuclear Information System (INIS)

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
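
The five ORNL metrics are not reproduced in this record, but the general idea of borrowing Markov chain convergence diagnostics for tally series can be sketched with a Geweke-style z-score, which compares the mean of an early segment of a chain against a late segment. This is an assumed, simplified stand-in (it ignores autocorrelation, which is fine here only because the toy samples are independent):

```python
import numpy as np

rng = np.random.default_rng(3)

def geweke_z(chain, first=0.1, last=0.5):
    """Z-score comparing the mean of the first 10% and last 50% of a chain.
    |z| well above 2 suggests the tallied quantity is still drifting
    (an undersampling symptom). Segments are treated as independent here."""
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

stationary = rng.normal(0, 1, 10_000)
drifting = rng.normal(0, 1, 10_000) + np.linspace(0, 2, 10_000)  # mean still converging

z_ok = geweke_z(stationary)
z_bad = geweke_z(drifting)
```

For correlated tally series the segment variances would need a spectral or batch-means correction, as in the original Geweke diagnostic.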

  11. Unified definition of a class of Monte Carlo estimators

    International Nuclear Information System (INIS)

    Lux, I.

    1978-01-01

A unified definition of a wide class of Monte Carlo reaction rate estimators is presented; most commonly used estimators belong to this class. The definition is given through an integral transformation of an arbitrary estimator of the class. Since the transformation contains an arbitrary function, in principle an infinite number of new estimators can be defined on the basis of one known estimator. It is shown that the most common estimators belonging to the class, such as the track-length and expectation estimators, are special cases of the transformation, corresponding to the simplest transformation kernels applied to the usual collision estimator. A pair of new estimators is defined and their variances are compared to the variance of the expectation estimator. One of the new estimators, called the trexpectation estimator, seems to be appropriate for flux-integral estimation in moderator regions. The other, which uses an intermediate estimation of the final result and is therefore called the self-improving estimator, always yields a lower variance than the expectation estimator. As is shown, this estimator closely approximates what is possibly the best estimator of the class. Numerical results are presented for the simplest geometries, and these results indicate that for absorbers that are not too strong, in practical cases the standard deviation of the self-improving estimator is less than that of the expectation estimator by more than 10%. The experiments also suggest that the self-improving estimator is always superior to the track-length estimator as well, i.e., that it is the best of all known estimators belonging to the class. In the Appendices, for simplified cases, approximate conditions are given under which the trexpectation and track-length estimators show a higher efficiency than the expectation estimator
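
Two members of this estimator class, the track-length and collision estimators, can be compared on a toy problem. The sketch below is a minimal illustration (not from the paper; the cross-section, slab thickness, and sample size are arbitrary): particles enter an optically thin, purely absorbing 1-D slab, and both estimators score the same path-length (flux) integral, for which the analytic value is known. Both are unbiased, but in this thin-slab regime the track-length estimator has the smaller variance:

```python
import numpy as np

rng = np.random.default_rng(4)

sigma_t = 1.0        # total cross-section (pure absorber, for simplicity)
T = 0.2              # slab thickness: optically thin, where track-length shines
n = 200_000

d = rng.exponential(1.0 / sigma_t, n)        # distance to first collision

# Track-length estimator: score the path length each particle lays down in [0, T].
track = np.minimum(d, T)

# Collision estimator: score 1/sigma_t for each collision occurring inside the slab.
collide = np.where(d < T, 1.0 / sigma_t, 0.0)

# Analytic path-length expectation: integral of exp(-sigma*x) over [0, T].
flux_exact = (1 - np.exp(-sigma_t * T)) / sigma_t
```

In an optically thick region the ranking can reverse; the interesting point, in the spirit of the record, is that distinct unbiased scores of the same integral can have very different variances.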

  12. Non-analog Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Densmore, Jeffery D.

    2009-01-01

The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.

  13. Information criteria and higher Eigenmode estimation in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Nease, B. R.; Ueki, T.

    2007-01-01

    Recently developed Monte Carlo methods of estimating the dominance ratio (DR) rely on autoregressive (AR) fittings of a computed time series. This time series is obtained by applying a projection vector to the fission source distribution of the problem. The AR fitting order necessary to accurately extract the mode corresponding to DR is dependent on the number of fission source bins used. This makes it necessary to examine the convergence of DR as the AR fitting order increases. Therefore, we have investigated if the AR fitting order determined by information criteria can be reliably used to estimate DR. Two information criteria have been investigated: Improved Akaike Information Criteria (AICc) and Minimum Descriptive Length Criteria (MDL). These criteria appear to work well when applied to computations with fine bin structure where the projection vector is applied. (authors)
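
The AR-fitting-with-information-criteria step can be sketched on synthetic data. The code below is not the authors' implementation: it fits AR(p) models by least squares to a series generated from a known AR(2) process and picks the order minimizing the corrected Akaike criterion (AICc), the first of the two criteria the record mentions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic time series from a known, stationary AR(2) process.
n = 2_000
phi = np.array([1.5, -0.75])
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi[0] * y[t - 1] + phi[1] * y[t - 2] + rng.normal()

def ar_aicc(y, p):
    """Least-squares AR(p) fit; returns the corrected Akaike criterion (AICc)."""
    Y = y[p:]
    X = np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    m = len(Y)
    k = p + 1                                    # AR coefficients + noise variance
    aic = m * np.log(resid.var()) + 2 * k
    return aic + 2 * k * (k + 1) / (m - k - 1)   # small-sample correction

scores = {p: ar_aicc(y, p) for p in range(1, 9)}
best_p = min(scores, key=scores.get)             # order chosen by AICc
```

With a strong second lag, AR(1) fits markedly worse than AR(2), so the criterion lands at (or very near) the true order, which is the behavior the record relies on when extracting the dominance-ratio mode.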

  14. Asteroid mass estimation using Markov-chain Monte Carlo

    Science.gov (United States)

    Siltala, Lauri; Granvik, Mikael

    2017-11-01

Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
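
The MCMC machinery behind such an estimate can be illustrated in one dimension. The sketch below is a hypothetical toy, not the authors' 13-dimensional sampler: a random-walk Metropolis-Hastings chain samples the posterior of a single "mass-like" parameter m from noisy measurements, with a flat prior and Gaussian likelihood:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical setup: 40 noisy measurements whose mean is the unknown parameter m.
m_true = 2.5
obs = m_true + rng.normal(0, 0.5, size=40)

def log_post(m):
    # Flat prior on m; Gaussian likelihood with known sigma = 0.5.
    return -0.5 * np.sum((obs - m) ** 2) / 0.5**2

# Random-walk Metropolis-Hastings.
chain = np.empty(20_000)
m, lp = 0.0, log_post(0.0)
for i in range(len(chain)):
    prop = m + rng.normal(0, 0.2)                 # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        m, lp = prop, lp_prop
    chain[i] = m

posterior = chain[5_000:]                         # discard burn-in
```

Unlike the linearized least-squares fits criticized in the record, the chain's spread directly characterizes the posterior uncertainty without assuming its shape.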

  15. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
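
Thermodynamic integration with a heating coefficient can be checked on a conjugate toy model where the marginal likelihood is known in closed form. This sketch is an illustrative stand-in, not the paper's groundwater setup: with prior θ ~ N(0, 1) and one observation y | θ ~ N(θ, 1), the power posterior at temperature t is itself Gaussian, so it can be sampled exactly at each t, and log Z is the integral of the expected log-likelihood over t:

```python
import numpy as np

rng = np.random.default_rng(7)

# Conjugate toy model with a known answer: theta ~ N(0,1), y | theta ~ N(theta,1).
y = 1.3
log_Z_exact = -0.5 * np.log(2 * np.pi * 2.0) - y**2 / 4.0   # y ~ N(0, 2) marginally

def log_like(theta):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

# Power posterior at heating coefficient t: precision 1 + t, mean t*y/(1+t).
ts = np.linspace(0.0, 1.0, 21)
e_loglike = []
for t in ts:
    var = 1.0 / (1.0 + t)
    theta = rng.normal(t * y * var, np.sqrt(var), size=50_000)
    e_loglike.append(log_like(theta).mean())     # E_t[log L], estimated by MC

# Thermodynamic integration: log Z = integral over t in [0,1] of E_t[log L]
# (trapezoid rule over the temperature ladder).
e = np.array(e_loglike)
log_Z_ti = np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(ts))
```

In a real application the power posteriors are not Gaussian and each rung of the temperature ladder is sampled with its own MCMC run; the integration step is unchanged.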

  16. Intelligent Monte Carlo phase-space division and importance estimation

    International Nuclear Information System (INIS)

    Booth, T.E.

    1989-01-01

Two years ago, a quasi-deterministic method (QD) for obtaining the Monte Carlo importance function was reported. Since then, a number of very complex problems have been solved with the aid of QD. Not only does QD estimate the importance far faster than the (weight window) generator currently in MCNP, QD requires almost no user intervention, in contrast to the generator. However, both the generator and QD require the user to divide the phase space into importance regions. That is, both methods will estimate the importance of a phase-space region, but the user must define the regions. In practice this is tedious and time consuming, and many users are not particularly good at defining sensible importance regions. To make full use of the fact that QD is capable of getting good importance estimates in tens of thousands of phase-space regions relatively easily, some automatic method for dividing the phase space will be useful and perhaps essential. This paper describes recent progress toward an automatic and intelligent phase-space divider

  17. Monte Carlo estimation of the absorbed dose in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Woo; Youn, Han Bean; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

The purpose of this study is to devise an algorithm for calculating absorbed dose distributions of patients based on Monte Carlo (MC) methods, including the dose contributions of both primary and secondary (scattered) x-ray photons. Assessment of patient dose in computed tomography (CT) at the population level has become a subject of public attention and concern, and ultimately CT quality assurance and dose optimization have the goal of reducing radiation-induced cancer risks in the examined population. However, the conventional CT dose index (CTDI) concept is not a surrogate of risk; it has rather been designed to measure an average central dose. In addition, the CTDI and the dose-length product have shown problems for helical CT with wider beam collimation. Simple algorithms to estimate a patient-specific CT dose based on the MCNP output data have been introduced. For numerical chest and head phantoms, the spatial dose distributions were calculated, and the results were reasonable. The estimated dose distribution map can be readily converted into the effective dose. Further studies will include validation of the models against experimental measurements and acceleration of the algorithms.

  18. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  19. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

Results for the effective multiplication factor in a Monte Carlo criticality calculation are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k-eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k-eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k-eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  1. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model fits the data better than the state space model with linear and Gaussian assumptions.
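
A Monte Carlo algorithm commonly used for exactly this kind of nonlinear, non-Gaussian state-space filtering is the bootstrap particle filter. The sketch below is a hypothetical toy, not the paper's model: a latent crack-depth-like state grows slowly, only a noisy nonlinear (log) indirect indicator is observed, and the filter propagates particles, weights them by the likelihood, and resamples:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical degradation model: latent depth x grows by ~0.1 per step;
# we only observe a noisy nonlinear indirect indicator y = log(x) + noise.
steps, n_part = 60, 5_000
x = np.zeros(steps); y = np.zeros(steps)
x[0] = 1.0
for t in range(1, steps):
    x[t] = x[t - 1] + 0.1 + rng.normal(0, 0.02)
    y[t] = np.log(x[t]) + rng.normal(0, 0.1)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
p = np.full(n_part, 1.0)                  # particles start at the known x[0]
est = np.zeros(steps); est[0] = 1.0
for t in range(1, steps):
    p = p + 0.1 + rng.normal(0, 0.02, n_part)            # propagate with the dynamics
    w = np.exp(-0.5 * ((y[t] - np.log(p)) / 0.1) ** 2)   # likelihood weights
    w /= w.sum()
    est[t] = np.dot(w, p)                                # filtered mean (direct indicator)
    p = rng.choice(p, size=n_part, p=w)                  # multinomial resampling
```

No linearity or Gaussianity is required of the observation map, which is the flexibility the record claims over classical (Kalman-style) state space estimators.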

  2. Asteroid mass estimation with Markov-chain Monte Carlo

    Science.gov (United States)

    Siltala, Lauri; Granvik, Mikael

    2017-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.

  3. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
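
The two schemes agree exactly in the noise-free linear case, which makes a compact sanity check. The sketch below is an assumed toy (not from the paper): the observable shifts linearly, by c_i per one-standard-deviation change of systematic parameter i, so the true systematic variance is the sum of the squared shifts; unisim recovers it from one run per parameter, multisim from the spread of runs with all parameters drawn at random:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy linear model: the observable shifts by c_i per 1-sigma change of systematic i.
c = np.array([0.5, 1.2, 0.3, 0.8])
var_true = np.sum(c ** 2)          # exact systematic variance in the linear case

# Unisim: one MC run per parameter, each varied by +1 sigma (noise-free here).
shifts = c * 1.0                   # observed change of the result in each run
var_unisim = np.sum(shifts ** 2)

# Multisim: every run varies all parameters, drawn from their distributions.
n_runs = 10_000
draws = rng.normal(0, 1, size=(n_runs, len(c)))
results = draws @ c                # observable shift in each run
var_multisim = results.var(ddof=1)
```

The trade-off the record analyzes appears once each MC run carries its own statistical noise: that noise enters the unisim shifts directly but is averaged down across the many multisim runs.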

  6. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  7. Productivity estimation in welding by Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    José Luiz Ferreira Martins

    2011-09-01

Full Text Available The aim of this article is to analyze the feasibility of using the Monte Carlo method to estimate productivity in the welding of carbon steel industrial piping, based on small samples. The study was carried out by analyzing a reference sample containing productivity data for 160 joints welded by the SMAW process at REDUC (the Duque de Caxias refinery), using the ControlTub 5.3 software. From these data, samples of 10, 15, and 20 elements were drawn at random, and simulations were run by the Monte Carlo method. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained using the Monte Carlo method to estimate welding productivity. In the Brazilian construction industry, by contrast, the mean productivity value is normally used as a productivity indicator; it is based on historical data from other projects, collected and evaluated only after project completion, which is a limitation. This article presents a tool for real-time evaluation of execution, allowing estimates to be adjusted and productivity to be monitored during the project. Likewise, in bidding, budgeting, and schedule estimation, this technique allows the adoption of estimates other than the commonly used mean productivity; as an alternative, three criteria are suggested: optimistic, mean, and pessimistic productivity.
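
As a rough illustration of the idea (not the article's ControlTub data; the productivity values below are invented), a resampling-style Monte Carlo on a small sample yields the optimistic/mean/pessimistic estimates the authors suggest:

```python
import random

def simulate_productivity(sample, n_sims=10000, seed=7):
    """Resample the small productivity sample with replacement many times
    and return percentile-based estimates of the mean productivity."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_sims):
        draw = [rng.choice(sample) for _ in sample]
        means.append(sum(draw) / len(draw))
    means.sort()
    return {
        "pessimistic": means[int(0.10 * n_sims)],  # 10th percentile
        "mean": sum(means) / n_sims,
        "optimistic": means[int(0.90 * n_sims)],   # 90th percentile
    }

# Hypothetical productivity sample of 10 welded joints (joints/day)
sample = [3.1, 2.8, 3.5, 2.9, 3.3, 2.7, 3.0, 3.4, 2.6, 3.2]
print(simulate_productivity(sample))
```

The spread between the pessimistic and optimistic percentiles narrows as more joints are completed, which is what allows monitoring during the project rather than only after it.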

  8. Estimation of balance uncertainty using Direct Monte Carlo Simulation (DSMC) on a CPU-GPU architecture

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2017-01-01

    Full Text Available The estimation of balance uncertainty using conventional statistical and error propagation methods has been found to be both approximate and laborious to the point of being untenable. Direct Simulation by Monte Carlo (DSMC) has been shown...

  9. Monte Carlo estimation for pediatric barium meal procedures

    International Nuclear Information System (INIS)

    Filipov, D.; Schelin, H.R.; Denyak, V.; Legnani, A.; Ledesma, J.A.; Paschuk, S.A.; Sauzen, J.; Yagui, A.; Hoff, G.; Khoury, H.J.

    2015-01-01

Fluoroscopic barium meal (BM) series involve an X-ray examination of the esophagus, stomach, and duodenum using a contrast medium, barium sulfate (BaSO4). They are widely used to observe digestive functions or to diagnose abnormalities such as ulcers; tumors; inflammation of the esophagus, stomach, and duodenum; malrotations; vascular rings; and gastroesophageal reflux disease (a common ailment in children). However, this procedure uses long fluoroscopy times and multiple radiographs, resulting in high effective doses to pediatric patients, whose radiosensitivity and life expectancy are higher than in adults. Based on those data, the aims of the current study are to determine the P_KA (kerma-area product) values on the patient chest area and the effective doses to 5- and 10-year-old children. Thirty-seven pediatric patients were studied and stratified into two age groups: 5 and 10 years old. For each procedure, the following data were recorded: the patient's sex, age, and upper chest thickness; the technical parameters of the procedure (kV, fluoroscopy time, and number of radiographs); the distances (focus-detector and focus-table); and the field size on the examination table. Three pairs of LiF:Mg,Ti thermoluminescent dosimeters were positioned at the center of the child's sternum. The upper chest thickness was then subtracted from the focus-table distance to obtain the focus-patient distance. Using the field size on the table and similar-triangles arguments, the field size on the patient was obtained and multiplied by the mean kerma (from the dosimeters) to determine P_KA. To estimate the effective dose, P_KA and the technical parameters of the procedure (kV, total filtration, focus-detector distance, and field size on the patient) were entered into a Monte Carlo simulation code. The resulting P_KA values and effective doses were higher than in the studies used for comparison, which shows the importance of an

  10. Six types Monte Carlo for estimating the current unavailability of Markov system with dependent repair

    International Nuclear Information System (INIS)

    Xiao Gang; Li Zhizhong

    2004-01-01

Based on an integral equation describing the life-history of a Markov system, six types of estimators of the current unavailability of a Markov system with dependent repair are proposed. Combined with biased sampling of the system's state transition times, six Monte Carlo methods for estimating the current unavailability are given. Two numerical examples are used to compare the variances and efficiencies of the six Monte Carlo methods. (authors)

  11. A Monte Carlo estimation of effective dose in chest tomosynthesis

    International Nuclear Information System (INIS)

    Sabol, John M.

    2009-01-01

Purpose: The recent introduction of digital tomosynthesis imaging into routine clinical use has enabled the acquisition of volumetric patient data within a standard radiographic examination. Tomosynthesis requires the acquisition of multiple projection views, requiring additional dose compared to a standard projection examination. Knowledge of the effective dose is needed to make an appropriate decision between standard projection, tomosynthesis, and CT for thoracic x-ray examinations. In this article, the effective dose to the patient of chest tomosynthesis is calculated and compared to a standard radiographic examination and to values published for thoracic CT. Methods: Radiographic technique data for posterior-anterior (PA) and left lateral (LAT) radiographic chest examinations of medium-sized adults was obtained from clinical sites. From these data, the average incident air kerma for the standard views was determined. A commercially available tomosynthesis system was used to define the acquisition technique and geometry for each projection view. Using Monte Carlo techniques, the effective dose of the PA, LAT, and each tomosynthesis projection view was calculated. The effective dose for all projections of the tomosynthesis sweep was summed and compared to the calculated PA and LAT values and to the published values for thoracic CT. Results: The average incident air kermas for the PA and left lateral clinical radiographic examinations were found to be 0.10 and 0.40 mGy, respectively. The effective dose for the PA view of a patient of the size of an average adult male was determined to be 0.017 mSv (ICRP 60) [0.018 mSv (ICRP 103)]. For the left lateral view of the same sized patient, the effective dose was determined to be 0.039 mSv (ICRP 60) [0.050 mSv (ICRP 103)]. The cumulative mAs for a tomosynthesis examination is recommended to be ten times the mAs of the PA image. With this technique, the effective dose for an average tomosynthesis examination was

  12. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  13. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
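
The event-batching idea translates directly to modern array languages. A minimal sketch (assuming NumPy is available; a pure-absorber slab toy problem, not the multigroup treatment of the paper) samples every particle's free-flight distance in a single vectorized operation:

```python
import numpy as np

def transmission_vectorized(sigma_t, thickness, n_particles=1_000_000, seed=0):
    """Track a whole batch of histories at once: sample all free-flight
    distances in one vector draw and count the particles that cross an
    absorber-only slab without colliding."""
    rng = np.random.default_rng(seed)
    distances = rng.exponential(scale=1.0 / sigma_t, size=n_particles)
    transmitted = np.count_nonzero(distances > thickness)
    return transmitted / n_particles

sigma_t, thickness = 1.0, 2.0          # assumed total cross section and slab width
mc = transmission_vectorized(sigma_t, thickness)
exact = np.exp(-sigma_t * thickness)   # analytic uncollided transmission
print(mc, exact)
```

Processing the batch as one array is the same "many particle events at once" strategy the abstract describes, with the vector hardware replaced by NumPy's array kernels.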

  14. Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)

    2017-03-15

The concept of the partition function zeros is one of the most efficient methods for investigating the phase transitions and the critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracies of the partition function zeros have been evaluated for three different flatness criteria for the histogram H(E): 5%, 10%, and 20%.
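
For intuition, the Wang-Landau flat-histogram iteration can be demonstrated on a system whose density of states is known exactly. The sketch below (a toy model of N independent two-state spins, not the Ising lattice of the paper; the flatness threshold and modification-factor schedule are illustrative choices) estimates ln Ω(E) and can be checked against the binomial coefficients:

```python
import math
import random

def wang_landau_coins(N=10, flatness=0.8, f_final=1e-6, seed=5):
    """Wang-Landau random walk over N binary spins, where the 'energy' E is
    the number of up spins; the exact density of states is C(N, E)."""
    rng = random.Random(seed)
    log_g = [0.0] * (N + 1)   # running estimate of ln g(E)
    hist = [0] * (N + 1)      # visit histogram for the flatness test
    state = [0] * N
    E = 0
    log_f = 1.0               # ln of the modification factor
    while log_f > f_final:
        for _ in range(1000):
            i = rng.randrange(N)
            E_new = E + (1 if state[i] == 0 else -1)
            # accept with probability min(1, g(E) / g(E_new))
            delta = log_g[E] - log_g[E_new]
            if delta >= 0 or rng.random() < math.exp(delta):
                state[i] ^= 1
                E = E_new
            log_g[E] += log_f   # update the current bin, accepted or not
            hist[E] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (N + 1)
            log_f /= 2.0        # tighten the modification factor
    base = log_g[0]             # fix the arbitrary constant: ln C(N,0) = 0
    return [lg - base for lg in log_g]

log_g = wang_landau_coins()
exact = [math.log(math.comb(10, k)) for k in range(11)]
print(max(abs(a - b) for a, b in zip(log_g, exact)))  # small residual error
```

The same loop structure carries over to the Ising case; only the energy update and the size of the g(E) table change.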

  15. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  16. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  17. Estimation of ex-core detector responses by adjoint Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

Ex-core detector responses can be efficiently calculated by combining an adjoint Monte Carlo calculation with the converged source distribution of a forward Monte Carlo calculation. As the fission source distribution from a Monte Carlo calculation is given only as a collection of discrete space positions, the coupling requires a point flux estimator for each collision in the adjoint calculation. To avoid the infinite variance problems of the point flux estimator, a next-event finite-variance point flux estimator has been applied, which is an energy-dependent form, for heterogeneous media, of a finite-variance estimator known from the literature. To test the effects of this combined adjoint-forward calculation a simple geometry of a homogeneous core with a reflector was adopted with a small detector in the reflector. To demonstrate the potential of the method the continuous-energy adjoint Monte Carlo technique with anisotropic scattering was implemented with energy dependent absorption and fission cross sections and constant scattering cross section. A gain in efficiency over a completely forward calculation of the detector response was obtained, which is strongly dependent on the specific system and especially the size and position of the ex-core detector and the energy range considered. Further improvements are possible. The method works without problems for small detectors, even for a point detector and a small or even zero energy range. (authors)

  18. Reducing uncertainty of Monte Carlo estimated fatigue damage in offshore wind turbines using FORM

    DEFF Research Database (Denmark)

    H. Horn, Jan-Tore; Jensen, Jørgen Juncher

    2016-01-01

    Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue...

  19. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  20. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
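
The π example from the outline is a one-liner in practice. A minimal hit-or-miss sketch (standard textbook construction, not taken from the slides themselves):

```python
import random

def estimate_pi(n_samples, seed=42):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the unit
    square that land inside the quarter circle estimates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

print(estimate_pi(1_000_000))  # close to 3.1416, with O(1/sqrt(N)) error
```

The O(1/√N) convergence of this estimate is exactly what the Law of Large Numbers and Central Limit Theorem sections of the outline justify.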

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem

  2. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
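
Of the algorithms listed, importance sampling is the easiest to demonstrate in a few lines. The sketch below (a standard textbook example, not from the review itself) estimates the Gaussian tail probability P(X > 3), where naive sampling almost never scores a hit:

```python
import math
import random

def tail_prob_naive(n, seed=0):
    """Plain MC estimate of P(X > 3) for X ~ N(0,1): hits are very rare."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) > 3 for _ in range(n)) / n

def tail_prob_importance(n, seed=0):
    """Importance sampling: draw from an exponential shifted to start at 3,
    then reweight by the ratio of target to proposal densities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = 3.0 + rng.expovariate(3.0)  # proposal density: 3*exp(-3*(y-3))
        normal_pdf = math.exp(-y * y / 2) / math.sqrt(2 * math.pi)
        proposal_pdf = 3.0 * math.exp(-3.0 * (y - 3.0))
        total += normal_pdf / proposal_pdf
    return total / n

# True value: 1 - Phi(3) is about 1.35e-3. With 10,000 draws the naive
# estimator sees only a dozen or so hits; the importance-sampled one is
# already accurate to a few parts per thousand.
print(tail_prob_naive(10_000), tail_prob_importance(10_000))
```

Concentrating the proposal on the region that actually contributes to the integral is the same idea behind the variance-reduction schemes in the transport-oriented records above.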

  3. Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Szieberth, M.; Kloosterman, J.L.

    2014-01-01

    Highlights: • The history splitting method was developed for non-Boltzmann Monte Carlo estimators. • The method allows variance reduction for pulse-height and higher moment estimators. • It works in highly multiplicative problems but Russian roulette has to be replaced. • Estimation of higher moments allows the simulation of neutron noise measurements. • Biased sampling of fission helps the effective simulation of neutron noise methods. - Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible

  4. First Passage Probability Estimation of Wind Turbines by Markov Chain Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

Markov Chain Monte Carlo simulation has received considerable attention within the past decade as reportedly one of the most powerful techniques for the first passage probability estimation of dynamic systems. A very popular method in this direction capable of estimating probability of rare events… of the method by modifying the conditional sampler. In this paper, applicability of the original SS is compared to the recently introduced modifications of the method on a wind turbine model. The model incorporates a PID pitch controller which aims at keeping the rotational speed of the wind turbine rotor equal… to its nominal value. Finally Monte Carlo simulations are performed which allow assessment of the accuracy of the first passage probability estimation by the SS methods.

  5. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates

  6. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, the characteristic function, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...

  7. Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle

    International Nuclear Information System (INIS)

    Arai, R; Fukuda, H; Numazawa, T; Tamura, R; Li, J; Saito, A T; Nakagome, H; Kaji, S

    2015-01-01

In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. It is crucial to have a detailed understanding of the physical properties of the materials to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms are substituted with non-magnetic elements for material design, taking Gd, a typical magnetocaloric ferromagnetic material, as the base. For this purpose, alongside calculations using the S=7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method. (paper)
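
The S = 7/2 Ising treatment in the paper is beyond a short sketch, but the underlying Metropolis machinery can be shown on the simplest case. The following toy (spin-1/2, 2D square lattice, J = 1, k_B = 1; not the authors' Gd model, and the lattice size and sweep counts are illustrative) reproduces the qualitative ordered/disordered behaviour on either side of the critical temperature:

```python
import math
import random

def ising_magnetization(L, T, sweeps=400, seed=3):
    """Metropolis sampling of a 2D spin-1/2 Ising lattice; returns the
    average |magnetization| per spin after discarding a burn-in period."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]   # start fully ordered
    measurements = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb     # energy cost of flipping spin (i,j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
        if sweep >= sweeps // 2:          # discard the first half as burn-in
            m = sum(map(sum, spins)) / (L * L)
            measurements.append(abs(m))
    return sum(measurements) / len(measurements)

# Below T_c (about 2.27 for the 2D Ising model) the lattice stays ordered;
# well above it, the magnetization collapses.
print(ising_magnetization(8, 1.5))
print(ising_magnetization(8, 5.0))
```

Estimating the Curie temperature amounts to scanning T for the drop in |m| (or the peak in specific heat), which is the procedure the paper applies to its diluted-Gd model.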

  8. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems, in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems

  9. Improved Monte Carlo-perturbation method for estimation of control rod worths in a research reactor

    International Nuclear Information System (INIS)

    Kalcheva, Silva; Koonen, Edgar

    2009-01-01

A hybrid method dedicated to improving the experimental technique for estimation of control rod worths in a research reactor is presented. The method uses a combination of Monte Carlo technique and perturbation theory. The perturbation method is used to obtain the equation for the relative efficiency of control rod insertion. A series of coefficients describing the axial absorption profile is used to correct the equation for a composite rod having a complicated burn-up irradiation history. These coefficients have to be determined either by experiment or by some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross-sections, obtained from detailed Monte Carlo calculations by MCNPX 2.6.F of the axial burn-up profile during control rod life. The method is validated on measurements of control rod worths at the BR2 reactor. A comparison with direct MCNPX evaluations of control rod worths is also presented

  10. Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Claudius A. Peleskei

    2015-06-01

Full Text Available Construction projects usually involve high investments. They are therefore risky undertakings for companies, as the actual costs of construction projects nearly always exceed the planned costs. This is due to the various risks and the large uncertainty existing within this industry. Determination and quantification of risks and their impact on project costs is described as one of the most difficult areas within the construction industry. This paper analyses how the cost of construction projects can be estimated using Monte Carlo Simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution. The research examines the effect of correlation between different project costs on the result of the Monte Carlo Simulation. The paper finds that Monte Carlo Simulation can be a helpful tool for risk managers and can be used for cost estimation of construction projects. The research has shown that cost distributions are positively skewed and cost elements seem to have some interdependent relationships.
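
The paper's point about correlation can be made concrete with a two-element toy model (invented base costs and volatilities; a simple two-variable Cholesky construction for the correlated normals): ignoring a positive correlation between cost elements understates the spread of the total cost.

```python
import math
import random

def simulate_total_cost(n_sims=20000, rho=0.8, seed=11):
    """Draw two lognormal cost elements whose underlying normals are
    correlated with coefficient rho; return mean and std of the total."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        labour = 100 * math.exp(0.2 * z1)     # hypothetical base cost 100
        material = 150 * math.exp(0.25 * z2)  # hypothetical base cost 150
        totals.append(labour + material)
    n = len(totals)
    mean = sum(totals) / n
    std = math.sqrt(sum((t - mean) ** 2 for t in totals) / (n - 1))
    return mean, std

mean_corr, std_corr = simulate_total_cost(rho=0.8)
mean_ind, std_ind = simulate_total_cost(rho=0.0)
# Positive correlation leaves the mean roughly unchanged but widens the
# distribution of the total, i.e. raises the cost-overrun risk.
print(std_corr, std_ind)
```

Note that the skewness the paper observes also appears here automatically, since sums of lognormal elements are positively skewed.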

  11. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  12. Burnup Estimation of Rhodium Self-Powered Neutron Detector Emitter in VVER Reactor Core Using Monte Carlo Simulations

    OpenAIRE

    Khrutchinsky, А. А.; Kuten, S. A.; Babichev, L. F.

    2011-01-01

Estimation of burn-up in a rhodium-103 emitter of a self-powered neutron detector in the VVER-1000 reactor core has been performed using Monte Carlo simulations within the approximation of a constant neutron flux.

  13. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  14. Improved Monte Carlo - Perturbation Method For Estimation Of Control Rod Worths In A Research Reactor

    International Nuclear Information System (INIS)

    Kalcheva, Silva; Koonen, Edgar

    2008-01-01

A hybrid method dedicated to improving the experimental technique for estimation of control rod worths in a research reactor is presented. The method uses a combination of Monte Carlo technique and perturbation theory. The perturbation theory is used to obtain the relation between the relative rod efficiency and the buckling of the reactor with a partially inserted rod. A series of coefficients describing the axial absorption profile is used to correct the buckling for an arbitrary composite rod having a complicated burn-up irradiation history. These coefficients have to be determined either by experiment or by some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross sections, obtained from detailed Monte Carlo calculations by MCNPX 2.6.F of the axial burn-up profile during control rod life. The method is validated on measurements of control rod worths at the BR2 reactor. A comparison with direct Monte Carlo evaluations of control rod worths is also presented. The uncertainties arising from the approximations used in the presented hybrid method are discussed. (authors)

  15. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 19, Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, August 2014, pp 713-739.

  16. Monte-Carlo estimation of the inflight performance of the GEMS satellite x-ray polarimeter

    Science.gov (United States)

    Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith; Krizmanic, John; Sturner, Steven; Griffiths, Scott; Kaaret, Philip; Marlowe, Hannah

    2014-07-01

    We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.

  17. Accuracy estimation for intermediate and low energy neutron transport calculation with Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    Kotegawa, Hiroshi; Sasamoto, Nobuo; Tanaka, Shun-ichi

    1987-02-01

    Both the measured radioactive inventory due to neutron activation in the shield concrete of JPDR and the measured intermediate and low energy neutron spectra penetrating through a graphite sphere are analyzed using the continuous energy model Monte Carlo code MCNP, so as to estimate the calculational accuracy of the code for neutron transport in the thermal and epithermal energy regions. The analyses reveal that MCNP calculates thermal neutron spectra fairly accurately, while it apparently over-estimates epithermal neutron spectra (of approximately 1/E distribution) as compared with the measurements. (author)

  18. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2014-01-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model
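    The skeleton of the algorithm named above, simulate Hamiltonian dynamics with a symplectic integrator and then accept or reject with a Metropolis test, can be sketched on a toy target. The sketch uses a plain leapfrog integrator on a standard normal distribution; the RSV model and the 2MNI integrator of the paper are not reproduced here.

```python
import math
import random

# Hybrid (Hamiltonian) Monte Carlo with a plain leapfrog integrator,
# sampling a standard normal target U(x) = x^2 / 2. Illustrative only:
# not the RSV model or the 2MNI integrator of the paper.
def grad_u(x):
    return x                                     # gradient of U(x) = x^2 / 2

def hmc_step(x, eps=0.2, n_steps=10):
    p = random.gauss(0.0, 1.0)                   # fresh momentum
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_u(x_new)           # leapfrog: half momentum step
    for _ in range(n_steps - 1):
        x_new += eps * p_new                     # full position step
        p_new -= eps * grad_u(x_new)             # full momentum step
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_u(x_new)           # final half momentum step
    h_old = 0.5 * x * x + 0.5 * p * p            # Hamiltonian before
    h_new = 0.5 * x_new * x_new + 0.5 * p_new * p_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):  # Metropolis test
        return x_new
    return x

random.seed(1)
x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```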

  19. Prediction of Monte Carlo errors by a theory generalized to treat track-length estimators

    International Nuclear Information System (INIS)

    Booth, T.E.; Amster, H.J.

    1978-01-01

    Present theories for predicting expected Monte Carlo errors in neutron transport calculations apply to estimates of flux-weighted integrals sampled directly by scoring individual collisions. To treat track-length estimators, the recent theory of Amster and Djomehri is generalized to allow the score distribution functions to depend on the coordinates of two successive collisions. It has long been known that the expected track length in a region of phase space equals the expected flux integrated over that region, but that the expected statistical error of the Monte Carlo estimate of the track length is different from that of the flux integral obtained by sampling the sum of the reciprocals of the cross sections for all collisions in the region. These conclusions are shown to be implied by the generalized theory, which provides explicit equations for the expected values and errors of both types of estimators. Sampling expected contributions to the track-length estimator is also treated. Other general properties of the errors for both estimators are derived from the equations and physically interpreted. The actual values of these errors are then obtained and interpreted for a simple specific example
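    The contrast between the two estimator types can be made concrete in a toy problem. The sketch below scores both the track-length estimator and the collision estimator (score 1/Σ per collision) for the flux integral over a region of a purely absorbing one-dimensional medium, an illustrative setup rather than one from the paper: both are unbiased for the same integral, but their sample variances differ.

```python
import math
import random

# Track-length vs. collision estimator for the same flux integral in a
# purely absorbing 1-D medium. Particles start at x = 0 moving in +x;
# both estimators score the integral of the uncollided flux over [A, B].
SIGMA = 1.0                      # total (= absorption) cross section, 1/cm
A, B = 1.0, 2.0                  # scoring region

random.seed(2)
N = 200_000
track_sum = track_sq = coll_sum = coll_sq = 0.0
for _ in range(N):
    d = -math.log(random.random()) / SIGMA       # flight distance to absorption
    t = max(0.0, min(d, B) - A)                  # track length inside [A, B]
    c = 1.0 / SIGMA if A <= d <= B else 0.0      # collision estimator score
    track_sum += t; track_sq += t * t
    coll_sum += c; coll_sq += c * c

track_mean, coll_mean = track_sum / N, coll_sum / N
track_var = track_sq / N - track_mean ** 2
coll_var = coll_sq / N - coll_mean ** 2
exact = (math.exp(-SIGMA * A) - math.exp(-SIGMA * B)) / SIGMA
```

    Both means converge to the same analytic integral, while the track-length variance comes out smaller in this configuration, the kind of difference the generalized theory predicts.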

  20. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r² singularity in the uncollided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering

  1. Estimation of coincidence and correlation in non-analogous Monte Carlo particle transport - 159

    International Nuclear Information System (INIS)

    Szieberth, M.; Leen Kloosterman, J.

    2010-01-01

    The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions and therefore they are not suited for non-Boltzmann problems like the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of a Feynman-α measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible. (authors)

  2. Estimation of Adjoint-Weighted Kinetics Parameters in Monte Carlo Wielandt Calculations

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Shim, Hyung Jin

    2013-01-01

    The effective delayed neutron fraction, β_eff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics parameter estimation methods using the self-consistent adjoint flux calculated in the MC forward simulations have been developed and successfully applied to research reactor analyses. However, these adjoint estimation methods, based on the cycle-by-cycle genealogical table, require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The effectiveness of the new method is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and the Godiva critical facility

  3. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimate using scikit-learn's KDTree, modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files are backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
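    The "likelihood free" idea at the core of ABC can be sketched in a few lines with plain rejection ABC: accept a prior draw whenever a forward-model simulation reproduces the observed summary statistic to within a tolerance. astroABC's SMC iterations, adaptive tolerances, and MPI parallelism are far beyond this toy, and the Gaussian data model below is an illustrative assumption.

```python
import random
import statistics

# Rejection ABC for a single parameter: no likelihood is ever evaluated;
# a forward-model simulation stands in for it. Illustrative toy only.
random.seed(3)
TRUE_MU = 2.0
observed = [random.gauss(TRUE_MU, 1.0) for _ in range(100)]
obs_stat = statistics.fmean(observed)            # summary statistic of the data

def simulate(mu, n=100):
    """Forward-model simulation standing in for an intractable likelihood."""
    return statistics.fmean([random.gauss(mu, 1.0) for _ in range(n)])

accepted = []
while len(accepted) < 300:
    mu = random.uniform(-5.0, 5.0)               # draw from a flat prior
    if abs(simulate(mu) - obs_stat) < 0.2:       # likelihood-free acceptance
        accepted.append(mu)

posterior_mean = statistics.fmean(accepted)
```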

  4. Evaluation of three Monte Carlo estimation schemes for flux at a point

    International Nuclear Information System (INIS)

    Kalli, H.J.; Cashwell, E.D.

    1977-09-01

    Three Monte Carlo estimation schemes were studied to avoid the difficulties caused by the (1/r²) singularity in the expression of the normal next-event estimator (NEE) for the flux at a point. A new, fast, once-more collided flux estimator (OMCFE) scheme, based on a very simple probability density function (p.d.f.) of the distance to collision in the selection of the intermediate collision points, is proposed. This kind of p.d.f. of the collision distance is used in two nonanalog schemes using the NEE. In these two schemes, which have principal similarities to some schemes proposed earlier in the literature, the (1/r²) singularity is canceled by incorporating the singularity into the p.d.f. of the collision points. This is achieved by playing a suitable nonanalog game in the neighborhood of the detector points. The three schemes were tested in a monoenergetic, homogeneous infinite-medium problem, then were evaluated in a point-cross-section problem by using the Monte Carlo code MCNG. 10 figures
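    The (1/r²) trouble that motivates these schemes is easy to reproduce. In the sketch below (illustrative geometry and cross section, not the paper's test problem), each first collision of an isotropic point-source history scores exp(-ΣR)/(4πR²) toward a point detector, and the rare collisions landing near the detector dominate the tally.

```python
import math
import random
import statistics

# Next-event (point-detector) estimator scores from an isotropic point
# source: every collision contributes exp(-Sigma*R) / (4*pi*R^2) toward
# the detector, so collisions that happen to land near the detector
# produce huge scores. Geometry and cross section are illustrative.
SIGMA = 1.0
DET = (1.0, 0.0, 0.0)            # point detector position

random.seed(4)
scores = []
for _ in range(100_000):
    mu = 2.0 * random.random() - 1.0             # isotropic direction cosine
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(1.0 - mu * mu)
    d = -math.log(random.random()) / SIGMA       # distance to first collision
    x, y, z = d * s * math.cos(phi), d * s * math.sin(phi), d * mu
    r = math.dist((x, y, z), DET)                # collision-to-detector distance
    scores.append(math.exp(-SIGMA * r) / (4.0 * math.pi * r * r))
```

    The heavy upper tail of these scores is exactly what makes the plain NEE variance unbounded and motivates the nonanalog games described in the abstract.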

  5. A functional method for estimating DPA tallies in Monte Carlo calculations of Light Water Reactors

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2011-01-01

    There has been a growing need in recent years for the development of methodology to calculate radiation damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss the development and implementation of a dpa method using the Monte Carlo method for transport calculations. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and fuel depletion, are assessed for radiation damage calculations, and its capability is demonstrated and compared to that of the Monte Carlo code MCNP for a typical LWR configuration. (author)

  6. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
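    The telescoping identity can be illustrated on a toy problem. The sketch below applies plain MLMC (without the SMC layer of the paper) to an Euler discretization of geometric Brownian motion, driving each coarse path with the summed fine-path noise so the level-difference terms have small variance; all parameters are illustrative.

```python
import math
import random

# Plain MLMC for E[S_T] of geometric Brownian motion under an Euler
# scheme with step T / 2^level: the telescoping sum
# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated with
# geometrically fewer samples on the expensive fine levels.
S0, R, VOL, T = 1.0, 0.05, 0.2, 1.0

def euler_pair(level):
    """Return coupled payoffs (P_level, P_{level-1}) on shared noise."""
    n_fine = 2 ** level
    h = T / n_fine
    s_f = s_c = S0
    dw_c = 0.0
    for step in range(n_fine):
        dw = random.gauss(0.0, math.sqrt(h))
        s_f += R * s_f * h + VOL * s_f * dw
        dw_c += dw
        if level > 0 and step % 2 == 1:          # coarse step per 2 fine steps
            s_c += R * s_c * 2 * h + VOL * s_c * dw_c
            dw_c = 0.0
    return s_f, s_c

random.seed(5)
L, N0 = 5, 40_000
estimate = 0.0
for level in range(L + 1):
    n = max(N0 // 4 ** level, 200)               # fewer samples on fine levels
    acc = 0.0
    for _ in range(n):
        fine, coarse = euler_pair(level)
        acc += fine if level == 0 else fine - coarse
    estimate += acc / n

exact = S0 * math.exp(R * T)                     # E[S_T] for GBM
```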

  8. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90

  9. Monte Carlo simulation for the estimation of iron in human whole ...

    Indian Academy of Sciences (India)

    2017-02-10

    Feb 10, 2017 ... Monte Carlo N-particle (MCNP) code has been used to simulate the transport of gamma photon rays ... experimental data, and better than the theoretical XCOM values. ... tions in the materials, according to probability density.

  10. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
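    The batch idea in the last sentence can be sketched independently of any depletion code: run several independent calculations with different seeds and take the spread of the batch results as the overall statistical error. In the toy below the "calculation" is a simple Monte Carlo integral rather than a full depletion sequence.

```python
import math
import random
import statistics

# Batch method: the spread across independent Monte Carlo runs captures
# the total statistical error, propagation included. Toy stand-in:
# each run is an MC estimate of the integral of x^2 on [0, 1] (= 1/3).
def one_calculation(seed, n=10_000):
    rng = random.Random(seed)                    # independent stream per batch
    return sum(rng.random() ** 2 for _ in range(n)) / n

batch_results = [one_calculation(seed) for seed in range(20)]
mean = statistics.fmean(batch_results)
stderr = statistics.stdev(batch_results) / math.sqrt(len(batch_results))
```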

  11. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    International Nuclear Information System (INIS)

    Wang Yongbo; Wu Huapeng; Handroos, Heikki

    2011-01-01

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), of which six are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results of the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.
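    The estimation step can be sketched with a one-parameter stand-in for the 54-parameter error model: a random-walk Metropolis chain (one member of the MCMC family; the paper's actual sampler is not specified here) recovering a single offset from noisy simulated measurements.

```python
import math
import random
import statistics

# Random-walk Metropolis sketch: infer one "geometric error" parameter
# from noisy simulated measurements. Everything here is an illustrative
# one-dimensional stand-in for the paper's 54-parameter error model.
random.seed(6)
TRUE_OFFSET, NOISE = 0.3, 0.05
data = [TRUE_OFFSET + random.gauss(0.0, NOISE) for _ in range(50)]

def log_post(theta):
    """Log-posterior with a flat prior and Gaussian measurement noise."""
    return -sum((d - theta) ** 2 for d in data) / (2.0 * NOISE ** 2)

chain, theta = [], 0.0
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + random.gauss(0.0, 0.02)        # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior_mean = statistics.fmean(chain[5000:])   # discard burn-in
```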

  12. Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.

    Science.gov (United States)

    Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N

    2016-01-01

    Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health [1,2]. With the national dialogue around Medicaid expansion [12], the role of EHR data has become even more important. For their potential to be fully realized and contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, by using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina [11]. The Albuquerque data, which include patient addresses, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.

  13. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, March 2002, pp 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034.

  14. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.

  15. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  16. Estimation of axial diffusion processes by analog Monte-Carlo: theory, tests and examples

    International Nuclear Information System (INIS)

    Milgram, M.S.

    1997-01-01

    With the advent of fast, reasonably inexpensive computer hardware, it has become possible to follow the histories of several million particles and tally quantities such as currents and fluxes in a finite reactor region using analog Monte-Carlo. Here this capability is used to demonstrate that it is possible to test various approximations that cumulatively are known as the axial diffusion approximation in a realistic, heterogeneous reactor lattice cell. From this, it proves possible to extract excellent estimates of the homogenized diffusion coefficient in few energy groups and lattice sub-regions for further comparison with deterministic methods of deriving the same quantity. The breakdown of the diffusion approximation near the endpoints of the axial lattice cell, as well as in the moderator at certain energies, can be observed. (Author)

  17. Photon dose estimation from ultraintense laser–solid interactions and shielding calculation with Monte Carlo simulation

    International Nuclear Information System (INIS)

    Yang, Bo; Qiu, Rui; Li, JunLi; Lu, Wei; Wu, Zhen; Li, Chunyan

    2017-01-01

    When a strong laser beam irradiates a solid target, a hot plasma is produced and high-energy electrons are usually generated (the so-called “hot electrons”). These energetic electrons subsequently generate hard X-rays in the solid target through the Bremsstrahlung process. To date, only limited studies have been conducted on this laser-induced radiological protection issue. In this study, extensive literature reviews on the physics and properties of hot electrons have been conducted. On the basis of these information, the photon dose generated by the interaction between hot electrons and a solid target was simulated with the Monte Carlo code FLUKA. With some reasonable assumptions, the calculated dose can be regarded as the upper boundary of the experimental results over the laser intensity ranging from 10^19 to 10^21 W/cm^2. Furthermore, an equation to estimate the photon dose generated from ultraintense laser–solid interactions based on the normalized laser intensity is derived. The shielding effects of common materials including concrete and lead were also studied for the laser-driven X-ray source. The dose transmission curves and tenth-value layers (TVLs) in concrete and lead were calculated through Monte Carlo simulations. These results could be used to perform a preliminary and fast radiation safety assessment for the X-rays generated from ultraintense laser–solid interactions. - Highlights: • The laser–driven X-ray ionizing radiation source was analyzed in this study. • An equation to estimate the photon dose based on the laser intensity is given. • The shielding effects of concrete and lead were studied for this new X-ray source. • The aim of this study is to analyze and mitigate the laser–driven X-ray hazard.

  18. Monte Carlo next-event point flux estimation for RCP01

    International Nuclear Information System (INIS)

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-01-01

    Two next event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty

  19. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors. In other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
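    The core cost-control idea, stopping the simulation once the desired accuracy level is reached rather than fixing the number of realizations in advance, can be sketched as follows; the target is a toy integral, not the breakwater or canal models of the paper.

```python
import math
import random

# Accuracy-driven stopping: draw realizations only until the running
# standard error of the estimate falls below a requested tolerance.
# Toy target: E[X^3] = 1/4 for X ~ U(0, 1).
def run_until(tol, rng, max_n=1_000_000):
    total = total_sq = 0.0
    n = 0
    while n < max_n:
        x = rng.random() ** 3                    # one realization
        total += x
        total_sq += x * x
        n += 1
        if n >= 100:                             # wait for a stable variance
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if math.sqrt(var / n) < tol:
                break
    return total / n, n

rng = random.Random(9)
est_loose, n_loose = run_until(5e-3, rng)        # coarse accuracy: few samples
est_tight, n_tight = run_until(1e-3, rng)        # fine accuracy: many samples
```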

  20. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
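    One of the general efficiency-improving techniques such a review covers is antithetic variates. The sketch below estimates the integral of exp(x) over [0, 1] with and without antithetic pairing at the same sample budget; the negative correlation within each pair cancels much of the noise.

```python
import math
import random
import statistics

# Antithetic variates: both estimators target I = integral of exp(x)
# on [0, 1], i.e. e - 1. Pairing u with 1 - u gives negatively
# correlated scores and a lower variance at the same sample budget.
def plain(n, rng):
    return statistics.fmean([math.exp(rng.random()) for _ in range(n)])

def antithetic(n, rng):
    vals = []
    for _ in range(n // 2):
        u = rng.random()
        vals.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))  # antithetic pair
    return statistics.fmean(vals)

rng = random.Random(7)
plain_runs = [plain(1000, rng) for _ in range(200)]
anti_runs = [antithetic(1000, rng) for _ in range(200)]
```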

  1. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  2. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
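    The interpolation between molecular dynamics and canonical Monte Carlo is easiest to see in Creutz's demon formulation: a single extra degree of freedom does the energy bookkeeping, so no random acceptance probability is ever evaluated. A minimal sketch for a 1-D Ising chain follows (parameters illustrative).

```python
import random

# Creutz's microcanonical ("demon") algorithm for a 1-D Ising chain:
# a flip is accepted whenever the demon can pay the energy difference,
# so the total energy (spins + demon) is conserved exactly and no
# acceptance probability is ever computed.
random.seed(8)
N = 1000
spins = [1] * N                  # start in the all-up ground state
demon = 20                       # demon's energy reservoir, must stay >= 0

def energy():
    """Ising energy E = -sum_i s_i * s_{i+1} with periodic boundaries."""
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

def flip_cost(i):
    """Energy change of flipping spin i."""
    return 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])

e_total = energy() + demon       # conserved total energy
for _ in range(200_000):
    i = random.randrange(N)
    d_e = flip_cost(i)
    if demon >= d_e:             # demon pays for (or absorbs) the change
        spins[i] = -spins[i]
        demon -= d_e

magnetization = abs(sum(spins)) / N
```

    The only randomness consumed per step is the choice of spin, which is why the method is so fast for discrete systems and so insensitive to random number quality, as the abstract notes.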

  3. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
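
To make the "direct" approach concrete, here is a toy sketch (not the authors' code) that follows the buildup of a supercritical branching population and estimates α from the logarithmic derivative of the population; the fission probability, multiplicity, and step count are illustrative assumptions:

```python
import math
import random

def population_alpha(p_fission=0.55, nu=2, steps=12, n0=20000, seed=1):
    """Toy supercritical chain: each neutron is absorbed or causes fission
    producing `nu` neutrons per step, so the expected per-step growth
    factor is k = nu * p_fission and alpha = ln(k) per unit time."""
    rng = random.Random(seed)
    counts, n = [], n0
    for _ in range(steps):
        n = sum(nu for _ in range(n) if rng.random() < p_fission)
        counts.append(n)
    logs = [math.log(c) for c in counts]
    # Logarithmic derivative of the population = slope of log N vs. time.
    return (logs[-1] - logs[0]) / (len(logs) - 1)

alpha = population_alpha()  # expected growth rate: ln(1.1) ≈ 0.095
```

Shrinking `n0` or moving `p_fission` toward 0.5 (a near-critical system) makes the estimate visibly noisier, which is exactly why the abstract calls the direct approach "expensive and noisy" near critical.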

  4. On the use of Monte Carlo-derived dosimetric data in the estimation of patient dose from CT examinations

    International Nuclear Information System (INIS)

    Perisinakis, Kostas; Tzedakis, Antonis; Damilakis, John

    2008-01-01

    The purpose of this work was to investigate the applicability and appropriateness of Monte Carlo-derived normalized data to provide accurate estimations of patient dose from computed tomography (CT) exposures. Monte Carlo methodology and mathematical anthropomorphic phantoms were used to simulate standard patient CT examinations of the head, thorax, abdomen, and trunk performed on a multislice CT scanner. Phantoms were generated to simulate the average adult individual and two individuals with different body sizes. Normalized dose values for all radiosensitive organs and normalized effective dose values were calculated for standard axial and spiral CT examinations. Discrepancies in CT dosimetry using Monte Carlo-derived coefficients originating from the use of: (a) Conversion coefficients derived for axial CT exposures, (b) a mathematical anthropomorphic phantom of standard body size to derive conversion coefficients, and (c) data derived for a specific CT scanner to estimate patient dose from CT examinations performed on a different scanner, were separately evaluated. The percentage differences between the normalized organ dose values derived for contiguous axial scans and the corresponding values derived for spiral scans with pitch=1 and the same total scanning length were up to 10%, while the corresponding percentage differences in normalized effective dose values were less than 0.7% for all standard CT examinations. The normalized organ dose values for standard spiral CT examinations with pitch 0.5-1.5 were found to differ from the corresponding values derived for contiguous axial scans divided by the pitch, by less than 14% while the corresponding percentage differences in normalized effective dose values were less than 1% for all standard CT examinations. 
Normalized effective dose values for the standard contiguous axial CT examinations derived by Monte Carlo simulation were found to considerably decrease with increasing body size of the mathematical phantom

  5. Report on some methods of determining the state of convergence of Monte Carlo risk estimates

    International Nuclear Information System (INIS)

    Orford, J.L.; Hufton, D.; Johnson, K.

    1991-05-01

    The Department of the Environment is developing a methodology for assessing potential sites for the disposal of low and intermediate level radioactive wastes. Computer models are used to simulate the groundwater transport of radioactive materials from a disposal facility back to man. Monte Carlo methods are being employed to conduct a probabilistic risk assessment (PRA) of potential sites. The models calculate time histories of annual radiation dose to the critical group population, which in turn specifies the annual individual risk. The distribution of dose is generally highly skewed, and many simulation runs are required to predict the level of confidence in the risk estimate, i.e. to determine whether the risk estimate has converged. This report describes some statistical methods for determining the state of convergence of the risk estimate, including the Shapiro-Wilk test, calculation of skewness and kurtosis, and normal probability plots. A method for forecasting the number of samples needed before the risk estimate converges is presented. Three case studies were conducted to examine the performance of some of these techniques. (author)
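
Skewness and kurtosis checks of the kind mentioned above are easy to sketch with nothing but the standard library. The surrogate "dose" distribution below is a lognormal chosen for illustration, not data from the report:

```python
import math
import random

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n, m = len(xs), sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero for a normal sample)."""
    n, m = len(xs), sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3.0

random.seed(2)
# Surrogate "annual dose" sample: lognormal, i.e. highly right-skewed.
doses = [random.lognormvariate(0.0, 1.0) for _ in range(20000)]
log_doses = [math.log(d) for d in doses]

raw_skew = skewness(doses)        # large and positive: far from normal
log_skew = skewness(log_doses)    # near zero: approximately normal
log_kurt = excess_kurtosis(log_doses)
```

A strongly positive sample skewness signals that the mean-risk estimate may not yet be converged, while values near zero (as for the log-transformed sample) are consistent with a roughly normal, converged regime.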

  6. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
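
The rendezvous bottleneck described above can be captured in a crude per-cycle timing model (an invented illustration, not the paper's measurements): tracking work splits across processors while the end-of-cycle synchronization cost is assumed to grow with their number, so speedup peaks and then falls.

```python
def cycle_speedup(n_proc, t_track=1.0, t_sync=0.02):
    """Per-cycle time model: tracking work divides across processors,
    while the end-of-cycle rendezvous (gathering the fission bank and
    broadcasting the new source) is assumed to grow linearly with the
    number of processors. All constants are illustrative."""
    serial = t_track + t_sync
    parallel = t_track / n_proc + t_sync * n_proc
    return serial / parallel

speedups = {p: cycle_speedup(p) for p in (1, 4, 8, 16, 32)}
```

With these assumed constants the modeled speedup peaks near seven processors and then degrades, mirroring the qualitative behavior reported above: beyond a certain processor count, adding hardware makes the calculation slower.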

  8. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation.
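
The core idea, combining a Monte Carlo flux estimate with the analytic solution of the depletion equation, can be sketched for a single isotope, where the analytic solution is N(t+Δt) = N(t)·exp(−σφΔt). The flux noise model and all constants below are illustrative assumptions, not values from the paper:

```python
import math
import random

def deplete(n0, sigma, phi, dt):
    """Analytic single-isotope depletion over one time step, driven by a
    Monte Carlo estimate of the scalar flux. The exponential form keeps
    the density non-negative for any flux estimate."""
    return n0 * math.exp(-sigma * phi * dt)

random.seed(3)
# Toy flux estimate: mean of noisy per-history track-length scores
# (illustrative magnitudes: ~1e14 n/cm^2/s with 10% per-history noise).
phi_hat = sum(random.gauss(1.0e14, 1.0e13) for _ in range(10000)) / 10000
n_new = deplete(n0=1.0e24, sigma=1.0e-22, phi=phi_hat, dt=1.0e7)
```

Because the analytic exponential is used rather than a finite-difference update, the density stays positive no matter how noisy the flux estimate is, which is the "eliminates negative densities" property claimed above.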

  9. Monte Carlo estimation of the influence of elastic scattering anisotropy on the neutron flux in a nuclear reactor cell

    Energy Technology Data Exchange (ETDEWEB)

    Kocic, A [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1974-07-01

    Anisotropy of neutron elastic scattering is a problem of special importance when the Boltzmann transport equation is solved numerically. This is not the case when the Monte Carlo method is applied. The influence of elastic scattering anisotropy on the neutron flux is estimated in order to justify the application of the Monte Carlo method, which is computer-time consuming. A correlation procedure was applied to study this influence. A one-group case was used as an example to enable comparison with other methods.

  10. Estimation of parameters and basic reproduction ratio for Japanese encephalitis transmission in the Philippines using sequential Monte Carlo filter

    Science.gov (United States)

    We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...

  11. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  12. Dose estimation of patients in CT examinations using EGS4 Monte-Carlo simulation of voxel phantom

    International Nuclear Information System (INIS)

    Akahane, K.; Kai, M.; Kusama, T.; Saito, K.

    2002-01-01

    A voxel phantom based on CT images of one Japanese male has been developed at the Japan Atomic Energy Research Institute. Dose calculations for patients in X-ray CT examinations were performed using the voxel phantom and the EGS4 Monte-Carlo simulation code. The organ doses of the patients were estimated.

  14. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  15. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  16. Using Monte Carlo Simulation To Improve Cargo Mass Estimates For International Space Station Commercial Resupply Flights

    Science.gov (United States)

    2016-12-01

    Resupplying the International Space Station ... management priorities. This study addresses those challenges by developing Monte Carlo simulations based on over 13 years of as-flown ISS resupply

  17. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    International Nuclear Information System (INIS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk absorption, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying the starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on the spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  18. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. 
For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
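
The second VRT described above, following only a fraction of the optical photons while scaling their statistical weight, can be sketched as follows. The detection probability and photon counts are illustrative, and this is not the fastdetect2 implementation:

```python
import random

def detected_signal(n_photons, p_detect, follow_fraction, rng):
    """Follow only `follow_fraction` of the optical photons and give the
    survivors weight 1/follow_fraction, which leaves the expected
    detected signal per photon unchanged."""
    weight = 1.0 / follow_fraction
    signal = 0.0
    for _ in range(n_photons):
        if rng.random() < follow_fraction:   # photon is transported
            if rng.random() < p_detect:      # ...and reaches the detector
                signal += weight
    return signal / n_photons

rng = random.Random(4)
analog = detected_signal(200000, 0.3, 1.0, rng)   # no VRT
vrt = detected_signal(200000, 0.3, 0.25, rng)     # follow 25% of photons
```

Both estimates agree in expectation, but the VRT run transports only about a quarter of the photons at the price of a somewhat larger variance; the efficiency trade-off between computing time and variance is what the technical note quantifies through the Swank factor.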

  19. A punctual flux estimator and reactions rates optimization in neutral particles transport calculus by the Monte Carlo method

    International Nuclear Information System (INIS)

    Authier, N.

    1998-12-01

    One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine the accessibility of working persons in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in materials (iron reactor vessels, medical therapy, electronics in satellites). The flux and reaction-rate estimators used in Monte Carlo codes give average values over volumes or surfaces of the geometrical description of the system. But in certain configurations, pointwise estimates of deposited energy and dose are necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation with unbounded variance. The central limit theorem cannot be applied, so no easy confidence level may be calculated, and the convergence rate is very poor. We propose in this study a new solution for the photon flux-at-a-point estimator. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that our new sampling schemes, specially developed to treat the anisotropy of photon coherent scattering, are necessary for a good and regular behavior of the estimator. These developments, integrated in the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased point estimate on media interfaces. (author)

  20. Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation

    Science.gov (United States)

    Iwano, Takayuki; Umeyama, Shinji

    2015-12-01

    fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS is becoming common; using LEDs, equipment can be less expensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues depending on the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
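
The crosstalk mechanism, recovering hemoglobin changes from a two-wavelength modified Beer-Lambert system with imperfect path lengths, can be sketched as below. The extinction coefficients and partial path lengths are made-up illustrative numbers, not tabulated values:

```python
def invert(dod, eps, path):
    """Solve the two-wavelength modified Beer-Lambert system
    dOD_i = (eps_HbO_i * dHbO + eps_Hb_i * dHb) * path_i
    for (dHbO, dHb) by Cramer's rule."""
    a11, a12 = eps[0][0] * path[0], eps[0][1] * path[0]
    a21, a22 = eps[1][0] * path[1], eps[1][1] * path[1]
    det = a11 * a22 - a12 * a21
    dhbo = (dod[0] * a22 - dod[1] * a12) / det
    dhb = (a11 * dod[1] - a21 * dod[0]) / det
    return dhbo, dhb

# Made-up coefficients and partial path lengths for two wavelengths.
eps = [(0.7, 1.1), (1.0, 0.8)]
true_path = [5.9, 5.6]
# Synthetic measurement generated from a pure dHbO = 1, dHb = 0 change.
dod = [(0.7 * 1.0) * 5.9, (1.0 * 1.0) * 5.6]
exact = invert(dod, eps, true_path)    # recovers (1, 0)
biased = invert(dod, eps, [5.9, 5.9])  # wrong path length -> crosstalk
```

With the correct path lengths the pure ΔHbO change is recovered exactly; assuming the wrong path length at one wavelength leaks part of the signal into ΔHb. That leakage is the crosstalk the study estimates, and it is what averaging the extinction coefficients over the LED spectrum is meant to suppress.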

  1. Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment

    International Nuclear Information System (INIS)

    Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.

    2002-01-01

    During refuelling operations in a nuclear power plant, filtration is used to remove non-soluble radionuclides contained in the water from the reactor pool. Filter cartridges accumulate high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. Therefore, it is convenient to estimate those doses to prevent risks and to apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing contaminated filters and estimate doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code is performed: the influence on the evaluated doses of the distance from the drum and of interposed shielding barriers is studied. The source term is also analysed to check the importance of the isotope composition. Two different geometric models have been considered in order to simplify calculations. Results have been compared with dose measurements in the plant in order to validate the calculation procedure. This work was developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO, in the frame of an R&D project sponsored by IBERINCO

  2. Monte Carlo estimation of neoclassical transport for the TJ-II stellarator

    International Nuclear Information System (INIS)

    Tribaldos, V.

    2001-01-01

    The neoclassical transport properties of the TJ-II stellarator [C. Alejaldre et al., Fusion Technol. 13, 521 (1988)] are studied with the monoenergetic Monte Carlo technique. From the computational point of view, a compromise between the number of modes and particles and the required computing time shows that about one thousand particles and one hundred harmonics are needed to obtain reliable estimates of the monoenergetic diffusion coefficients, because of the rich magnetic-field structure of TJ-II, although these requirements are probably too demanding for routine transport estimations. The database containing the normalized monoenergetic diffusion coefficients for several radial positions, radial electric fields, and collisionalities has been fitted using a neural network. This fit reduces the number of points needed in the database and allows smooth interpolation and extrapolation when performing the convolutions of the monoenergetic coefficients with the Maxwellian. For two typical TJ-II discharges, the ambipolar radial electric field and the neoclassical particle and heat fluxes are presented, both showing rather large positive radial electric fields at the plasma core and small negative fields at the edge. The neoclassical particle and energy confinement times are in surprisingly good agreement with the experimental energy balance analysis and the international stellarator scaling. Although no satisfactory explanation is yet available, the large neoclassical diffusion caused by the complex ripple structure of the TJ-II magnetic field may be an important ingredient

  3. Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.

    Science.gov (United States)

    Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J

    2000-01-01

    This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using the C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero very significantly. For females, the expected gain was 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. To the extent that the cost-effectiveness ratio is considered acceptable, screening of males seems justified. However, our assumptions about the growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.

  4. Estimation of snow albedo reduction by light absorbing impurities using Monte Carlo radiative transfer model

    Science.gov (United States)

    Sengupta, D.; Gao, L.; Wilcox, E. M.; Beres, N. D.; Moosmüller, H.; Khlystov, A.

    2017-12-01

    Radiative forcing and climate change depend greatly on the Earth's surface albedo and its temporal and spatial variation. The surface albedo varies widely with surface characteristics, ranging from 5-10% for calm ocean waters to 80% for some snow-covered areas. Clean, fresh snow surfaces have the highest albedo and are most sensitive to contamination with light absorbing impurities, which can greatly reduce the surface albedo and change overall radiative forcing estimates. Accurate estimation of snow albedo, as well as an understanding of feedbacks on climate from changes in snow-covered areas, is important for radiative forcing, the snow energy balance, and predicting seasonal snowmelt and runoff rates. Such information is essential to inform timely decision making by stakeholders and policy makers. Light absorbing particles deposited onto the snow surface can greatly alter snow albedo and have been identified as a major contributor to regional climate forcing where seasonal snow cover is involved. However, the uncertainty associated with quantifying the albedo reduction caused by these light absorbing particles is high. Here, we use Mie theory (under the assumption of spherical snow grains) to reconstruct the single scattering parameters of snow (i.e., the single scattering albedo ω̃ and asymmetry parameter g) from observation-based size distribution information and retrieved refractive index values. The single scattering parameters of impurities are extracted with the same approach from datasets obtained during laboratory combustion of biomass samples. Instead of using plane-parallel approximation methods to account for multiple scattering, we have used a simple Monte Carlo ray/photon tracing approach to calculate the snow albedo. This simple approach treats multiple scattering as a collection of single scattering events. Using this approach, we vary the effective snow grain size and impurity concentrations to explore the evolution of snow albedo over a wide

  5. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
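
A one-dimensional caricature of the importance sampling idea above (a static analogue of the Girsanov change of measure, not the authors' controlled-dynamics scheme): sample from a shifted density aimed at the failure region and correct each hit with the likelihood ratio.

```python
import math
import random

def failure_prob_is(threshold, shift, n, seed=5):
    """Importance sampling for P(X > threshold) with X ~ N(0,1): sample
    from N(shift, 1) and reweight each sample by the likelihood ratio
    exp(-shift*x + shift^2/2), the change-of-measure correction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n

p_is = failure_prob_is(threshold=4.0, shift=4.0, n=50000)
```

Shifting the sampling mean to the failure threshold makes roughly half the samples "fail", so the rare probability P(X > 4) ≈ 3.2e-5 is recovered with a few percent relative error from 5·10^4 samples, whereas crude Monte Carlo would see about one failure per 30,000 samples. Choosing the shift to minimize a variance bound is the role played by the FORM-like controls in the study.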

  6. IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.

    Science.gov (United States)

    Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio

    2016-03-01

    The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determining the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic treatment is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it varies with a known confidence level. Consequently the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow as to make the data statistically robust and therefore usable also as a reference for benchmarks with other countries. From a methodological point of view the approach was rigorous: both Activity Based Costing, for determining the cost of the individual activities of the process, and Monte Carlo simulation, with control of the experimental error, were used for the construction of the tolerance intervals on the final result.
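
The combination of Activity Based Costing with Monte Carlo simulation can be sketched as follows; the activities, triangular duration distributions, and hourly rates are invented for illustration and are not the paper's IVF data:

```python
import random
import statistics

ACTIVITIES = [  # (min, mode, max) duration in hours, hourly rate
    (0.5, 1.0, 2.0, 120.0),   # consultation
    (0.5, 0.8, 1.5, 400.0),   # oocyte retrieval
    (1.0, 2.0, 4.0, 250.0),   # laboratory phase
    (0.3, 0.5, 1.0, 150.0),   # embryo transfer
]

def cycle_cost(rng):
    """One realization of the cycle cost: sum over activities of a
    random (triangular) duration times that activity's hourly rate."""
    return sum(rng.triangular(low, high, mode) * rate
               for low, mode, high, rate in ACTIVITIES)

rng = random.Random(6)
costs = sorted(cycle_cost(rng) for _ in range(20000))
mean_cost = statistics.fmean(costs)
# Empirical 95% interval around the mean cost.
lo, hi = costs[int(0.025 * len(costs))], costs[int(0.975 * len(costs))]
```

Sorting the simulated costs and reading off the 2.5% and 97.5% quantiles gives the kind of tolerance interval around the mean that the study reports for each cycle type.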

  7. Estimation of Compton Imager Using Single 3D Position-Sensitive LYSO Scintillator: Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taewoong; Lee, Hyounggun; Kim, Younghak; Lee, Wonho [Korea University, Seoul (Korea, Republic of)

    2017-07-15

    The performance of a Compton imager using a single three-dimensional position-sensitive LYSO scintillator detector was estimated using a Monte Carlo simulation. The Compton imager consisted of a single LYSO scintillator with a pixelized structure. The sizes of the scintillator and of each pixel were 1.3 × 1.3 × 1.3 cm{sup 3} and 0.3 × 0.3 × 0.3 cm{sup 3}, respectively. The order of γ-ray interactions was determined based on the deposited energies in each detector. After the determination of the interaction sequence, various reconstruction algorithms, such as simple back-projection, filtered back-projection, and list-mode maximum-likelihood expectation maximization (LM-MLEM), were applied and compared with each other in terms of angular resolution and signal-to-noise ratio (SNR) for several γ-ray energies. The LM-MLEM reconstruction algorithm exhibited the best performance for Compton imaging, maintaining high angular resolution and SNR. Two {sup 137}Cs (662 keV) sources could be distinguished if they were more than 17° apart. The reconstructed Compton images showed the precise position and distribution of various radiation isotopes, demonstrating the feasibility of monitoring nuclear materials in homeland security and radioactive waste management applications.

  8. Estimation of the Thermal Process in the Honeycomb Panel by a Monte Carlo Method

    Science.gov (United States)

    Gusev, S. A.; Nikolaev, V. N.

    2018-01-01

    A new Monte Carlo method for estimating the thermal state of heat insulation containing honeycomb panels is proposed. The heat transfer in the honeycomb panel is described by a boundary value problem for a parabolic equation with a discontinuous diffusion coefficient and boundary conditions of the third kind. To obtain an approximate solution, it is proposed to smooth the diffusion coefficient; the resulting problem is then solved on the basis of its probabilistic representation, namely the expectation of a functional of the diffusion process corresponding to the boundary value problem. Solving the problem thus reduces to numerical statistical modelling of a large number of trajectories of the diffusion process corresponding to the parabolic problem. The Euler method was used earlier for this purpose, but it requires a large computational effort. In this paper the method is modified by combining the Euler method with the random walk on moving spheres method. The new approach significantly reduces the computational cost.
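The probabilistic representation underlying the method can be sketched for the simplest case of a constant diffusion coefficient, where the expectation of a functional of the diffusion process recovers the parabolic solution; the smoothing of a discontinuous coefficient and the moving-spheres acceleration are beyond this illustration.

```python
import math, random

random.seed(0)

# Feynman-Kac representation of u_t = D * u_xx with u(0, x) = f(x):
# u(t, x) = E[ f(X_t) ], where dX = sqrt(2D) dW.  For f(x) = x^2 the exact
# solution is u(t, x) = x^2 + 2*D*t, which we use to check the estimate.
D, t, x0 = 0.5, 1.0, 1.0
f = lambda x: x * x

def euler_path(x, n_steps=100):
    # Euler simulation of one trajectory of the diffusion process.
    h = t / n_steps
    for _ in range(n_steps):
        x += math.sqrt(2.0 * D * h) * random.gauss(0.0, 1.0)
    return x

n_traj = 50_000
u_mc = sum(f(euler_path(x0)) for _ in range(n_traj)) / n_traj
u_exact = x0 * x0 + 2.0 * D * t
print(u_mc, u_exact)
```

The cost of simulating many Euler trajectories is precisely what motivates the paper's combination with the random walk on moving spheres.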

  9. On Monte Carlo estimation of radiation damage in light water reactor systems

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2010-01-01

    There has been a growing need in recent years for the development of methodologies to calculate damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss and highlight the main issues associated with the calculation of radiation damage factors using the Monte Carlo method. Among these issues are: particle tracking and tallying in complex geometries, dpa calculation methodology, coupled fuel depletion, and uncertainty propagation. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and burnup, are assessed for radiation damage calculations, and its capability is demonstrated and compared to that of the MCNP code for dpa calculations of a typical LWR configuration involving the core vessel and the downcomer. (author)

  10. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero-variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero-variance solutions for a single tally. One often wants to get low-variance solutions for more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero-variance biasing for both tallies in the same Monte Carlo run, instead of in two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated, whereas particles with similar tallies stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems.

  11. Estimation of skyshine dose from turbine building of BWR plant using Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Yuji, Nemoto; Toshihisa, Tsukiyama; Shigeki, Nemezawa [Hitachi. Ltd., Saiwai-cho, Hitachi (Japan); Tadashi, Yamasaki; Hidetsugu, Okada [Chubu Electric Power Company, Inc., Odaka-cho, Midori-ku Nagoya (Japan)

    2007-07-01

    The Monte Carlo N-Particle transport code (MCNP) was adopted to calculate the skyshine dose from the turbine building of a BWR plant in order to obtain precise estimates at the site boundary. In the MCNP calculation, the equipment and piping arranged on the operating floor of the turbine building were considered and modeled in detail. The inner and outer walls of the turbine building, the shielding materials around the high-pressure turbine, and the piping connecting the moisture separator to the low-pressure turbine were all considered. A three-step study was conducted to assess the applicability of the MCNP code. The first step was confirming the adequacy of the calculation models. The atmospheric relief diaphragms, which are installed on top of the low-pressure turbine exhaust hood, are not included in the calculation model; there was, however, little difference between the skyshine dose distributions calculated with and without the atmospheric relief diaphragms. The calculated dose rates agreed well with the measurements taken around the turbine. The second step was estimating the dose rates on the outer roof surface of the turbine building. This calculation was made to confirm the dose distribution of gamma-rays on the turbine roof before they are scattered into the air. The calculated dose rates agreed well with the measured data. The third step was a final confirmation, comparing calculations and measurements of skyshine dose rates around the turbine building. The source terms of the main steam system are based on measured activity data for N-16 and C-15. In conclusion, reasonable skyshine dose rates could be calculated using the MCNP code. (authors)

  12. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    Science.gov (United States)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (via the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, sampling annual extremes from the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show considerable differences between the extreme values for a given percentile under the conventional and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
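A stripped-down version of the MC super-ensemble idea, using a Gumbel (zero-shape GEV) annual-maximum distribution whose location parameter trends over an assumed 50-year design lifespan, looks like the sketch below; all parameter values are hypothetical, not fitted to any watershed.

```python
import math, random

random.seed(7)

# Each year of the design lifespan has its own extreme-value parameters;
# under the non-stationary scenario the location parameter trends upward.
lifespan = 50                           # years
mu0, sigma, trend = 100.0, 15.0, 0.5    # illustrative Gumbel parameters

def gumbel(mu, sig):
    # Inverse-CDF sample of one annual maximum from a Gumbel distribution.
    return mu - sig * math.log(-math.log(random.random()))

def lifetime_max(stationary):
    # Largest annual extreme over one realization of the design lifespan.
    return max(gumbel(mu0 if stationary else mu0 + trend * yr, sigma)
               for yr in range(lifespan))

n_real = 10_000                         # MC realizations of the lifespan
stat = sorted(lifetime_max(True) for _ in range(n_real))
nonstat = sorted(lifetime_max(False) for _ in range(n_real))
q = int(0.99 * n_real)                  # 99th-percentile design value
print(stat[q], nonstat[q])
```

The non-stationary design value exceeds the stationary one, and it grows with the assumed lifespan, mirroring the sensitivity noted in the abstract.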

  13. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    Science.gov (United States)

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as standard deviation and bias, are calculated; the representations are also used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out on the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented; with the aid of this model and the Monte Carlo techniques, the uncertainty of the dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
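The Monte Carlo uncertainty analysis described above can be sketched generically: draw the input reading from its assumed error distribution, push each draw through a calibration curve, and summarize the resulting dose distribution numerically. The power-law calibration and all numbers below are hypothetical, not the paper's fitted curves.

```python
import random, statistics

random.seed(3)

def calibration(net_od):
    # Hypothetical single-channel power-law calibration: dose [Gy] vs
    # net optical density.
    return 10.0 * net_od ** 1.3

od_true, od_sigma = 0.40, 0.01   # assumed reading and its uncertainty

# Monte Carlo propagation: each draw of the input yields one dose sample,
# so the output probability density is represented numerically.
doses = [calibration(random.gauss(od_true, od_sigma)) for _ in range(50_000)]
d_mean = statistics.fmean(doses)
d_std = statistics.stdev(doses)
bias = d_mean - calibration(od_true)   # bias induced by the nonlinearity
print(d_mean, d_std, bias)
```

From the same sample one can also inspect the shape of the output distribution (skewness, tails), which is the advantage of the numerical representation over analytic propagation.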

  14. Organ doses for reference pediatric and adolescent patients undergoing computed tomography estimated by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel J.; Bolch, Wesley E.

    2012-01-01

    Purpose: To establish an organ dose database for pediatric and adolescent reference individuals undergoing computed tomography (CT) examinations by using Monte Carlo simulation. The data will permit rapid estimates of organ and effective doses for patients of different age, gender, examination type, and CT scanner model. Methods: The previously published Monte Carlo simulation model of a Siemens Sensation 16 CT scanner was employed as the base CT scanner model. A set of absorbed doses for 33 organs/tissues, normalized to the product of 100 mAs and CTDIvol (mGy/100 mAs·mGy), was established by coupling the CT scanner model with age-dependent reference pediatric hybrid phantoms. A series of single axial scans from the top of the head to the feet of the phantoms was performed at a slice thickness of 10 mm and at tube potentials of 80, 100, and 120 kVp. Using the established CTDIvol- and 100 mAs-normalized dose matrix, organ doses for different pediatric phantoms undergoing head, chest, abdomen-pelvis, and chest-abdomen-pelvis (CAP) scans with the Siemens Sensation 16 scanner were estimated and analyzed. The results were then compared with values obtained from three independent published methods: CT-Expo software, organ dose for abdominal CT scans derived empirically from patient abdominal circumference, and effective dose per dose-length product (DLP). Results: Organ and effective doses were calculated and normalized to 100 mAs and CTDIvol for different CT examinations. At the same technical settings, doses to organs entirely included in the CT beam coverage were 40 to 80% higher for the newborn phantom than for the 15-year phantom. An increase of tube potential from 80 to 120 kVp resulted in a 2.5-2.9-fold greater brain dose for head scans. The results from this study were compared with three different published studies and/or techniques. First, organ doses were compared to those given by CT-Expo, which revealed dose differences up to

  15. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    Science.gov (United States)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the

  16. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of various methods of generating them. To account for the weight function involved in the Monte Carlo calculation, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have solutions available, either exact or approximate, against which the calculations using the Monte Carlo method can be compared. The comparisons show that good agreement has been obtained for the models considered.
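The Metropolis weighting mentioned above can be illustrated with a minimal sampler for an unnormalized Gaussian weight, whose second moment is known exactly:

```python
import math, random

random.seed(5)

# Minimal Metropolis sampler for the unnormalized weight w(x) = exp(-x^2/2).
# The chain's samples estimate E[x^2] = 1 for the standard normal target.
def metropolis(n_samples, step=1.0):
    x, out = 0.0, []
    for _ in range(n_samples):
        x_new = x + random.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, w(x_new)/w(x)), done in log space.
        if math.log(random.random()) < 0.5 * (x * x - x_new * x_new):
            x = x_new
        out.append(x)                             # rejected moves repeat x
    return out

chain = metropolis(100_000)
second_moment = sum(x * x for x in chain) / len(chain)
print(second_moment)
```

Because neighbouring samples of the chain are correlated, the effective sample size is smaller than the chain length, a point the randomness experiments in the article probe empirically.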

  17. SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chibani, O; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Eldib, A [Fox Chase Cancer Center, Philadelphia, PA (United States); University Cairo (Egypt); Mora, G [de Lisboa, Codex, Lisboa (Portugal)

    2014-06-01

    Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor-targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45 MV) photon and (45 MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied to electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce a minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose-rate effects on PpIX response.
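The simple corroborating model can be made concrete by evaluating the Frank-Tamm photon yield over the 375-425 nm band; the refractive index below is an assumed water-like tissue value, and beta is taken as 1 for megavoltage electrons.

```python
import math

# Frank-Tamm photon yield in the 375-425 nm band (the peak of the PpIX
# absorption spectrum) per unit electron path length:
#   dN/dx = 2*pi*alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta*n)^2)
alpha = 1.0 / 137.036            # fine-structure constant
n, beta = 1.37, 1.0              # assumed tissue refractive index, v/c
lam1, lam2 = 375e-9, 425e-9      # band edges [m]

dN_dx = (2.0 * math.pi * alpha
         * (1.0 / lam1 - 1.0 / lam2)
         * (1.0 - 1.0 / (beta * n) ** 2))      # photons per metre
photons_per_cm = dN_dx * 1e-2
print(photons_per_cm, "Cerenkov photons per cm in 375-425 nm")
```

A yield of a few tens of photons per centimetre of electron track, combined with the assumed sub-millimetre optical penetration, underlies the conclusion that the Cerenkov contribution is several orders of magnitude below the ionization dose.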

  18. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
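The multilevel Monte Carlo principle (many cheap coarse samples plus a few samples of a coupled fine-minus-coarse correction) can be sketched on a toy stochastic problem unrelated to pore-scale flow; geometric Brownian motion is used here only because its mean is known in closed form.

```python
import math, random

random.seed(11)

# Two-level MLMC sketch: estimate E[S_T] for geometric Brownian motion with
# an Euler scheme.  The telescoping identity E[P_fine] = E[P_coarse]
# + E[P_fine - P_coarse] lets most samples live on the cheap coarse level.
S0, r, sig, T = 1.0, 0.05, 0.2, 1.0

def euler_pair(n_fine):
    """Fine path (n_fine steps) and coarse path (n_fine/2 steps) driven by
    the same Brownian increments, so their difference has small variance."""
    h = T / n_fine
    s_f = s_c = S0
    carry = 0.0
    for i in range(n_fine):
        dw = math.sqrt(h) * random.gauss(0.0, 1.0)
        s_f *= 1.0 + r * h + sig * dw
        carry += dw
        if i % 2 == 1:              # one coarse step per two fine steps
            s_c *= 1.0 + r * 2.0 * h + sig * carry
            carry = 0.0
    return s_f, s_c

n0, n1 = 40_000, 4_000              # many coarse samples, few corrections
level0 = sum(euler_pair(2)[0] for _ in range(n0)) / n0
level1 = sum(f - c for f, c in (euler_pair(4) for _ in range(n1))) / n1
mlmc_estimate = level0 + level1
exact = S0 * math.exp(r * T)
print(mlmc_estimate, exact)
```

Because the coupled difference has far smaller variance than the fine-level quantity itself, the correction needs an order of magnitude fewer samples, which is the source of the cost reduction the abstract claims.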

  19. Estimation of whole-body radiation exposure from brachytherapy for oral cancer using a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Ozaki, Y.; Watanabe, H.; Kaida, A.; Miura, M.; Nakagawa, K.; Toda, K.; Yoshimura, R.; Sumi, Y.; Kurabayashi, T.

    2017-01-01

    Early-stage oral cancer can be cured with oral brachytherapy, but the whole-body radiation exposure status has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with {sup 192}Ir hairpins and {sup 198}Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with the results from the Oncentra manual Low Dose Rate Treatment Planning (mLDR) software used in day-to-day clinical practice. We successfully obtained data on the absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with {sup 192}Ir hairpins and {sup 198}Au grains, respectively. Simulation with PHITS was reliable when compared with the alternative computational technique using the mLDR software. We concluded that the absorbed dose for each organ and the whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained.

  20. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  1. Estimation of induced activity in super conducting cyclotron at VECC: Monte Carlo calculations

    International Nuclear Information System (INIS)

    Chatterjee, S.; Bandyopadhyay, T.

    2011-01-01

    The Super Conducting Cyclotron (K500) at the Variable Energy Cyclotron Centre, Kolkata (VECC) is at an advanced stage of commissioning and has successfully delivered many internal beams (light to heavy particles) up to the extraction radius. One of the external beam lines has already been completed and commissioned, and the SCC along with the other subsystems is getting ready to deliver beam for experiments. Beam is lost during beam tuning and also during steady beam delivery to the different experimental facilities due to many operational factors and requirements; extraction of the beam from the machine into the beam line is also a major contributor to beam loss. The lost accelerated charged particles interact with machine parts, undergoing different nuclear reactions and producing induced activity in different parts of the machine. Moreover, secondary neutrons, with energies up to at most the beam energy, also produce radioactivity in different parts of the machine as well as in the radiation shield constructed around the accelerator. Radio-activation of different parts of the accelerator, of the radiation shield walls around the machine, and of materials available in the room poses a radiation hazard inside the machine vault and experimental caves, especially during maintenance of the machine, when occupational workers are exposed to the induced activity produced during operation. An effort was made to estimate this residual activity and dose for 80 MeV protons incident on stainless steel, and thereby the activity produced in different materials such as Cu, Al, concrete, and SS of different thicknesses; Cu, Al, and SS are among the materials widely used in the fabrication of any accelerator. The scenario was simulated with a Monte Carlo approach using FLUKA 2006.3b, a general-purpose multi-particle transport code.
Estimation

  2. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  3. Estimation of the impact of manufacturing tolerances on burn-up calculations using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bock, M.; Wagner, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Garching (Germany). Forschungszentrum

    2012-11-01

    In recent years, the availability of computing resources has increased enormously. There are two ways to take advantage of this increase in analyses in the field of the nuclear fuel cycle, such as burn-up calculations or criticality safety calculations. The first is to improve the accuracy of the models that are analyzed; for burn-up calculations this means that the goal of modeling and calculating the burn-up of a full reactor core is coming more and more within reach. The second is to run state-of-the-art programs with simplified models several times, but with varied input parameters. This second way makes the Monte Carlo-based assessment of uncertainties and sensitivities applicable to fields of research that rely heavily on either high CPU usage or high memory consumption. In the context of the nuclear fuel cycle, applications that involve such demanding analyses are again burn-up and criticality safety calculations. The assessment of uncertainties in burn-up analyses can complement traditional analysis techniques, such as best-estimate or bounding-case analyses, and can support the safety analysis in future design decisions, e.g. by analyzing the uncertainty of the decay heat power of the nuclear inventory stored in the spent fuel pool of a nuclear power plant. This contribution concentrates on the uncertainty analysis of burn-up calculations of PWR fuel assemblies. The uncertainties in the results arise from the variation of the input parameters: on the one hand, the manufacturing tolerances present in the different production stages of the fuel assemblies are varied; on the other hand, uncertainties that describe the conditions during reactor operation, which also affect the results of burn-up calculations, are taken into account.
In order to perform uncertainty analyses in burn-up calculations, GRS has improved the capabilities of its general

  4. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S{sub MC}. S{sub MC} is fit to a function, S{sub F}, and if the fit of S{sub F} is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S{sub F}, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a

  5. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
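
    A bootstrap of this kind is straightforward to sketch. The toy below (hypothetical per-run timing data and a plain ratio-of-means efficiency gain, not the paper's brachytherapy quantities) resamples two sets of run costs and reports the shortest interval covering 95% of the bootstrap distribution:

```python
import random
import statistics

def shortest_interval(samples, level=0.95):
    """Shortest interval containing a `level` fraction of the samples."""
    s = sorted(samples)
    k = int(level * len(s))
    # slide a window of k points and keep the narrowest one
    best = min(range(len(s) - k), key=lambda i: s[i + k] - s[i])
    return s[best], s[best + k]

def bootstrap_gain(cost_a, cost_b, n_boot=2000, seed=0):
    """Bootstrap distribution of the efficiency gain mean(cost_a)/mean(cost_b)."""
    rng = random.Random(seed)
    gains = []
    for _ in range(n_boot):
        ra = [rng.choice(cost_a) for _ in cost_a]
        rb = [rng.choice(cost_b) for _ in cost_b]
        gains.append(statistics.mean(ra) / statistics.mean(rb))
    return gains

# hypothetical per-run costs: conventional MC vs correlated sampling
rng = random.Random(1)
conventional = [rng.gauss(10.0, 1.0) for _ in range(50)]
correlated = [rng.gauss(2.0, 0.5) for _ in range(50)]
gains = bootstrap_gain(conventional, correlated)
lo, hi = shortest_interval(gains)
print(f"gain ~ {statistics.mean(gains):.2f}, shortest 95% CI ({lo:.2f}, {hi:.2f})")
```

    With heavy-tailed cost distributions the bootstrap interval becomes markedly asymmetric, which is exactly the situation a symmetric parametric approximation handles poorly.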

  6. Dose estimation in the crystalline lens of industrial radiography personnel using Monte Carlo Method

    International Nuclear Information System (INIS)

    Lima, Alexandre Roza de

    2014-01-01

    The International Commission on Radiological Protection, ICRP, in its publication 103, reviewed recent epidemiological evidence and indicated that, for the eye lens, the absorbed dose threshold for induction of late detriment is around 0.5 Gy. On this basis, on April 21, 2011, the ICRP recommended changes to the occupational dose limit in planned exposure situations, reducing the eye lens equivalent dose limit from 150 mSv to 20 mSv per year, averaged over a period of 5 years, with exposure not to exceed 50 mSv in a single year. This paper presents the estimation of the eye lens dose, Hp(10), effective dose and doses to important organs in the body received by industrial gamma radiography workers during planned or accidental exposure situations. The computer program Visual Monte Carlo was used and two relevant scenarios were postulated. The first is a planned exposure situation in which the operator is directly exposed to radiation during the operation; 12 radiographic exposures per day for 250 days per year were considered, leading to an exposure of 36,000 seconds, or 10 hours, per year. The simulation was carried out using the following parameters: a 192Ir source with 1.0 TBq of activity and a source/operator distance varying from 5 m to 10 m at three different heights of 0.2 m, 1.0 m and 2.0 m. The eye lens doses were estimated as being between 16.9 mSv/year and 66.9 mSv/year, and the Hp(10) doses were between 17.7 mSv/year and 74.2 mSv/year. For the accidental exposure situation, the same radionuclide and activity were used, but in this case the doses were calculated with and without a collimator. The heights above ground considered were 1.0 m, 1.5 m and 2.0 m, the source/operator distance was 40 cm, and the exposure time was 74 seconds. The eye lens doses at 1.5 m were 12.3 mGy and 0.28 mGy without and with a collimator, respectively. Three conclusions resulted from this work. The first was that the estimated doses show that the new

  7. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
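
    The telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] can be illustrated with a deliberately artificial toy, in which "level" is the truncation order of a Taylor series rather than the discretization of a probability law; the coupling is simply the reuse of the same draw at consecutive levels:

```python
import math
import random

def payoff(x, level):
    """Level-l approximation of e^x: Taylor series with `level`+2 terms."""
    return sum(x ** k / math.factorial(k) for k in range(level + 2))

def mlmc_estimate(max_level, samples_per_level, rng):
    """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], where each
    correction term uses the SAME draws at both levels (the coupling)."""
    total = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            x = rng.random()
            fine = payoff(x, level)
            coarse = payoff(x, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

rng = random.Random(42)
# geometric sample allocation: most of the work on the cheap coarse levels
est = mlmc_estimate(4, [40_000, 10_000, 2_500, 600, 150], rng)
print(f"MLMC estimate of E[e^U] = {est:.3f} (exact: {math.e - 1:.3f})")
```

    Because the level-l corrections have rapidly shrinking variance, far fewer samples are needed on the expensive fine levels than plain i.i.d. sampling of P_L would require for the same error.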

  8. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  9. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  10. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    Energy Technology Data Exchange (ETDEWEB)

    Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz [Physics, Faculty of Mathematics and Natural Sciences, University of Jember, Jl. Kalimantan Kampus Bumi Tegal Boto, Jember 68181 (Indonesia); Srigutomo, Wahyu [Complex system and earth physics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Munandar, Arif [Kelompok Program Penelitian Panas Bumi, PSDG, Badan Geologi, Kementrian ESDM, Jl. Soekarno Hatta No. 444 Bandung 40254 (Indonesia)

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the nature of uncertainties in input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of geothermal resources is in the range of 139.30-218.24 MWe, with the most likely value being 177.77 MWe. The risk of resource capacity above 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km² with a probability of 80%.
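
    A volumetric Monte Carlo of this sort reduces to drawing each input of a deterministic formula from a distribution and reading percentiles off the resulting sample. The sketch below uses a generic area × power-density product with made-up parameter ranges, not the SNI formula or the Lawu Mount inputs:

```python
import random

def one_draw(rng):
    """One stochastic evaluation of a generic volumetric formula
    (illustrative parameter ranges, not the SNI values)."""
    area = rng.uniform(15.0, 19.0)                   # km^2
    power_density = rng.triangular(7.0, 12.0, 9.5)   # MWe/km^2, mode at 9.5
    recovery = rng.uniform(0.9, 1.1)                 # dimensionless adjustment
    return area * power_density * recovery

rng = random.Random(7)
draws = sorted(one_draw(rng) for _ in range(10_000))
p10, p50, p90 = (draws[int(q * len(draws))] for q in (0.10, 0.50, 0.90))
print(f"P10 = {p10:.1f} MWe, median = {p50:.1f} MWe, P90 = {p90:.1f} MWe")
```

    Statements such as "the risk of capacity above X is less than 10%" then correspond directly to the P90 of the sampled distribution.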

  11. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    Science.gov (United States)

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

    This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distribution based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).

  12. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  13. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
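
    The classic textbook instance of the equivalence described above is the recovery of a purely deterministic quantity, π, from the expected behaviour of a stochastic system (uniform darts on the unit square):

```python
import random

def estimate_pi(n, rng):
    """pi/4 equals the probability that a uniform point in the unit square
    falls inside the quarter circle; invert that expectation."""
    hits = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(f"pi ~ {estimate_pi(100_000, random.Random(0)):.3f}")
```

    The problem solved here is entirely deterministic; the Monte Carlo formulation is "contrived" in exactly the sense the abstract describes, yet the estimator converges at the usual 1/√n rate.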

  14. Estimating the occurrence of foreign material in Advanced Gas-cooled Reactors: A Bayesian Monte Carlo approach

    International Nuclear Information System (INIS)

    Mason, Paolo

    2014-01-01

    Highlights: • The amount of a specific type of foreign material found in UK AGRs has been estimated. • The estimate is based on very few instances of detection in numerous inspections. • A Bayesian Monte Carlo approach was used. • The study supports safety case claims on coolant flow impairment. • The methodology is applicable to any inspection campaign on any plant system. - Abstract: The current occurrence of a particular sort of foreign material in eight UK Advanced Gas-cooled Reactors has been estimated by means of a parametric approach. The study includes both variability, treated in analytic fashion via the combination of standard probability distributions, and the uncertainty in the parameters of the model of choice, whose posterior distribution was inferred in Bayesian fashion by means of a Monte Carlo route consisting in the conditional acceptance of sets of model parameters drawn from a prior distribution based on engineering judgement. The model underlying the present study specifically refers to the re-loading and inspection routines of UK Advanced Gas-cooled Reactors. The approach to inference here presented, however, is of general validity and can be applied to the outcome of any inspection campaign on any plant system, and indeed to any situation in which the outcome of a stochastic process is more easily simulated than described by a probability density or mass function
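
    The "conditional acceptance of sets of model parameters drawn from a prior distribution" is a rejection scheme that is easy to sketch in miniature. The toy below (a hypothetical detection probability, a uniform engineering-judgement prior, and a binomial forward model standing in for the re-loading and inspection routines) keeps exactly those prior draws whose simulated inspection outcome reproduces the observed count:

```python
import random
import statistics

def simulate_detections(p_occurrence, n_inspections, rng):
    """Forward model: detections in n independent inspections."""
    return sum(1 for _ in range(n_inspections) if rng.random() < p_occurrence)

def rejection_posterior(observed, n_inspections, n_draws, rng):
    """Accept prior draws whose simulated outcome matches the observation."""
    accepted = []
    for _ in range(n_draws):
        p = rng.uniform(0.0, 0.1)  # hypothetical engineering-judgement prior
        if simulate_detections(p, n_inspections, rng) == observed:
            accepted.append(p)
    return accepted

rng = random.Random(3)
post = rejection_posterior(observed=2, n_inspections=200, n_draws=20_000, rng=rng)
print(f"accepted {len(post)} draws, posterior mean p ~ {statistics.mean(post):.4f}")
```

    This is the sense in which the outcome of the stochastic process only needs to be simulated, never written down as a probability mass function.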

  15. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations are inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
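
    The inverse relation between accuracy and correlation can be made concrete with an effective-sample-size calculation. The sketch below uses the lag-1 (AR(1)) shortcut for the autocorrelation correction, which is cruder than what full MCMC diagnostics do but shows the mechanism:

```python
import random
import statistics

def mc_error(chain):
    """Monte Carlo standard error inflated by lag-1 autocorrelation
    (AR(1) shortcut; production diagnostics sum over many lags)."""
    n = len(chain)
    mean = statistics.mean(chain)
    var = statistics.pvariance(chain)
    lag1 = sum((chain[i] - mean) * (chain[i + 1] - mean) for i in range(n - 1)) / (n * var)
    ess = n * (1 - lag1) / (1 + lag1)   # effective sample size
    return (var / ess) ** 0.5

# AR(1) chain mimicking correlated MCMC output, vs independent draws
rng = random.Random(5)
x, correlated = 0.0, []
for _ in range(20_000):
    x = 0.95 * x + rng.gauss(0.0, 1.0)
    correlated.append(x)
independent = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
print(f"correlated MC error:  {mc_error(correlated):.3f}")
print(f"independent MC error: {mc_error(independent):.3f}")
```

    A reparameterization that lowers the chain's autocorrelation, as proposed in the paper, raises the effective sample size and shrinks the Monte Carlo error for the same number of iterations.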

  16. Remaining Useful Life Estimation of Li-ion Battery for Energy Storage System Using Markov Chain Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongjin; Kim, Seok Goo; Choi, Jooho; Lee, Jaewook [Korea Aerospace Univ., Koyang (Korea, Republic of); Song, Hwa Seob; Park, Sang Hui [Hyosung Corporation, Seoul (Korea, Republic of)

    2016-10-15

    Remaining useful life (RUL) estimation of the Li-ion battery has gained great interest because it is necessary for quality assurance, operation planning, and determination of the exchange period. This paper presents the RUL estimation of a Li-ion battery for an energy storage system using an exponential function for the degradation model and a Markov Chain Monte Carlo (MCMC) approach for parameter estimation. The MCMC approach depends on information such as model initial parameters and input setting parameters, which strongly affect the estimation result. To overcome this difficulty, this paper offers a guideline for model initial parameters based on the regression result, and MCMC input parameters derived by comparisons with a thorough search of theoretical results.

  17. Remaining Useful Life Estimation of Li-ion Battery for Energy Storage System Using Markov Chain Monte Carlo Method

    International Nuclear Information System (INIS)

    Kim, Dongjin; Kim, Seok Goo; Choi, Jooho; Lee, Jaewook; Song, Hwa Seob; Park, Sang Hui

    2016-01-01

    Remaining useful life (RUL) estimation of the Li-ion battery has gained great interest because it is necessary for quality assurance, operation planning, and determination of the exchange period. This paper presents the RUL estimation of a Li-ion battery for an energy storage system using an exponential function for the degradation model and a Markov Chain Monte Carlo (MCMC) approach for parameter estimation. The MCMC approach depends on information such as model initial parameters and input setting parameters, which strongly affect the estimation result. To overcome this difficulty, this paper offers a guideline for model initial parameters based on the regression result, and MCMC input parameters derived by comparisons with a thorough search of theoretical results.
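
    A minimal version of this workflow (an assumed exponential capacity-fade model with synthetic data and a random-walk Metropolis sampler, not the authors' battery data or tuning) looks as follows; note how the chain starts from rough initial values of the kind a regression fit would supply:

```python
import math
import random

def log_post(a, b, data, noise_sd=0.01):
    """Gaussian log-likelihood for capacity ~ a*exp(-b*cycle), flat priors."""
    if a <= 0.0 or b <= 0.0:
        return -math.inf
    return -0.5 * sum((y - a * math.exp(-b * t)) ** 2 for t, y in data) / noise_sd ** 2

# synthetic degradation data generated from known parameters
rng = random.Random(11)
true_a, true_b = 1.0, 0.002
data = [(t, true_a * math.exp(-true_b * t) + rng.gauss(0.0, 0.01))
        for t in range(0, 500, 25)]

# random-walk Metropolis, initialised from rough regression-style guesses
a, b, samples = 0.9, 0.001, []
lp = log_post(a, b, data)
for i in range(20_000):
    a_new, b_new = a + rng.gauss(0.0, 0.005), b + rng.gauss(0.0, 0.00005)
    lp_new = log_post(a_new, b_new, data)
    if math.log(rng.random()) < lp_new - lp:
        a, b, lp = a_new, b_new, lp_new
    if i >= 10_000:            # discard burn-in
        samples.append((a, b))
post_a = sum(s[0] for s in samples) / len(samples)
post_b = sum(s[1] for s in samples) / len(samples)
print(f"posterior means: a ~ {post_a:.3f}, b ~ {post_b:.5f}")
```

    Poor starting values or badly scaled proposal widths slow convergence dramatically, which is why the guideline for choosing them matters.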

  18. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange

    Science.gov (United States)

    Hula, Andreas; Montague, P. Read; Dayan, Peter

    2015-01-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429

  19. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.

    Science.gov (United States)

    Hula, Andreas; Montague, P Read; Dayan, Peter

    2015-06-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.

  20. Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Domenico, E-mail: domenico.bruno@cnr.it [Istituto di Metodologie Inorganiche e dei Plasmi, Consiglio Nazionale delle Ricerche– Via G. Amendola 122, 70125 Bari (Italy); Frezzotti, Aldo, E-mail: aldo.frezzotti@polimi.it; Ghiroldi, Gian Pietro, E-mail: gpghiro@gmail.com [Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano–Via La Masa 34, 20156 Milano (Italy)

    2015-05-15

    Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve the predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties via different equilibrium and non-equilibrium CT-DSMC based simulations, which yield closely agreeing values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.

  1. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

    Science.gov (United States)

    Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

    2014-11-01

    A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of presenting the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated using the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 showed much higher accumulation during the first 30 min of sunlight exposure. By contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    Science.gov (United States)

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it widely differs from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancies in depth dose distribution observed between the heterogeneous and the homogeneous plaques were 7.9% and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of the heterogeneous CCA and CCB plaques, respectively, with respect to the homogeneous counterpart. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  3. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  4. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
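
    The dynamically weighted estimator referenced here has, at its core, the self-normalized weighted-average form sketched below. The toy target (an exponential density on [0, 1] reached by weighting uniform draws) illustrates the form only, not the SAMC weighting scheme itself:

```python
import math
import random

def weighted_estimate(draws, log_weights, f):
    """Self-normalized weighted average: sum(w_i f(x_i)) / sum(w_i)."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]   # stabilise the exponentials
    return sum(wi * f(x) for wi, x in zip(w, draws)) / sum(w)

# weight uniform draws by exp(-x) to estimate E[X] under the
# exponential density restricted to [0, 1]
rng = random.Random(2)
xs = [rng.random() for _ in range(100_000)]
est = weighted_estimate(xs, [-x for x in xs], lambda x: x)
exact = (1 - 2 / math.e) / (1 - 1 / math.e)
print(f"estimate {est:.4f} vs exact {exact:.4f}")
```

    Working with log-weights and subtracting their maximum before exponentiating is the standard guard against overflow when weights span many orders of magnitude.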

  5. A Monte Carlo study comparing PIV, ULS and DWLS in the estimation of dichotomous confirmatory factor analysis.

    Science.gov (United States)

    Nestler, Steffen

    2013-02-01

    We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS. © 2012 The British Psychological Society.

  6. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  7. Strategije drevesnega preiskovanja Monte Carlo

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

    Following its breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods have triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...

  8. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    ias

    on the development of nuclear weapons in Los Alamos ... cantly improved the paper. ... Carlo simulations of solids, Reviews of Modern Physics, Vol. 73, pp. 33– ... The computer algorithms are usually based on a random seed that starts the ...

  9. MLE [Maximum Likelihood Estimator] reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    International Nuclear Information System (INIS)

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier, which allows the user to stop the iterative process before the images begin to deteriorate, is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.

  10. Estimation of absorbed doses from paediatric cone-beam CT scans: MOSFET measurements and Monte Carlo simulations.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang

    2010-03-01

    The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom and used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager((R)) was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. In the results, the MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.

  11. A mathematical model for the kidney and estimative of the specific absorbed fractions by Monte Carlo method

    International Nuclear Information System (INIS)

    Todo, A.S.

    1980-01-01

    Presently, the estimates of specific absorbed fractions in various organs of a heterogeneous phantom are based on Monte Carlo calculations for monoenergetic photons uniformly distributed in the organs of an adult phantom. However, it is known that the kidney and some other organs (for example, the skeleton) do not retain radionuclides in a uniform manner in their internal regions. We therefore developed a model for the kidney that includes the cortex, medulla and collecting region. This model was used to estimate the specific absorbed fractions, for monoenergetic photons or electrons, in various organs of a heterogeneous phantom when sources were uniformly distributed in each region of the kidney. All results obtained in this work were compared with those using a homogeneous model for the kidney as presented in ORNL-5000. (Author) [pt

  12. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  13. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  14. Exact Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H 2 , and the singlet-triplet splitting in methylene are presented and discussed. 17 refs

  15. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating the k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but they bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available.
The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  16. Some problems on Monte Carlo method development

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on some problems of Monte Carlo method development. The content covers deep-penetration problems, unbounded-estimate problems, limitations of Metropolis' method, the dependency problem in Metropolis' method, random error interference problems and random equations, and the intellectualisation and vectorization of general-purpose software.

  17. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
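
    A minimal sketch of one classic VRT, antithetic variates, on a toy integral E[exp(U)] with U ~ Uniform(0,1) (an assumed example, not one from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: I = E[exp(U)], U ~ Uniform(0,1); the exact value is e - 1.
n = 100_000
u = rng.uniform(size=n)

plain = np.exp(u)                           # crude Monte Carlo estimator
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))  # antithetic pairs: same mean, negatively correlated terms

print(f"plain:      {plain.mean():.5f}  (sample var {plain.var():.5f})")
print(f"antithetic: {anti.mean():.5f}  (sample var {anti.var():.5f})")
print(f"exact:      {np.e - 1:.5f}")
```

    Because exp is monotone, exp(U) and exp(1-U) are negatively correlated, so the pairwise average has the same expectation as the crude estimator but a much smaller variance.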

  18. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
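
    The collision-by-collision random walk described here can be sketched for a toy one-dimensional slab. The cross sections, slab thickness, and the simplified isotropic sampling of the direction cosine are all assumptions for illustration, not values from the text.

```python
import random

random.seed(0)

SIGMA_T, SIGMA_A = 1.0, 0.3  # assumed total and absorption cross sections (1/cm)
THICKNESS = 5.0              # assumed slab thickness (cm)

def history():
    """Follow one neutron, collision by collision, until absorption or leakage."""
    x, mu = 0.0, 1.0                              # enter normally at the left face
    while True:
        x += mu * random.expovariate(SIGMA_T)     # sample distance to next collision
        if x < 0.0:
            return "leak_back"
        if x > THICKNESS:
            return "leak_through"
        if random.random() < SIGMA_A / SIGMA_T:   # absorbed at the collision site
            return "absorbed"
        mu = random.uniform(-1.0, 1.0)            # isotropic scatter (1-D slab model)

n = 50_000
tally = {"absorbed": 0, "leak_back": 0, "leak_through": 0}
for _ in range(n):
    tally[history()] += 1
print({k: v / n for k, v in tally.items()})
```

    The tallied fractions are the analog estimates; statistical-weight tricks of the kind the abstract mentions modify this walk to reduce their variance.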

  19. Monte Carlo-based development of a shield and total background estimation for the COBRA experiment

    International Nuclear Information System (INIS)

    Heidrich, Nadine

    2014-11-01

    The COBRA experiment aims at the measurement of neutrinoless double beta decay and thus at the determination of the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be on the order of 10⁻³ counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13824 6-cm³ CdZnTe detectors. For the shield development, single-layer and multi-layer shields were investigated and a shield design was optimized with respect to high-energy muon-induced neutrons. The best design was found to be the combination of 10 cm of boron-doped polyethylene as the outermost layer, 20 cm of lead, and 10 cm of copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2×10⁻⁶ counts/kg/keV/yr. Additionally, the shield, with a thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contribution to the total background rate from the individual detector parts and background sources was investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles, with a contribution of 99 % in total. Regarding surface events a contribution of 26

  1. Monte Carlo simulation for the estimation of the glandular breast dose for a digital breast tomosynthesis system

    International Nuclear Information System (INIS)

    Rodrigues, Leonardo; Braz, Delson; Goncalves Magalhaes, Luis Alexandre

    2015-01-01

    Digital breast tomosynthesis (DBT) is a screening and diagnostic modality that acquires images of the breast at multiple angles during a short scan. The Selenia Dimensions (Hologic, Bedford, Mass) DBT system can perform both full-field digital mammography and DBT. The system acquires 15 projections over a 15 deg. angular range (from -7.5 deg. to +7.5 deg.). An important factor in determining the optimal imaging technique for breast tomosynthesis is the radiation dose. In breast imaging, the radiation dose of concern is that deposited in the glandular tissue of the breast, because this is the tissue at risk of developing cancer. The concept of the normalised mean glandular dose (DgN) has been introduced as the metric for the dose in breast imaging. The DgN is difficult to measure; Monte Carlo techniques offer an alternative method for a realistic estimation of the radiation dose. The purpose of this work was to use the Monte Carlo code MCNPX to generate monoenergetic glandular dose data for estimating the breast tissue dose in tomosynthesis for arbitrary spectra, as well as to observe the radiation dose deposited per projection in the glandular portion of the breast in a Selenia Dimensions DBT system. A Monte Carlo simulation of the system was developed to compute the DgN in a craniocaudal view. Monoenergetic X-ray beams from 10 to 49 keV in 1-keV increments were used. The simulation assumed a homogeneous breast composition and three compositions (0 % glandular, 50 % glandular and 100 % glandular). The glandular and adipose tissue compositions were specified according to ICRU Report 44. A skin layer of 4 mm was assumed to encapsulate the breast on all surfaces. The breast size was varied using the chest wall-to-nipple distance (CND) and compressed breast thickness (t). In this work, the authors assumed a CND of 5 cm and thicknesses ranging from 2 to 8 cm, in steps of 2 cm. The fractional energy absorption increases (up to 44

  2. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    International Nuclear Information System (INIS)

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-01-01

    Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

  3. Monte Carlo estimation of the dose and heating of cobalt adjuster rods irradiated in the CANDU 6 reactor core

    International Nuclear Information System (INIS)

    Gugiu, D.; Dumitrache, I.

    2005-01-01

    The present work is a part of a more complex project related to the replacement of the original stainless steel adjuster rods with cobalt assemblies in the CANDU 6 reactor core. The ⁶⁰Co produced by ⁵⁹Co irradiation could be used extensively in medicine and industry. The paper will mainly describe some of the reactor physics and safety requirements that must be carried into practice for the Co adjuster rods. The computations related to the neutronic equivalence of the stainless steel adjusters with the Co adjuster assemblies, as well as the estimations of the activity and heating of the irradiated cobalt rods, are performed using the Monte Carlo codes MCNP5 and MONTEBURNS 2.1. The activity values are used to evaluate the dose at the surface of the device designed to transport the cobalt adjusters. (authors)

  4. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    Science.gov (United States)

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate the neural signals to movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, etc., and leads to degeneration of the decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which suggests a promising way to design a long-term-performing model for Brain Machine Interface decoders.

  5. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    Science.gov (United States)

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
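
    For a small finite state space the proposed quantity can be sketched directly. The toy example below (assumed distributions, not the phylogenetic chains from the paper) uses the identity TV(p, q) = E_{x~p}[max(0, 1 - q(x)/p(x))] to turn the total variation distance into a Monte Carlo average over samples from p:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small discrete distributions; the exact TV distance is 0.5 * sum |p - q|.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
exact_tv = 0.5 * np.abs(p - q).sum()  # = 0.3

# Monte Carlo estimator: sample x ~ p, average max(0, 1 - q(x)/p(x)).
n = 200_000
x = rng.choice(len(p), size=n, p=p)
est = np.maximum(0.0, 1.0 - q[x] / p[x]).mean()
print(est, exact_tv)
```

    The point of the paper is that such sample-based estimators remain usable when the state space is far too large to sum over, which is where the direct formula breaks down.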

  6. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and the uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness in the model exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the parameters of the model can be better estimated for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
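
    A minimal sketch of the diminishing-adaptation idea behind adaptive Metropolis samplers, on a one-dimensional toy target rather than the EMI forward model (the target density, the initial proposal scale, and the acceptance-rate target of 0.44 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised log-density; a standard normal stands in for the posterior."""
    return -0.5 * x * x

def adaptive_metropolis(n_iter=20_000, target_acc=0.44):
    x, scale = 0.0, 5.0  # deliberately poor initial proposal scale
    chain = np.empty(n_iter)
    for i in range(1, n_iter + 1):
        prop = x + scale * rng.standard_normal()
        accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
        if accept:
            x = prop
        chain[i - 1] = x
        # Diminishing adaptation: nudge the proposal scale toward the target
        # acceptance rate with step sizes shrinking as 1/sqrt(i).
        scale *= np.exp((float(accept) - target_acc) / np.sqrt(i))
    return chain, scale

chain, scale = adaptive_metropolis()
burned = chain[10_000:]
print("scale:", scale, "mean:", burned.mean(), "sd:", burned.std())
```

    The shrinking adaptation steps preserve ergodicity while letting the sampler tune its own proposal, which is the practical appeal of adaptive MCMC for problems like the one above.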

  7. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically, and how to make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic
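
    The 'histories' picture described above — statistical uncertainty shrinking as the number of simulated events grows — is the textbook π estimate in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pi(n):
    """Each 'history' is one random point in the unit square."""
    pts = rng.uniform(size=(n, 2))
    inside = (pts ** 2).sum(axis=1) < 1.0    # does the point hit the quarter circle?
    p = inside.mean()
    stderr = 4.0 * np.sqrt(p * (1 - p) / n)  # binomial standard error, scaled by 4
    return 4.0 * p, stderr

for n in (1_000, 10_000, 100_000, 1_000_000):
    est, se = estimate_pi(n)
    print(f"{n:>9d} histories: pi ~ {est:.4f} +/- {se:.4f}")
```

    Each tenfold increase in histories cuts the standard error by roughly a factor of sqrt(10), the universal 1/sqrt(N) convergence rate of Monte Carlo averages.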

  8. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  9. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    Science.gov (United States)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems, e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation for LiFePO4 batteries are the flat characteristic of open-circuit voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distributions. In addition, uncertainties in the battery model parameters must be taken into account to describe battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against estimation with an Extended Kalman filter.
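
    A bootstrap Sequential Monte Carlo (particle) filter of the kind described can be sketched on a toy linear-Gaussian state-space model; the dynamics, noise levels, and particle count below are illustrative stand-ins for the battery SOC/impedance model, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model standing in for the battery model:
#   x_t = 0.95 * x_{t-1} + w_t,  w ~ N(0, 0.1^2)   (hidden state, e.g. SOC)
#   y_t = x_t + v_t,             v ~ N(0, 0.5^2)   (noisy measurement)
T, N = 100, 1_000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.95 * x_true[t - 1] + 0.1 * rng.standard_normal()
y = x_true + 0.5 * rng.standard_normal(T)

particles = rng.standard_normal(N)  # initial particle cloud from a broad prior
est = np.zeros(T)
for t in range(T):
    particles = 0.95 * particles + 0.1 * rng.standard_normal(N)  # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)           # likelihood weights
    w /= w.sum()
    est[t] = np.dot(w, particles)                                # filtered mean
    particles = particles[rng.choice(N, size=N, p=w)]            # multinomial resampling

rmse = np.sqrt(np.mean((est - x_true) ** 2))
print(f"filter RMSE: {rmse:.3f}  (measurement noise sd: 0.5)")
```

    Because the weights are arbitrary likelihood evaluations, nothing here requires Gaussian errors, which is exactly the flexibility the abstract credits the SMC filter with over Kalman-type approaches.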

  10. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and merits of continuous-energy Monte Carlo codes. (author)

  11. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  14. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  15. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time

  16. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  17. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  18. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc
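The elements listed above (a seeded pseudo-random sequence, sampling from a pdf, and multidimensional-style integration) can be sketched in a few lines. This is an illustrative toy, not taken from the record; the integrand and pdf are chosen only as examples.

```python
import math
import random

# Minimal sketch of the Monte Carlo recipe the abstract lists: a seeded
# pseudo-random sequence (prs) drives both plain integration over [0, 1]
# and inverse-transform sampling from a probability density function (pdf).

def mc_integrate(f, n, rng):
    """Estimate the integral of f over [0, 1] from n uniform samples."""
    return sum(f(rng.random()) for _ in range(n)) / n

def sample_exponential(rate, rng):
    """Inverse-transform sampling of the pdf rate * exp(-rate * x)."""
    return -math.log(1.0 - rng.random()) / rate

rng = random.Random(42)  # reproducible pseudo-random sequence
estimate = mc_integrate(math.exp, 200_000, rng)  # true value: e - 1
mean_exp = sum(sample_exponential(2.0, rng) for _ in range(200_000)) / 200_000
print(estimate, mean_exp)  # near e - 1 ~ 1.718 and 1/rate = 0.5
```

The same seeded generator makes the whole computation reproducible, which is the practical role of the pseudo-random sequence.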

  19. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. The cited work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).

  20. Geometrical splitting in Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Elperin, T.; Dudziak, D.J.

    1982-01-01

    A statistical model is presented by which a direct statistical approach yielded an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs
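The splitting game analyzed above can be illustrated with a single splitting surface in a purely absorbing slab. This is a toy sketch, not the paper's n-surface model: each particle reaching the surface is replaced by m copies of weight 1/m, and by the memoryless property of the exponential free path each copy resamples its remaining flight, so the weighted tally stays an unbiased estimate of the transmission exp(-x).

```python
import math
import random

# Toy one-surface splitting game (illustrative; the record treats the
# general n-surface case analytically). Slab thickness x in mean free
# paths, splitting surface at depth s, splitting factor m.

def transmission(x, s, m, n, rng):
    """Estimate exp(-x) with m-splitting at depth s (0 < s < x)."""
    tally = 0.0
    for _ in range(n):
        if rng.expovariate(1.0) > s:          # particle reaches the surface
            w = 1.0 / m
            for _ in range(m):                # m copies, total weight conserved
                if rng.expovariate(1.0) > x - s:
                    tally += w
    return tally / n

rng = random.Random(1)
est = transmission(3.0, 1.5, 4, 100_000, rng)
print(est, math.exp(-3.0))  # weighted tally agrees with exp(-3)
```

Varying m and s in this sketch is exactly the kind of in-code optimization of splitting parameters the abstract envisions.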

  1. Extending canonical Monte Carlo methods

    International Nuclear Information System (INIS)

    Velazquez, L; Curilef, S

    2010-01-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C α with α≈0.2 for the particular case of the 2D ten-state Potts model

  2. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  3. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  4. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

    We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until they are either absorbed or leave the spatial domain. In our method, particles are created from the original source with a variable weight that accounts for both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kind of applications: problems where streaming is dominant rather than collision-dominated problems

  5. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  6. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ⁺ → e⁺ ν_e ν̄ γ and π⁺ → e⁺ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  7. Fatigue damage estimation in non-linear systems using a combination of Monte Carlo simulation and the First Order Reliability Method

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2015-01-01

    For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient-of-variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time...
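The abstract's point about large COV from small simulation sets follows from the standard result for crude Monte Carlo estimation of a probability p from N independent runs: COV = sqrt((1 - p)/(p N)). A back-of-envelope check (not from the paper):

```python
import math

# Coefficient-of-variation of a crude Monte Carlo estimate of a
# probability p from N simulations: COV = sqrt((1 - p) / (p * N)).
# Rare events (small p) therefore need very many runs for a small COV.

def mc_cov(p, n):
    return math.sqrt((1.0 - p) / (p * n))

def runs_for_cov(p, target_cov):
    """Smallest N achieving the target COV."""
    return math.ceil((1.0 - p) / (p * target_cov**2))

print(mc_cov(1e-3, 1000))      # ~1.0: the estimate is useless
print(runs_for_cov(1e-3, 0.1)) # ~100,000 runs needed for 10% COV
```

This is exactly why the paper combines MCS with FORM instead of relying on brute-force sampling alone.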

  8. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate, as well as a strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of DB method to real world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
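The monotonicity idea behind dynamic bounds can be sketched on a toy limit state (the paper's DB method, with its strict probabilistic bounds and finite element application, is richer than this). If g is monotonically increasing in every input, a sample that componentwise dominates a known failure point must fail, and one dominated by a known safe point must be safe, so g need not be evaluated there; the limit state g below is a hypothetical example.

```python
import random

# Toy monotone Monte Carlo: classify samples by dominance against
# previously evaluated points to skip limit-state evaluations.

def g(u):                      # hypothetical increasing limit state
    return u[0] + u[1] - 1.5   # failure when g >= 0; true p = 0.125

def dominates(a, b):           # a >= b componentwise
    return all(x >= y for x, y in zip(a, b))

def monotone_mc(n, rng):
    fails, evals = 0, 0
    known_fail, known_safe = [], []
    for _ in range(n):
        u = (rng.random(), rng.random())
        if any(dominates(u, f) for f in known_fail):
            fails += 1                      # dominates a failure: must fail
        elif any(dominates(s, u) for s in known_safe):
            pass                            # dominated by a safe point: safe
        else:
            evals += 1
            if g(u) >= 0:
                fails += 1
                known_fail.append(u)
            else:
                known_safe.append(u)
    return fails / n, evals

rng = random.Random(7)
p_hat, evals = monotone_mc(20_000, rng)
print(p_hat, evals)  # same estimate as plain MC, far fewer g evaluations
```

Because the dominance classification is exact for a monotone g, the estimate is identical to running plain Monte Carlo on the same samples; only the number of (expensive) g evaluations drops.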

  9. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte, H.W.J.

    1996-01-01

    The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity

  10. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems

    Directory of Open Access Journals (Sweden)

    Christelle Garnier

    2008-05-01

    We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.
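The sequential Monte Carlo (particle filtering) machinery referred to above can be sketched on a generic toy model; the paper's Rao-Blackwellized filter for PHN/CFO is considerably more elaborate. Here a latent random walk x_t is observed as y_t = x_t + Gaussian noise, and a bootstrap particle filter propagates, weights, and resamples particles each step (all model parameters below are illustrative choices).

```python
import math
import random

# Generic bootstrap particle filter sketch: propagate particles through
# the state transition, weight by the observation likelihood, resample.

def particle_filter(ys, n_particles, q, r, rng):
    parts = [0.0] * n_particles
    estimates = []
    for y in ys:
        # propagate: x_t = x_{t-1} + N(0, q^2)
        parts = [x + rng.gauss(0.0, q) for x in parts]
        # weight by the Gaussian observation likelihood of y given x
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        total = sum(ws)
        estimates.append(sum(w * x for w, x in zip(ws, parts)) / total)
        # multinomial resampling proportional to the weights
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return estimates

rng = random.Random(3)
truth, ys, x = [], [], 0.0
for _ in range(200):
    x += rng.gauss(0.0, 0.1)
    truth.append(x)
    ys.append(x + rng.gauss(0.0, 0.5))

est = particle_filter(ys, 500, 0.1, 0.5, rng)
rmse_pf = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / 200)
rmse_obs = math.sqrt(sum((y - t) ** 2 for y, t in zip(ys, truth)) / 200)
print(rmse_pf, rmse_obs)  # the filtered estimate beats the raw observations
```

Rao-Blackwellization, as used in the paper, would additionally marginalize any conditionally linear-Gaussian part of the state analytically, leaving fewer dimensions to the particles.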

  11. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    International Nuclear Information System (INIS)

    El Kanawati, W; Létang, J M; Sarrut, D; Freud, N; Dauvergne, D; Pinto, M; Testa, É

    2015-01-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitters calculations in proton therapy. Prompt-γ emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role to help monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of the prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10⁵. (paper)

  12. Monte Carlo estimates of interfacial tension in the two-dimensional Ising model from non-equilibrium methods

    International Nuclear Information System (INIS)

    Híjar, Humberto; Sutmann, Godehard

    2008-01-01

    Non-equilibrium methods for estimating free energy differences are used in order to calculate the interfacial tension between domains with opposite magnetizations in two-dimensional Ising lattices. Non-equilibrium processes are driven by changing the boundary conditions for two opposite sides of the lattice from periodic to antiperiodic and vice versa. This mechanism, which promotes the appearance and disappearance of the interface, is studied by means of Monte Carlo simulations performed at different rates and using different algorithms, thus allowing for testing the applicability of non-equilibrium methods for processes driven far from or close to equilibrium. Interfaces in lattices with different widths and heights are studied and the interface tension as a function of these quantities is obtained. It is found that the estimates of the interfacial tension from non-equilibrium procedures are in good agreement with previous reports as well as with exact results. The efficiency of the different procedures used is analyzed and the dynamics of the interface under these perturbations is briefly discussed. A method for determining the efficiency of non-equilibrium methods as regards thermodynamic perturbation is also presented. It is found that for all cases studied, the Crooks non-equilibrium method for estimating free energy differences is the most efficient one

  13. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

    A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point source-point detector-systems are represented, and problems in applying the codes to practical tasks are discussed. (author)

  14. Monte Carlo Simulation of an American Option

    Directory of Open Access Journals (Sweden)

    Gikiri Thuo

    2007-04-01

    We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
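The idea of getting the value and a sensitivity from the same simulated paths can be shown with a pathwise gradient estimate. The paper treats an American call with a dividend; the sketch below uses a plain European call under geometric Brownian motion to keep it short, where the pathwise derivative of the discounted payoff with respect to the spot S0 is e^(-rT) · 1{S_T > K} · S_T/S0.

```python
import math
import random

# Pathwise estimate of a European call price and its delta from the
# same Monte Carlo paths (illustrative stand-in for the paper's
# American-option gradient estimators).

def price_and_delta(s0, k, r, sigma, t, n, rng):
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    price_sum = delta_sum = 0.0
    for _ in range(n):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if st > k:
            price_sum += disc * (st - k)
            delta_sum += disc * st / s0   # pathwise derivative of the payoff
    return price_sum / n, delta_sum / n

rng = random.Random(11)
price, delta = price_and_delta(100.0, 100.0, 0.05, 0.2, 1.0, 400_000, rng)
print(price, delta)  # Black-Scholes reference values: about 10.45 and 0.637
```

In the paper's setting, such gradient estimates feed a stochastic approximation iteration on the early-exercise threshold; here they are simply reported alongside the price.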

  15. MONTE CARLO SIMULATION APPLIED TO THE ESTIMATION OF VOLTAGE DIPS IN ELECTRIC NETWORKS

    Directory of Open Access Journals (Sweden)

    Miguel Arias Albornoz

    2008-09-01

    In this work, the Monte Carlo (MC) simulation method is applied to estimate the number of expected voltage dips in the nodes of an electric network. The estimates obtained through MC are compared with the results of another method of calculation, known as the Failure Position Method (MPF). The results show both the convergence of the MC algorithm to the long-term values of the MPF method and the complete distribution of frequencies for different events, which represents valuable information to support decision-making on equipment that is sensitive to this type of perturbation.

  16. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
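The coupling of samples across resolutions that the abstract describes can be sketched with the standard (non-sequential) multilevel estimator for an Euler-discretized SDE; the SMC layering of the record is not reproduced here. Level l uses 2^l Euler steps, and each fine/coarse pair shares the same Brownian increments, so the correction terms E[P_l − P_{l−1}] have small variance. The GBM example and all parameters are illustrative choices.

```python
import math
import random

# Minimal multilevel Monte Carlo sketch: telescoping sum over levels,
# with coupled fine/coarse Euler paths for dX = r*X dt + sigma*X dW.

def euler_gbm(x0, r, sigma, t, n_steps, dws):
    x, dt = x0, t / n_steps
    for dw in dws:
        x += r * x * dt + sigma * x * dw
    return x

def mlmc_mean(x0, r, sigma, t, max_level, samples_per_level, rng):
    total = 0.0
    for level in range(max_level + 1):
        n_fine = 2 ** level
        dt = t / n_fine
        acc = 0.0
        for _ in range(samples_per_level):
            dws = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_fine)]
            fine = euler_gbm(x0, r, sigma, t, n_fine, dws)
            if level == 0:
                acc += fine
            else:
                # coarse path driven by pairwise sums of the fine increments
                coarse_dws = [dws[2*i] + dws[2*i+1] for i in range(n_fine // 2)]
                acc += fine - euler_gbm(x0, r, sigma, t, n_fine // 2, coarse_dws)
        total += acc / samples_per_level
    return total

rng = random.Random(5)
est = mlmc_mean(1.0, 0.05, 0.2, 1.0, 6, 20_000, rng)
print(est, math.exp(0.05))  # estimate of E[X_1] = e^0.05
```

In a full implementation the number of samples per level would be optimized (fewer on expensive fine levels); the fixed count here keeps the sketch short.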

  18. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix, that is, the

  19. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  20. Autocorrelations in hybrid Monte Carlo simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan; Virotta, Francesco

    2010-11-01

    Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge, however, all observables are affected to various degree by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
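The slow modes discussed above inflate statistical errors through the integrated autocorrelation time tau_int = 1 + 2·Σ_k ρ(k). A standard windowed estimate of tau_int is sketched below on an AR(1) chain (an illustration of the general error-inflation effect, not the authors' improved analysis method); for AR(1) with coefficient phi the exact value is (1 + phi)/(1 − phi).

```python
import random

# Windowed estimate of the integrated autocorrelation time of a chain.

def tau_int(xs, window):
    n = len(xs)
    mean = sum(xs) / n
    c0 = sum((x - mean) ** 2 for x in xs) / n
    tau = 1.0
    for k in range(1, window + 1):
        ck = sum((xs[i] - mean) * (xs[i + k] - mean) for i in range(n - k)) / n
        tau += 2.0 * ck / c0
    return tau

rng = random.Random(9)
phi, x, xs = 0.8, 0.0, []
for _ in range(50_000):
    x = phi * x + rng.gauss(0.0, 1.0)
    xs.append(x)

tau_hat = tau_int(xs, 50)
print(tau_hat)  # exact value for phi = 0.8: (1 + 0.8)/(1 - 0.8) = 9
```

The effective number of independent samples is N/tau_int, so naive error bars must be inflated by sqrt(tau_int); choosing the summation window well is the delicate part in practice.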

  1. Cost of splitting in Monte Carlo transport

    International Nuclear Information System (INIS)

    Everett, C.J.; Cashwell, E.D.

    1978-03-01

    In a simple transport problem designed to estimate transmission through a plane slab of x free paths by Monte Carlo methods, it is shown that m-splitting (m ≥ 2) does not pay unless exp(x) > m(m + 3)/(m - 1). In such a case, the minimum total cost in terms of machine time is obtained as a function of m, and the optimal value of m is determined
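The abstract's criterion can be checked directly; a small sketch using its notation (m-way splitting, slab of x free paths):

```python
import math

# m-splitting pays only when exp(x) > m*(m + 3)/(m - 1), m >= 2,
# i.e. only for sufficiently thick slabs.

def splitting_pays(m, x):
    return math.exp(x) > m * (m + 3) / (m - 1)

print(splitting_pays(2, 2.0))  # exp(2) ~ 7.39 <= 10: does not pay
print(splitting_pays(2, 3.0))  # exp(3) ~ 20.1 > 10: pays
```

For m = 2 the threshold m(m + 3)/(m − 1) is 10, so splitting only starts to pay beyond roughly ln 10 ≈ 2.3 free paths.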

  2. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Md Nabiul Islam Khan

    In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ΣR²) and not 12N/(π ΣR²), of PCQM2 is 4(8N − 1)/(π ΣR²) and not 28N/(π ΣR²), and of PCQM3 is 4(12N − 1)/(π ΣR²) and not 44N/(π ΣR²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
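The corrected PCQM1 estimator 4(4N − 1)/(π ΣR²) quoted above can be checked by simulation. For a Poisson (random) pattern of intensity lam, the nearest-plant distance R in one quadrant satisfies P(R > r) = exp(−lam·π·r²/4), which the sketch below samples directly instead of simulating a full plant map (an assumption that sidesteps the edge effects of a finite plot).

```python
import math
import random

# Monte Carlo check that the corrected PCQM1 estimator recovers the
# true density of a random (Poisson) plant pattern.

def quadrant_distance(lam, rng):
    """Sample the nearest-plant distance in one quadrant for intensity lam."""
    return math.sqrt(-4.0 * math.log(1.0 - rng.random()) / (math.pi * lam))

def pcqm1(lam, n_points, rng):
    """Corrected PCQM1: density = 4*(4N - 1) / (pi * sum of R^2)."""
    sum_r2 = sum(quadrant_distance(lam, rng) ** 2
                 for _ in range(4 * n_points))   # 4 quadrants per point
    return 4.0 * (4 * n_points - 1) / (math.pi * sum_r2)

rng = random.Random(2)
reps = [pcqm1(0.05, 50, rng) for _ in range(400)]  # true density 0.05
mean_est = sum(reps) / len(reps)
print(mean_est)  # close to 0.05: the corrected estimator is unbiased
```

The (4N − 1) rather than 4N in the numerator is exactly the inverse-Gamma correction that removes the bias of dividing by a random sum of squared distances.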

  3. Monte Carlo Particle Lists: MCPL

    DEFF Research Database (Denmark)

    Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik

    2017-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular...... simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...

  4. Monte Carlo simulation for the estimation of iron in human whole ...

    Indian Academy of Sciences (India)

    The simulation shows that the obtained results are in good agreement with experimental data, and better than the theoretical XCOM values. The study indicates that MCNP simulation is an excellent tool to estimate the iron concentration in the blood samples. The MCNP code can also be utilized to estimate other trace ...

  5. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  6. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  7. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raul

    2016-01-01

    heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity

  8. Comparison of the GUM and Monte Carlo methods on the flatness uncertainty estimation in coordinate measuring machine

    Directory of Open Access Journals (Sweden)

    Jalid Abdelilah

    2016-01-01

    Full Text Available In the engineering industry, control of manufactured parts is usually done on a coordinate measuring machine (CMM): a sensor mounted at the end of the machine probes a set of points on the surface to be inspected. Data processing is performed subsequently using software, and the result of this measurement process either confirms or rejects the conformity of the part. Measurement uncertainty is a crucial parameter for making the right decisions, and not taking this parameter into account can therefore sometimes lead to aberrant decisions. The determination of the measurement uncertainty on a CMM is a complex task because of the variety of influencing factors. Through this study, we aim to check whether the uncertainty propagation model developed according to the guide to the expression of uncertainty in measurement (GUM) approach is valid; we present here a comparison of the GUM and Monte Carlo methods. This comparison is made to estimate the flatness deviation of a surface belonging to an industrial part and the uncertainty associated with the measurement result.
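The two approaches the abstract compares can be illustrated on a toy measurand (a hypothetical two-input model, not the paper's CMM flatness setup): GUM first-order propagation through sensitivity coefficients versus Monte Carlo propagation of distributions in the spirit of JCGM 101.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy measurand y = f(x1, x2) (a stand-in, not the paper's CMM model).
f = lambda a, b: a * np.cos(b)
x1, u1 = 10.0, 0.05       # input estimate and standard uncertainty
x2, u2 = 0.10, 0.02

# GUM approach: first-order propagation through sensitivity coefficients.
c1 = np.cos(x2)           # df/dx1
c2 = -x1 * np.sin(x2)     # df/dx2
u_gum = np.sqrt((c1 * u1) ** 2 + (c2 * u2) ** 2)

# Monte Carlo approach (JCGM 101): sample the inputs, propagate, summarize.
n = 200_000
y = f(rng.normal(x1, u1, n), rng.normal(x2, u2, n))
u_mc = y.std(ddof=1)

print(f"u_GUM = {u_gum:.4f}, u_MC = {u_mc:.4f}")
```

For a nearly linear model the two uncertainties agree closely; the Monte Carlo route needs no derivatives and also yields the full output distribution, which is why it serves as the reference when checking a GUM model.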

  9. Application of Higher Order Fission Matrix for Real Variance Estimation in McCARD Monte Carlo Eigenvalue Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-05-15

    In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally such as pin power differs considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method, in which the fission matrix (FM) and fission source density (FSD) are the operator and the solution. The FM is useful for estimating a variance and covariance because the FM can be calculated from a few cycles, even inactive cycles. Recently, S. Carney has implemented higher order fission matrix (HOFM) capabilities into the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability based on the Hotelling deflation method was implemented into McCARD and used to predict the behavior of the real and apparent SD ratio. In simple 1D slab problems, Endo's theoretical model predicts the real-to-apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher mode FS solutions from the HOFM yields a much better real-to-apparent SD ratio than that with the analytic solutions. In the near future, the application to a high dominance ratio problem such as the BEAVRS benchmark will be conducted.

  10. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
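The variance-reduction idea behind the paper, sampling around a distance-minimizing "control" and correcting with a likelihood ratio, can be sketched in a static setting. The example below estimates a rare-event probability for a standard normal variable by shifting the sampling mean to the design point; this is only a static analogue of the authors' dynamic Girsanov scheme, not their method.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
a, n = 3.0, 100_000            # failure threshold; P_f = P(X > a), X ~ N(0,1)

# Direct Monte Carlo: very few samples land in the failure region.
p_direct = np.mean(rng.normal(0.0, 1.0, n) > a)

# Importance sampling: shift the sampling mean to the "design point" a
# (a static analogue of a distance-minimizing control) and reweight each
# sample by the likelihood ratio phi(z) / phi(z - a).
z = rng.normal(a, 1.0, n)
w = np.exp(-a * z + 0.5 * a * a)
p_is = np.mean((z > a) * w)

p_exact = 0.5 * (1.0 - erf(a / sqrt(2.0)))
print(p_direct, p_is, p_exact)
```

With the shifted density roughly half the samples hit the failure region, so the importance-sampling estimate attains a relative error far below that of direct Monte Carlo at the same sample size.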

  11. Application of the Monte Carlo method to estimate doses due to neutron activation of different materials in a nuclear reactor

    Science.gov (United States)

    Ródenas, José

    2017-11-01

    All materials exposed to a neutron flux become activated, independently of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel of the plant when control rods are withdrawn from the reactor and placed into this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are introduced as input into another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only in nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, analysis of materials in the decommissioning of nuclear power plants, etc.

  12. Estimation of computed tomography dose index in cone beam computed tomography: MOSFET measurements and Monte Carlo simulations.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald

    2010-05-01

    To address the lack of an accurate dose estimation method in cone beam computed tomography (CBCT), we performed point dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in the polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, a MC model for Varian CBCT has been established, and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
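The weighted CTDI reported above is conventionally combined from one center and several peripheral phantom point doses as CTDIw = (1/3)·CTDI_center + (2/3)·CTDI_periphery; a minimal sketch with illustrative numbers (not the paper's measured values):

```python
def ctdi_w(center_dose, peripheral_doses):
    """Weighted CTDI: one third center plus two thirds mean periphery."""
    periphery = sum(peripheral_doses) / len(peripheral_doses)
    return center_dose / 3.0 + 2.0 * periphery / 3.0

# Illustrative head-phantom point doses in cGy (not the paper's values),
# with peripheral holes at the 3, 6, 9 and 12 o'clock positions.
result = ctdi_w(7.5, [8.6, 8.8, 8.7, 8.9])
print(f"CTDIw = {result:.2f} cGy")
```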

  13. Cross talk in the Lambert-Beer calculation for near-infrared wavelengths estimated by Monte Carlo simulations.

    Science.gov (United States)

    Uludag, K; Kohl, M; Steinbrink, J; Obrig, H; Villringer, A

    2002-01-01

    Using the modified Lambert-Beer law to analyze attenuation changes measured noninvasively during functional activation of the brain might result in an insufficient separation of chromophore changes ("cross talk") due to the wavelength dependence of the partial path length of photons in the activated volume of the head. The partial path length was estimated by performing Monte Carlo simulations on layered head models. When assuming cortical activation (e.g., at a depth of 8-12 mm), we find negligible cross talk when considering changes in oxygenated and deoxygenated hemoglobin. But when changes in the redox state of cytochrome-c-oxidase are additionally taken into account, the analysis results in significant artifacts. An analysis developed for changes in mean time of flight, instead of changes in attenuation, reduces the cross talk for the layers of cortical activation. These results were validated for different oxygen saturations, wavelength combinations and scattering coefficients. For the analysis of changes in oxygenated and deoxygenated hemoglobin only, low cross talk was also found when the activated volume was assumed to be a 4-mm-diam sphere.

  14. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  15. Bias and efficiency loss in regression estimates due to duplicated observations: a Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Francesco Sarracino

    2017-04-01

    Full Text Available Recent studies have documented that survey data contain duplicate records. We assess how duplicate records affect regression estimates, and we evaluate the effectiveness of solutions for dealing with them. Results show that the chances of obtaining unbiased estimates when data contain 40 doublets (about 5% of the sample) range between 3.5% and 11.5%, depending on the distribution of the duplicates. If 7 quintuplets are present in the data (2% of the sample), then the probability of obtaining biased estimates ranges between 11% and 20%. Weighting the duplicate records by the inverse of their multiplicity, or dropping the superfluous duplicates, outperforms the other solutions in all considered scenarios. Our results illustrate the risk of using data in the presence of duplicate records and call for further research on strategies for analyzing affected data.
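The weighting fix recommended in the abstract is easy to verify in a small simulation: weighting each record by the inverse of its multiplicity makes the regression on contaminated data coincide with the regression on the deduplicated data (a minimal sketch on a synthetic data set, not the paper's Monte Carlo design):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)          # true slope = 2

# Contaminate the sample: duplicate 10 randomly chosen records once each.
idx = rng.choice(n, 10, replace=False)
xd, yd = np.r_[x, x[idx]], np.r_[y, y[idx]]

def wls_slope(xv, yv, w):
    """Weighted least-squares slope for a one-regressor model."""
    xm, ym = np.average(xv, weights=w), np.average(yv, weights=w)
    return np.sum(w * (xv - xm) * (yv - ym)) / np.sum(w * (xv - xm) ** 2)

# Weight each record by the inverse of its multiplicity (1/2 for doublets).
w = np.ones(len(xd))
w[idx], w[n:] = 0.5, 0.5

b_naive = wls_slope(xd, yd, np.ones(len(xd)))
b_weighted = wls_slope(xd, yd, w)
b_clean = wls_slope(x, y, np.ones(n))
print(b_naive, b_weighted, b_clean)   # b_weighted coincides with b_clean
```

Each doublet carries total weight 2 × 1/2 = 1, so the weighted fit is algebraically identical to the fit on the clean sample; in the paper's Monte Carlo study this is what makes the weighting strategy outperform the alternatives.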

  16. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

    Full Text Available G.D. Kartashov has developed a technique for determining the scaling functions that relate rapid testing results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests (researches) is as follows: n groups of products with m elements each start being tested in normal mode and, after a failure of one of the products in a group, the remaining products are tested in accelerated mode. In addition to tests in alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of products, identical for any lot, is determined using such testing results of products from the same lot. A drawback of this technique is that tests in alternating mode have to be conducted until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered. It allows us to determine scaling functions using right-censored data, thus giving the opportunity to stop testing before all products have failed. In this work, a statistical modeling of the acceleration factor estimation through Renyi statistics minimization is implemented by the Monte Carlo method. Results of the modeling show that the acceleration factor estimate obtained through Renyi statistics minimization is feasible for rather large n. But for small sample volumes a systematic bias of the acceleration factor estimate, which decreases with growing n, is observed for both distributions (exponential and Weibull). Therefore the paper also presents calculated correction factors for the cases of the exponential and Weibull distributions.

  17. Estimation of the effects of a lead vest on dose reduction for radiation workers using Monte Carlo calculations

    International Nuclear Information System (INIS)

    Young-khi, Lim; Byoung-il, Lee; Jeong-in, Kim

    2008-01-01

    Full text: In the field of medical diagnosis or treatment using radiation, lead vests or aprons are widely used to protect patients or workers from unwanted irradiation. Also, in nuclear power plants, it is recommended that workers wear a lead vest to reduce the dose when working in high radiation areas. Generally, personal dosimeters are used to estimate the doses of workers, but these cannot give absolute values, so measured values should be corrected with conversion factors relating them to reference conditions. Many attempts to estimate the doses of workers with lead shielding using two or more dosimeters at different locations have been made, but these had limitations. In this study, the personal dose with and without a lead vest, and the vest's effectiveness, were evaluated by Monte Carlo methods. A lead vest which had been used at several nuclear sites was modelled with the MIRD-V and a typical Korean voxel phantom using the MCNP-5 transport code. Organ doses were calculated in AP, PA, RLAT and LLAT irradiation geometries for several parallel photon beams. Also, irradiation experiments were carried out using a real typical Korean phantom with the lead vest, and the results were compared with those calculated by the simulations. In most cases, the lead vest decreases the organ doses by about 30%. For low energies, the lead vest is very effective in reducing the dose, but it is not so effective for high energy photon shielding. For the thyroid, on the contrary, the dose from high energy photons increased by 5%. This study may be applied to the better design of personal shielding and dose estimation procedures for practical use. (author)

  18. Status of Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers; it can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner

  19. Monte Carlo surface flux tallies

    International Nuclear Information System (INIS)

    Favorite, Jeffrey A.

    2010-01-01

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
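The grazing-angle problem and the standard cutoff substitution can be demonstrated on an isotropic angular flux, for which the naive 1/|mu| score has unbounded variance while the half-cutoff substitute remains unbiased (a sketch of the general idea, not the paper's test problem):

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu_c = 1_000_000, 0.1       # histories and cosine cutoff

# Isotropic angular flux: surface-crossing cosines have density p(mu) = 2*mu
# on (0, 1], so the flux tally E[1/mu] equals 2 for a unit crossing current.
mu = np.sqrt(rng.random(n))

# Naive tally: score 1/mu; grazing crossings (mu -> 0) blow up the variance.
naive = 1.0 / mu

# Standard practice: crossings with mu below the cutoff score 1/(mu_c/2),
# i.e. the cosine is replaced by half the cutoff value.
cutoff = np.where(mu < mu_c, 2.0 / mu_c, 1.0 / mu)

print(naive.mean(), cutoff.mean())       # both estimate 2
print(cutoff.std() < naive.std())        # cutoff tally has smaller variance
```

For this symmetric, linear-in-mu crossing density the half-cutoff score reproduces the true grazing-band contribution exactly, which is the theory the abstract revisits; the paper's point is that common tally configurations violate those assumptions, making a different substitute (e.g. two thirds of the cutoff) more appropriate.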

  20. Monte Carlo comparison of four normality tests using different entropy estimates

    Czech Academy of Sciences Publication Activity Database

    Esteban, M. D.; Castellanos, M. E.; Morales, D.; Vajda, Igor

    2001-01-01

    Roč. 30, č. 4 (2001), s. 761-785 ISSN 0361-0918 R&D Projects: GA ČR GA102/99/1137 Institutional research plan: CEZ:AV0Z1075907 Keywords : test of normality * entropy test and entropy estimator * table of critical values Subject RIV: BD - Theory of Information Impact factor: 0.153, year: 2001

  1. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  2. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  3. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI, due to the computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.

  4. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model

    DEFF Research Database (Denmark)

    Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu

    2012-01-01

    Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate...

  5. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  6. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  7. Estimation of staff doses in complex radiological examinations using a Monte Carlo computer code

    International Nuclear Information System (INIS)

    Vanhavere, F.

    2007-01-01

    The protection of medical personnel in interventional radiology is an important issue of radiological protection. The irradiation of the worker is largely non-uniform, and a large part of his body is shielded by a lead apron. The estimation of effective dose (E) under these conditions is difficult, and several approaches are used to estimate effective dose when such a protective apron is involved. This study presents a summary from an extensive series of simulations to determine the scatter-dose distribution around the patient and the staff effective dose from personal dosimeter readings. The influence of different parameters (like beam energy and size, patient size, irradiated region, worker position and orientation) on the staff doses has been determined. Published algorithms that combine readings of an unshielded and a shielded dosimeter to estimate effective dose have been applied, and a new algorithm that gives more accurate dose estimates for a wide range of situations was proposed. A computational approach was used to determine the dose distribution in the worker's body. The radiation transport and energy deposition were simulated using the MCNP4B code. The human bodies of the patient and radiologist were generated with the Body Builder anthropomorphic model-generating tool. The radiologist is protected with a lead apron (0.5 mm lead equivalent in the front and 0.25 mm lead equivalent in the back and sides) and a thyroid collar (0.35 mm lead equivalent). The lower arms of the worker were folded to simulate the arm position during clinical examinations. This realistic situation of the folded arms affects the effective dose to the worker. Depending on the worker position and orientation (and of course the beam energy), the difference can go up to 25 percent. A total of 12 Hp(10) dosimeters were positioned above and under the lead apron at the neck, chest and waist levels. 
Extra dosimeters for the skin dose were positioned at the forehead, the forearms and the front surface of

  8. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    Science.gov (United States)

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
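The Metropolis-type sampling that underlies such Bayesian parameter estimation can be sketched on a deliberately simple one-parameter model (synthetic exponential-decay data, not ASM1 or the paper's hierarchical framework):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-parameter model (not ASM1): y(t) = exp(-k*t) + noise.
t = np.linspace(0.0, 5.0, 20)
k_true, sigma = 0.7, 0.05
y = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    """Log-posterior with a flat prior on k > 0 and Gaussian noise."""
    if k <= 0.0:
        return -np.inf
    return -0.5 * np.sum((y - np.exp(-k * t)) ** 2) / sigma ** 2

# Random-walk Metropolis: propose, then accept with probability min(1, ratio).
chain, k, lp = [], 1.0, log_post(1.0)
for _ in range(20_000):
    k_prop = k + rng.normal(0.0, 0.1)
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

post = np.array(chain[5_000:])     # drop burn-in
print(post.mean(), post.std())     # posterior mean and SD for k
```

The retained chain approximates the posterior of k, so its spread directly quantifies parameter uncertainty; the paper applies the same principle jointly to the many stoichiometric and kinetic ASM parameters, which is also what yields the posterior correlations it reports.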

  9. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
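The zero-variance principle stated above, sample proportional to the true probability times the expected score and weight accordingly, can be checked on a discrete toy tally where the importance function is known exactly:

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete tally: source x ~ p(x), score f(x); the zero-variance scheme
# samples x proportional to p(x) * f(x) (true probability times expected
# score) and weights by p/q, so every history scores exactly E[f].
x = np.arange(1, 7)
p = np.full(6, 1.0 / 6.0)            # true source probabilities
f = x.astype(float)                  # expected score ("importance") of x

mean_true = np.sum(p * f)            # 3.5
q = p * f / mean_true                # importance-weighted sampling pdf

i = rng.choice(6, size=10_000, p=q)
scores = f[i] * p[i] / q[i]          # each history scores mean_true

print(scores.mean(), scores.std())   # zero variance (up to rounding)
```

Because the importance function differs per tally, this q is zero-variance only for this one score, which is exactly the tension between correlated tallies that the paper addresses.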

  10. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients with respect to the concerned parameter, the polynomial estimator of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring and the method overcomes the disadvantages of the traditional one. (authors)
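The final step of such a search, solving a polynomial estimator of k_eff for the critical parameter value from one run's value and derivatives, can be sketched as follows (the Taylor coefficients below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical Taylor data from a single criticality run (illustrative
# numbers only): k_eff and its derivatives with respect to a control
# parameter c (e.g. boron concentration, ppm) evaluated at c0.
c0, k0 = 600.0, 1.02000
dk, d2k = -8.0e-5, 1.0e-8            # dk/dc and d2k/dc2

# Second-order polynomial estimator k(c0+dc) ~ k0 + dk*dc + 0.5*d2k*dc^2;
# solve k = 1 and keep the physically relevant (smallest) root.
roots = np.roots([0.5 * d2k, dk, k0 - 1.0])
dc = float(min(roots, key=abs).real)
print(f"estimated critical value: c = {c0 + dc:.1f} ppm")
```

A first-order estimate would give dc = (1 - k0)/dk = 250 ppm here; the curvature term shifts the root to about 254 ppm, illustrating why the polynomial (rather than purely linear) estimator matters over larger parameter excursions.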

  11. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification within the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) covers the statistical uncertainty of Monte Carlo estimates, and Section (4) briefly describes the importance of variance reduction techniques when sampling particles such as photons or neutrons in the process of radiation transport
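The statistical uncertainty of a Monte Carlo estimate is, for a simple tally, just the standard error of the sample mean; a minimal sketch on a one-dimensional integral:

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo estimate of I = integral_0^1 sin(pi*x) dx = 2/pi, together
# with its one-sigma statistical uncertainty (standard error of the mean).
n = 100_000
samples = np.sin(np.pi * rng.random(n))
estimate = samples.mean()
std_err = samples.std(ddof=1) / np.sqrt(n)
print(f"{estimate:.4f} +/- {std_err:.4f} (exact {2 / np.pi:.4f})")
```

The 1/sqrt(n) decay of the standard error is what motivates the variance reduction techniques of Section (4): reducing the per-sample variance is usually far cheaper than quadrupling the number of histories.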

  12. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
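A scalar Kalman filter with a constant-state model reduces, in the absence of process noise, to an optimally weighted running average of the noisy per-cycle k-eff realizations (the data below are synthetic, not the KENO calculation of the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)

# Noisy per-cycle k_eff realizations around a true value (synthetic data),
# smoothed by a scalar Kalman filter with the constant-state model
# x_k = x_{k-1}, z_k = x_k + v_k, v_k ~ N(0, r).
k_true, r = 0.9980, 1e-5          # true k_eff, measurement variance
z = k_true + rng.normal(0.0, np.sqrt(r), 100)

x, p = 1.0, 1.0                   # initial estimate and its variance
for zk in z:
    gain = p / (p + r)            # Kalman gain (no process noise)
    x = x + gain * (zk - x)       # update estimate toward the measurement
    p = (1.0 - gain) * p          # shrink the estimate variance

print(f"filtered k_eff = {x:.4f} after {len(z)} cycles")
```

The filter's estimate variance p shrinks roughly as r/k after k cycles, which is the sense in which filtering can reduce the number of iterations needed to reach a target precision.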

  13. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system that includes radiation streaming. The Monte Carlo coupling technique was developed to settle such shielding problems accurately. However, the variance of the Monte Carlo results obtained using the coupling technique, for detectors located outside the radiation streaming, was still not small enough. To produce more accurate results for the detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable 'Prism Scattering' technique is proposed in this study. (author)

  14. Systematic evaluation of a time-domain Monte Carlo fitting routine to estimate the adult brain optical properties

    Science.gov (United States)

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2013-03-01

    Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to be of limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte Carlo-simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage of being adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performance of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only moderately improved the results over the homogeneous one. On

  15. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    Science.gov (United States)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and, therefore, spike timing methodologies and estimation of appropriate tuning curves need to be undertaken for better EMG decoding in motor BMIs.
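The NMSE figure of merit quoted above can be computed as the mean squared error normalized by the variance of the reference signal, so 0 is a perfect reconstruction and 1 is no better than predicting the mean. This helper illustrates one common definition, which may differ in detail from the paper's; the sample values are made up.

```python
def nmse(reference, predicted):
    """Normalized mean squared error: MSE divided by the variance
    of the reference signal."""
    n = len(reference)
    mean_ref = sum(reference) / n
    mse = sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n
    var_ref = sum((r - mean_ref) ** 2 for r in reference) / n
    return mse / var_ref

# a decoder that tracks the envelope closely scores near 0
emg = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8]
good = [0.12, 0.48, 0.88, 0.42, 0.22, 0.78]
print(nmse(emg, good))
```

Predicting the constant mean of the reference yields NMSE exactly 1 under this definition, which is the natural "no information" baseline when comparing decoders.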

  16. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit...... of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models...... We propose an alternative strategy to determine the value of the cutoff threshold based on the appropriate coverage of the resulting uncertainty bounds. We demonstrate the superiority of this revised GLUE method with three different conceptual watershed models of increasing complexity, using both synthetic......
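The GLUE cutoff idea can be sketched as follows: run Monte Carlo parameter samples through the model, keep the "behavioral" runs whose likelihood exceeds the threshold, and form prediction bounds from them. This is a toy illustration, with unweighted quantiles for brevity (GLUE proper weights by likelihood); the model, parameter range, and likelihood measure are all placeholders.

```python
import random

def glue_bounds(predictions, likelihoods, cutoff, lower=0.05, upper=0.95):
    """Keep behavioral simulations (likelihood >= cutoff) and return
    pointwise prediction bounds across them."""
    behavioral = [p for p, l in zip(predictions, likelihoods) if l >= cutoff]
    lo, hi = [], []
    for t in range(len(behavioral[0])):
        col = sorted(p[t] for p in behavioral)
        lo.append(col[int(lower * (len(col) - 1))])
        hi.append(col[int(upper * (len(col) - 1))])
    return lo, hi

# toy "watershed model": the output curve is just a scaled baseline
rng = random.Random(11)
baseline = [1.0, 2.0, 4.0, 3.0, 1.5]
params = [rng.uniform(0.5, 1.5) for _ in range(1000)]
preds = [[a * b for b in baseline] for a in params]
# toy likelihood measure: higher when the scale parameter is near 1
liks = [1.0 - abs(a - 1.0) for a in params]

lo, hi = glue_bounds(preds, liks, cutoff=0.8)
print(lo, hi)
```

Raising the cutoff narrows the behavioral set and hence the bounds; the strategy discussed in the record chooses the cutoff so the bounds attain an appropriate coverage rather than fixing it a priori.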

  17. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluation of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using both the new approach and the previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude.
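The idea of integrating a scattering pdf over limiting angles can be illustrated in the simplest setting: for a linearly anisotropic pdf p(mu) = (1 + a*mu)/2 in mu = cos(theta), the probability of scattering into a cone of half-angle theta0 about the detector axis has the closed form (1 - mu0)/2 + a*(1 - mu0^2)/4 with mu0 = cos(theta0), and a simple quadrature reproduces it. The geometry and pdf here are illustrative stand-ins, not the authors' formulation.

```python
import math

def prob_into_cone(theta0, a=0.0, n=100000):
    """Probability of scattering into a cone of half-angle theta0 about the
    detector axis, for pdf p(mu) = (1 + a*mu)/2, integrated over
    mu = cos(theta) in [cos(theta0), 1] by the midpoint rule."""
    mu0 = math.cos(theta0)
    h = (1.0 - mu0) / n
    total = 0.0
    for i in range(n):
        mu = mu0 + (i + 0.5) * h
        total += 0.5 * (1.0 + a * mu) * h
    return total

def prob_exact(theta0, a=0.0):
    mu0 = math.cos(theta0)
    return (1.0 - mu0) / 2.0 + a * (1.0 - mu0 ** 2) / 4.0

theta0 = math.radians(30.0)
print(prob_into_cone(theta0, a=0.5), prob_exact(theta0, a=0.5))
```

Setting a = 0 recovers isotropic scattering in the lab system, for which the cone probability reduces to (1 - cos(theta0))/2.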

  18. FOODCHAIN: a Monte Carlo model to estimate individual exposure to airborne pollutants via the foodchain pathway

    International Nuclear Information System (INIS)

    Dixon, E.; Holton, G.A.

    1984-01-01

    Ingestion of contaminated food due to the airborne release of radionuclides or chemical pollutants is a particularly difficult human exposure pathway to quantify. There are a number of important physical and biological processes to consider, such as atmospheric deposition and plant uptake. These processes are approximated by techniques encoded in the computer program TEREX. Once estimates of pollutant concentrations are made, the problem can be reduced to computing exposure from ingestion of the food. Some assessments do not account for where the contaminated food is eaten, while others limit consumption to meat and vegetables produced within the affected area. While those approaches lead to an upper bound of exposure, a more realistic assumption is that if locally produced food is not sufficient to meet the dietary needs of the local populace, then uncontaminated food will be imported. This is the approach taken by the computer model FOODCHAIN. Exposures via ingestion of six basic types of food are modeled: beef, milk, grains, leafy vegetables, exposed produce (edible parts are exposed to atmospheric deposition), and protected produce (edible parts are protected from atmospheric deposition). Intake requirements for these six foods are based on a standard diet. Using TEREX-produced site-specific crop production values and food contamination values, FOODCHAIN randomly samples pollutant concentrations in each of the six foodstuffs in an iterative manner. Consumption of a particular food is weighted by a factor proportional to the total production of that food within the area studied. The exposures due to consumption of each of the six foodstuffs are summed to produce the total exposure for each randomly calculated diet.
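The random diet sampling described above can be sketched as follows: for each Monte Carlo trial, a contamination level is drawn for each of the six foods, local consumption is weighted by the fraction of the diet the affected area can supply, and the per-food exposures are summed. All intake rates, concentrations, and weights here are made-up placeholders, not TEREX/FOODCHAIN values.

```python
import random

FOODS = ["beef", "milk", "grains", "leafy_veg",
         "exposed_produce", "protected_produce"]

# hypothetical annual intake (kg/yr) from a standard diet
INTAKE = {"beef": 40, "milk": 110, "grains": 70, "leafy_veg": 20,
          "exposed_produce": 30, "protected_produce": 40}

def sample_total_exposure(conc_samples, local_fraction, n=5000, seed=42):
    """Monte Carlo over diets: draw a concentration for each food, weight
    local (contaminated) consumption by the fraction the affected area can
    supply; the remainder is imported, uncontaminated food."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        dose = 0.0
        for food in FOODS:
            conc = rng.choice(conc_samples[food])   # sampled Bq/kg
            dose += INTAKE[food] * local_fraction[food] * conc
        totals.append(dose)
    return totals

conc = {f: [0.5, 1.0, 2.0] for f in FOODS}          # placeholder Bq/kg
frac = {f: 0.5 for f in FOODS}                       # half the diet is local
totals = sample_total_exposure(conc, frac)
print(sum(totals) / len(totals))
```

The resulting distribution of totals, rather than a single worst-case number, is what makes the imported-food assumption less conservative than assuming an entirely local diet.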

  19. Monte Carlo method for neutron transport problems

    International Nuclear Information System (INIS)

    Asaoka, Takumi

    1977-01-01

    Some methods for decreasing variances in Monte Carlo neutron transport calculations are presented together with the results of sample calculations. A general purpose neutron transport Monte Carlo code ''MORSE'' was used for the purpose. The first method discussed in this report is the method of statistical estimation. As an example of this method, the application of the coarse-mesh rebalance acceleration method to the criticality calculation of a cylindrical fast reactor is presented. The effective multiplication factor and its standard deviation are presented as a function of the number of histories, and comparisons are made between the coarse-mesh rebalance method and the standard method. Five-group neutron fluxes at the core center are also compared with the result of an S4 calculation. The second method is the method of correlated sampling. This method was applied to the perturbation calculation of control rod worths in a fast critical assembly (FCA-V-3). Two methods of sampling (similar flight paths and identical flight paths) are tested and compared with experimental results. In every case the experimental value lies within the standard deviation of the Monte Carlo calculations. The third method is importance sampling. In this report a biased selection of particle flight directions is discussed. This method was applied to the flux calculation in a spherical fast neutron system surrounded by a 10.16 cm iron reflector. Result-direction biasing, path-length stretching, and no biasing are compared with an S8 calculation. (Aoki, K.)
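The correlated-sampling idea (identical flight paths) can be demonstrated with a toy transmission problem: estimating a small change in a transmission probability with common random numbers gives a far smaller variance on the difference than two independent runs. The attenuation model and cross sections below are stand-ins, not the MORSE calculation.

```python
import math
import random

def transmission_samples(sigma, rng, n):
    """Toy estimator: 1 if a particle's sampled path length through a slab
    of thickness 1 exceeds 1 (it transmits), for total cross section sigma."""
    return [1.0 if -math.log(rng.random()) / sigma > 1.0 else 0.0
            for _ in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n = 20000
# correlated: the same random numbers drive the base and perturbed runs
rng = random.Random(1)
us = [rng.random() for _ in range(n)]
base = [1.0 if -math.log(u) / 2.00 > 1.0 else 0.0 for u in us]
pert = [1.0 if -math.log(u) / 2.10 > 1.0 else 0.0 for u in us]
diff_corr = [p - b for p, b in zip(pert, base)]

# independent: fresh random numbers for the perturbed run
rng2 = random.Random(2)
pert_ind = transmission_samples(2.10, rng2, n)
diff_ind = [p - b for p, b in zip(pert_ind, base)]

print(variance(diff_corr), variance(diff_ind))
```

Because each history responds to the perturbation coherently, the variance of the difference estimator is dominated by the (small) perturbation itself rather than by the (large) statistical noise of each run.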

  20. Quantum Monte Carlo for vibrating molecules

    International Nuclear Information System (INIS)

    Brown, W.R.; Lawrence Berkeley National Lab., CA

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
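Collecting correlated Monte Carlo data into blocks, as done above to stabilize the error estimates for C3, amounts to averaging within blocks and computing the standard error over the block means. A minimal sketch with synthetic autocorrelated (AR(1)) data, which stands in for a sequence of correlated QMC samples:

```python
import random

def block_error(data, nblocks):
    """Standard error of the mean estimated from block averages."""
    m = len(data) // nblocks
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(nblocks)]
    mu = sum(means) / nblocks
    var = sum((x - mu) ** 2 for x in means) / (nblocks - 1)
    return (var / nblocks) ** 0.5

def naive_error(data):
    """Standard error ignoring serial correlation (underestimates it)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / (n - 1)
    return (var / n) ** 0.5

# synthetic AR(1) series: strongly autocorrelated, like successive MC samples
rng = random.Random(3)
x, data = 0.0, []
for _ in range(20000):
    x = 0.95 * x + rng.gauss(0.0, 1.0)
    data.append(x)

print(naive_error(data), block_error(data, 100))
```

Once the block length exceeds the correlation time, the block means are nearly independent, so the blocked error bar is honest while the naive one is several times too small.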

  1. Monte Carlo eigenfunction strategies and uncertainties

    International Nuclear Information System (INIS)

    Gast, R.C.; Candelore, N.R.

    1974-01-01

    Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the ''straight'' analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
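In the spirit of the lag-one correction mentioned above, the naive variance of the mean of correlated iteration estimates can be inflated by the factor (1 + rho1)/(1 - rho1), where rho1 is the lag-one serial correlation coefficient. This is a generic sketch of that idea, not MacMillan's exact formulation.

```python
def corrected_std_error(values):
    """Standard error of the mean inflated for lag-one serial correlation."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / (n - 1)
    cov1 = sum((values[i] - mu) * (values[i + 1] - mu)
               for i in range(n - 1)) / (n - 1)
    rho1 = cov1 / var
    inflation = (1.0 + rho1) / (1.0 - rho1)
    return (var / n * inflation) ** 0.5

# a positively correlated sequence -> larger error bar than the naive one
seq = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0.1] * 20
mu = sum(seq) / len(seq)
naive = (sum((v - mu) ** 2 for v in seq) / (len(seq) - 1) / len(seq)) ** 0.5
print(naive, corrected_std_error(seq))
```

The inflation factor is exact for an AR(1) process in the large-sample limit; for more slowly decaying correlations it is only a first-order correction, which is why blocking is often used as a cross-check.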

  2. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  3. Monte Carlo lattice program KIM

    International Nuclear Information System (INIS)

    Cupini, E.; De Matteis, A.; Simonini, R.

    1980-01-01

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed

  4. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  5. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  6. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...

  7. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

  8. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously, and Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software packages used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  9. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)

  10. Suppression of the initial transient in Monte Carlo criticality simulations

    International Nuclear Information System (INIS)

    Richet, Y.

    2006-12-01

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (making a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimation, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies observed in these tests are selected, allowing improvement of industrial Monte Carlo criticality calculations. (author)
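A stationarity test of the flavor described, inspired by Brownian bridge properties, can be sketched as a maximum-deviation statistic on the centered cumulative sum of the cycle k-eff sequence: under stationarity it behaves like the supremum of a Brownian bridge, while a trend (an unconverged source) inflates it. The 1.36 threshold below is the usual ~5% Kolmogorov value, used purely as an illustration, not the paper's calibrated test.

```python
def bridge_statistic(seq):
    """Max absolute deviation of the centered cumulative sum, scaled so a
    stationary sequence behaves like the sup of a Brownian bridge."""
    n = len(seq)
    mu = sum(seq) / n
    var = sum((v - mu) ** 2 for v in seq) / n
    cum, max_dev = 0.0, 0.0
    for v in seq:
        cum += v - mu
        max_dev = max(max_dev, abs(cum))
    return max_dev / (n * var) ** 0.5

stationary = [1.0, -1.0] * 100        # no drift: statistic stays small
trending = list(range(200))            # strong drift, like an initial transient
print(bridge_statistic(stationary), bridge_statistic(trending))
```

In practice the statistic would be recomputed as early cycles are discarded, stopping once the remaining sequence passes the test; the surviving cycles then define the k-effective estimate.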

  11. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Owing to progress in the development of variance reduction techniques and improvements in computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)

  12. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
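The Monte Carlo propagation described above can be sketched schematically: draw the false-positive and sensitivity probabilities and the precision error from their distributions, then invert the total-probability relation measured ≈ true × sensitivity + false-positive contribution to obtain a distribution of true concentrations. Every distribution and parameter below is a hypothetical placeholder, not a calibrated value from the study.

```python
import math
import random

def true_concentration_samples(measured, n=20000, seed=0):
    """Monte Carlo inversion in the spirit of the Law of Total Probability:
    true = measured * (1 - fp) / sensitivity, with fp, sensitivity and the
    measurement (precision) error sampled from assumed distributions."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        sens = rng.betavariate(90, 10)        # P(signal | marker present)
        fp = rng.betavariate(2, 98)           # false-positive fraction
        noisy = measured * math.exp(rng.gauss(0.0, 0.1))  # precision error
        out.append(max(noisy * (1.0 - fp) / sens, 0.0))
    return out

samples = sorted(true_concentration_samples(100.0))
median = samples[len(samples) // 2]
ci = (samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))])
print(median, ci)
```

As the record notes, the output spread is dominated by the precision error: widening the lognormal standard deviation above 0.1 visibly widens the confidence interval on the true concentration.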

  13. A note on simultaneous Monte Carlo tests

    DEFF Research Database (Denmark)

    Hahn, Ute

    In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis....
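The pointwise-quantile construction can be sketched as follows: from simulated curves under the null hypothesis, take the k-th smallest and k-th largest value at each t as the lower and upper bounding curves, and reject if the observed curve leaves the region anywhere. The choice of k and the Gaussian null model below are illustrative, not the note's calibrated construction.

```python
import random

def acceptance_region(simulated_curves, k):
    """Pointwise envelope: the k-th smallest / k-th largest simulated value
    at each index t bound the acceptance region."""
    lo, hi = [], []
    for t in range(len(simulated_curves[0])):
        column = sorted(curve[t] for curve in simulated_curves)
        lo.append(column[k - 1])
        hi.append(column[-k])
    return lo, hi

def leaves_region(curve, lo, hi):
    """True if the curve exits the acceptance region for some t."""
    return any(x < l or x > h for x, l, h in zip(curve, lo, hi))

rng = random.Random(5)
sims = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(999)]
lo, hi = acceptance_region(sims, k=10)

inside = [0.0] * 50       # a flat curve at the null mean stays inside
outside = [4.0] * 50      # a curve far from the null leaves the region
print(leaves_region(inside, lo, hi), leaves_region(outside, lo, hi))
```

Calibrating k against the fraction of the simulated curves themselves that leave the region is what ties the envelope to an exact rejection level, which is the point of the construction.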

  14. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Czech Academy of Sciences Publication Activity Database

    Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr

    2012-01-01

    Roč. 70, č. 1 (2012), s. 315-323 ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775

  15. Clinical considerations of Monte Carlo for electron radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Faddegon, Bruce; Balogh, Judith; Mackenzie, Robert; Scora, Daryl

    1998-01-01

    Technical requirements for Monte Carlo based electron radiotherapy treatment planning are outlined. The targeted overall accuracy for the estimate of the delivered dose is the less restrictive of 5% in dose or 5 mm in isodose position. A system based on EGS4 and capable of achieving this accuracy is described. Experience gained in system design and commissioning is summarized. The key obstacle to widespread clinical use of Monte Carlo is the lack of a clinically acceptable measurement-based methodology for accurate commissioning.

  16. Algorithms for Monte Carlo calculations with fermions

    International Nuclear Information System (INIS)

    Weingarten, D.

    1985-01-01

    We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13 000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given for the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
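The two growth rates can be compared directly: the ratio of the second algorithm's cost to the first is (N^8/m_q^2) / (N^11/m_q) = 1/(N^3 m_q), so the second algorithm only becomes cheaper once m_q > 1/N^3. A quick check of this crossover (the N and m_q values are hypothetical, chosen only to show both regimes):

```python
def cost_ratio_second_to_first(N, mq):
    """(N**8 / mq**2) / (N**11 / mq) simplifies to 1 / (N**3 * mq):
    a value below 1 means the second algorithm needs fewer operations."""
    return (N ** 8 / mq ** 2) / (N ** 11 / mq)

print(cost_ratio_second_to_first(2, 0.05))   # small lattice: first wins
print(cost_ratio_second_to_first(16, 0.05))  # larger lattice: second wins
```

This is consistent with the abstract's observation that on the small 2^4 lattice the second algorithm was a few times more expensive, while the scaling favors it as the lattice grows at fixed quark mass.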

  17. Reactor perturbation calculations by Monte Carlo methods

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1965-09-01

    Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion, a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations, control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)

  18. Monte Carlo based estimation of organ and effective doses to patients undergoing hysterosalpingography and retrograde urethrography fluoroscopy procedures

    Science.gov (United States)

    Ngaile, J. E.; Msaki, P. K.; Kazema, R. R.

    2018-04-01

    Contrast investigations of hysterosalpingography (HSG) and retrograde urethrography (RUG) fluoroscopy procedures remain the dominant diagnostic tools for the investigation of infertility in females and urethral strictures in males, respectively, owing to the scarcity and high cost of services of alternative diagnostic technologies. In light of the radiological risks associated with contrast based investigations of the genitourinary tract systems, there is a need to assess the magnitude of radiation burden imparted to patients undergoing HSG and RUG fluoroscopy procedures in Tanzania. The air kerma area product (KAP), fluoroscopy time, number of images, organ dose and effective dose to patients undergoing HSG and RUG procedures were obtained from four hospitals. The KAP was measured using a flat transmission ionization chamber, while the organ and effective doses were estimated using the knowledge of the patient characteristics, patient related exposure parameters, geometry of examination, KAP and Monte Carlo calculations (PCXMC). The median values of KAP for the HSG and RUG were 2.2 Gy cm2 and 3.3 Gy cm2, respectively. The median organ doses in the present study for the ovaries, urinary bladder and uterus for the HSG procedures, were 1.0 mGy, 4.0 mGy and 1.6 mGy, respectively, while for urinary bladder and testes of the RUG were 3.4 mGy and 5.9 mGy, respectively. The median values of effective doses for the HSG and RUG procedures were 0.65 mSv and 0.59 mSv, respectively. The median values of effective dose per hospital for the HSG and RUG procedures had a range of 1.6-2.8 mSv and 1.9-5.6 mSv, respectively, while the overall differences between individual effective doses across the four hospitals varied by factors of up to 22.0 and 46.7, respectively for the HSG and RUG procedures. The proposed diagnostic reference levels (DRLs) for the HSG and RUG were for KAP 2.8 Gy cm2 and 3.9 Gy cm2, for fluoroscopy time 0.8 min and 0.9 min, and for number of images 5 and 4

  19. Estimation of tumour dose enhancement due to gold nanoparticles during typical radiation treatments: a preliminary Monte Carlo study

    International Nuclear Information System (INIS)

    Cho, S H

    2005-01-01

A recent mouse study demonstrated that gold nanoparticles could be safely administered and used to enhance the tumour dose during radiation therapy. The use of gold nanoparticles seems more promising than earlier methods because of the high atomic number of gold and because nanoparticles can more easily penetrate the tumour vasculature. However, to date, possible dose enhancement due to the use of gold nanoparticles has not been well quantified, especially for common radiation treatment situations. Therefore, the current preliminary study estimated this dose enhancement by Monte Carlo calculations for several phantom test cases representing radiation treatments with the following modalities: 140 kVp x-rays, 4 and 6 MV photon beams, and 192 Ir gamma rays. The current study considered three levels of gold concentration within the tumour, two of which are based on the aforementioned mouse study, and assumed either no gold or a single gold concentration level outside the tumour. The dose enhancement over the tumour volume considered for the 140 kVp x-ray case can be at least a factor of 2 at an achievable gold concentration of 7 mg Au/g tumour, assuming no gold outside the tumour. The tumour dose enhancement for the cases involving the 4 and 6 MV photon beams based on the same assumption ranged from about 1% to 7%, depending on the amount of gold within the tumour and photon beam quality. For the 192 Ir cases, the dose enhancement within the tumour region ranged from 5% to 31%, depending on radial distance and gold concentration level within the tumour. For the 7 mg Au/g tumour cases, the loading of gold into surrounding normal tissue at 2 mg Au/g resulted in an increase in the normal tissue dose of up to 30%, negligible, and about 2% for the 140 kVp x-rays, 6 MV photon beam, and 192 Ir gamma rays, respectively, while the magnitude of dose enhancement within the tumour was essentially unchanged. (note)

  20. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  1. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16 O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  2. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  3. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  4. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  5. Monte Carlo Codes Invited Session

    International Nuclear Information System (INIS)

    Trama, J.C.; Malvagi, F.; Brown, F.

    2013-01-01

    This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay

  6. Advanced computers and Monte Carlo

    International Nuclear Information System (INIS)

    Jordan, T.L.

    1979-01-01

High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  7. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
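The Hill-sphere "super-sizing" idea can be caricatured in a few lines: sample uncorrelated orbital phases for projectile and target, count how often the separation falls below an inflated capture radius, and rescale afterward. Everything below (the coplanar two-body geometry, the fixed-point Kepler solver, the chosen orbits and radius) is an illustrative assumption, not the authors' actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def kepler_xy(a, e, M):
    """2-D position on an ellipse for mean anomalies M.

    Kepler's equation E - e*sin(E) = M is solved by fixed-point
    iteration, which contracts because e < 1.
    """
    E = M.copy()
    for _ in range(60):
        E = M + e * np.sin(E)
    return np.stack([a * (np.cos(E) - e),
                     a * np.sqrt(1.0 - e * e) * np.sin(E)], axis=-1)

def encounter_fraction(a_p, e_p, a_t, radius, n=200_000):
    """Fraction of random phase pairs with separation below `radius`
    (coplanar toy model: elliptic projectile, circular target)."""
    M_p = rng.uniform(0.0, 2.0 * np.pi, n)
    M_t = rng.uniform(0.0, 2.0 * np.pi, n)
    r_p = kepler_xy(a_p, e_p, M_p)
    r_t = np.stack([a_t * np.cos(M_t), a_t * np.sin(M_t)], axis=-1)
    return float(np.mean(np.linalg.norm(r_p - r_t, axis=-1) < radius))

# Inflating the capture radius (the "Hill sphere" trick) keeps the hit
# rate large enough to estimate; the physical probability is recovered
# by rescaling with the ratio of capture cross sections.
f_inflated = encounter_fraction(a_p=1.3, e_p=0.4, a_t=1.0, radius=0.05)
```

The rescaling step is what makes the inflated-radius estimate usable: a direct simulation with the physical radius would almost never score a hit.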

  8. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes: among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  9. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  10. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  11. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs

  12. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations

  13. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  14. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. Providing a clear overview of variational wave functions, and featuring a detailed presentation of stochastic samplings including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  15. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  16. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  17. Monte Carlo code development in Los Alamos

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.

    1974-01-01

    The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)

  18. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

Hussein, E.M.A. [Department of Mechanical Engineering, University of New Brunswick, Fredericton, N.B. (Canada)]

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  19. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  20. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  1. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error

  2. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  3. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  4. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides a convenient book-keeping and an easy access to generator level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  5. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
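The multilevel idea underlying MLMC can be sketched on a simpler problem than the paper's ABC posterior: estimating a plain expectation under a Euler-discretized geometric Brownian motion. Level l uses 2^l time steps, the coarse and fine paths share the same Brownian increments (the coupling that makes the correction terms small), and a telescoping sum combines many cheap coarse samples with a few expensive corrections. All parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def level_term(l, n, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Mean of Y_l = P_l - P_{l-1} over n coupled Euler paths of GBM,
    where P_l is the terminal value on a grid with 2**l steps."""
    steps = 2 ** l
    dt = T / steps
    dW = rng.normal(0.0, np.sqrt(dt), (n, steps))
    fine = x0 * np.prod(1.0 + mu * dt + sigma * dW, axis=1)
    if l == 0:
        return fine.mean()
    dW_c = dW[:, 0::2] + dW[:, 1::2]          # same noise, coarser grid
    coarse = x0 * np.prod(1.0 + mu * 2 * dt + sigma * dW_c, axis=1)
    return (fine - coarse).mean()

# Telescoping identity: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
# Sample sizes shrink with level because Var[P_l - P_{l-1}] decays with dt_l.
L = 5
estimate = sum(level_term(l, n=max(500, 80_000 // 4 ** l)) for l in range(L + 1))
```

The cost saving comes entirely from the shrinking sample sizes: most of the work is spent on the cheap level-0 paths, while the fine-grid corrections, whose variance is small thanks to the shared increments, need only a handful of samples.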

  6. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of neutron field for Boron Neutron Capture Therapy and optimization of a filter for a beam tube for several purposes. The time necessary for Monte Carlo calculations - the highest boundary for its intensive utilisation - is being over-passed with faster and cheaper computers. (author)

  7. A Monte Carlo experiment to analyze the curse of dimensionality in estimating random coefficients models with a full variance–covariance matrix

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian Angelo

    2012-01-01

of parameters increases is usually known as the “curse of dimensionality” in the simulation methods. We investigate this problem in the case of the random coefficients Logit model. We compare the traditional Maximum Simulated Likelihood (MSL) method with two alternative estimation methods: the Expectation-Maximization (EM) and the Laplace Approximation (HH) methods that do not require simulation. We use Monte Carlo experimentation to investigate systematically the performance of the methods under different circumstances, including different numbers of variables, sample sizes and structures of the variance

  8. Utilizing Monte-Carlo radiation transport and spallation cross sections to estimate nuclide dependent scaling with altitude

    Science.gov (United States)

    Argento, D.; Reedy, R. C.; Stone, J.

    2010-12-01

    Cosmogenic Nuclides (CNs) are a critical new tool for geomorphology, allowing researchers to date Earth surface events and measure process rates [1]. Prior to CNs, many of these events and processes had no absolute method for measurement and relied entirely on relative methods [2]. Continued improvements in CN methods are necessary for expanding analytic capability in geomorphology. In the last two decades, significant progress has been made in refining these methods and reducing analytic uncertainties [1,3]. Calibration data and scaling methods are being developed to provide a self consistent platform for use in interpreting nuclide concentration values into geologic data [4]. However, nuclide dependent scaling has been difficult to address due to analytic uncertainty and sparseness in altitude transects. Artificial target experiments are underway, but these experiments take considerable time for nuclide buildup in lower altitudes. In this study, a Monte Carlo method radiation transport code, MCNPX, is used to model the galactic cosmic-ray radiation impinging on the upper atmosphere and track the resulting secondary particles through a model of the Earth’s atmosphere and lithosphere. To address the issue of nuclide dependent scaling, the neutron flux values determined by the MCNPX simulation are folded in with estimated cross-section values [5,6]. Preliminary calculations indicate that scaling of nuclide production potential in free air seems to be a function of both altitude and nuclide production pathway. At 0 g/cm2 (sea-level) all neutron spallation pathways have attenuation lengths within 1% of 130 g/cm2. However, the differences in attenuation length are exacerbated with increasing altitude. At 530 g/cm2 atmospheric height (~5,500 m), the apparent attenuation lengths for aggregate SiO2(n,x)10Be, aggregate SiO2(n,x)14C and K(n,x)36Cl become 149.5 g/cm2, 151 g/cm2 and 148 g/cm2 respectively. At 700 g/cm2 atmospheric height (~8,400m - close to the highest
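The practical consequence of nuclide-dependent attenuation lengths can be illustrated with the single-exponential scaling model commonly used for cosmogenic production rates, P(d) proportional to exp(-d/Lambda). Treating the abstract's apparent attenuation lengths at 530 g/cm2 as constant over the whole atmospheric column, and taking 1033 g/cm2 for sea level, are simplifying assumptions made only for this sketch:

```python
import math

# Atmospheric depths in g/cm^2 (sea level ~= 1033 g/cm^2 is an assumed
# round value; 530 g/cm^2, ~5,500 m, is the altitude quoted above).
SEA_LEVEL, SITE = 1033.0, 530.0

# Apparent attenuation lengths (g/cm^2) at 530 g/cm^2 from the abstract.
attenuation = {"10Be": 149.5, "14C": 151.0, "36Cl": 148.0}

# Scaling factor from sea level to the site under P(d) ~ exp(-d / L):
# a shorter attenuation length means production falls off faster with
# depth, hence a larger sea-level-to-altitude scaling factor.
scaling = {nuclide: math.exp((SEA_LEVEL - SITE) / L)
           for nuclide, L in attenuation.items()}

# A ~2% spread in attenuation length becomes a ~7% spread in scaling.
spread = scaling["36Cl"] / scaling["14C"]
```

In other words, a scheme that applied one universal attenuation length to every nuclide would mis-scale 36Cl relative to 14C by roughly 7% at this altitude, which is exactly the kind of nuclide dependence the study sets out to quantify.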

  9. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    International Nuclear Information System (INIS)

    Pevey, Ronald E.

    2005-01-01

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL
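The trade-off described above can be made concrete with a toy simulation of the acceptance test k_calc + n*sigma_calc <= USL, where calculated k-effectives scatter around the true value plus a bias. The specific numbers below (bias, sigma values, USL) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def p_false_subcritical(k_true, bias, sigma_bias, sigma_calc,
                        usl=0.95, n_sigma=2.0, n=400_000):
    """Probability that a configuration with true k-eff = k_true passes
    the acceptance test  k_calc + n_sigma * sigma_calc <= USL,
    when calculated k-effectives scatter around k_true + bias."""
    spread = np.hypot(sigma_bias, sigma_calc)     # combined scatter
    k_calc = k_true + bias + rng.normal(0.0, spread, n)
    return float(np.mean(k_calc + n_sigma * sigma_calc <= usl))

# Risk of blessing a truly critical configuration (k_true = 1.0),
# for an assumed -0.01 calculational bias and 0.005 bias uncertainty.
risk_2sigma = p_false_subcritical(1.0, -0.01, 0.005, 0.05, n_sigma=2.0)
risk_3sigma = p_false_subcritical(1.0, -0.01, 0.005, 0.05, n_sigma=3.0)
```

Widening the statistical margin (larger n_sigma) lowers this mislabeling risk but also rejects more genuinely subcritical configurations, which is the production-versus-safety tension the paper analyzes.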

  10. New estimation of secondary particle multiplicity of nuclear interactions in proton therapy using multicollisional plus evaporation Monte Carlo calculations

    International Nuclear Information System (INIS)

Mesa, J.; Rodrigues, T.E.; Garcia-Trapaga, C.E.; Arruda-Neto, J.D.T.; Shtejer, K. (E-mail: jmesa@ibb.unesp.br)

    2007-01-01

    Secondary particles contribute to dose deposition in critical organs outside the irradiated target volume. However, the literature dealing specifically with neutron dose and other secondary particles from proton therapy is limited. This issue is of special relevance for young patients, particularly when life expectancy is long, since the art of cancer treatment is finding the right balance between tumor control and injury to normal tissues. In this work we have obtained spectra and multiplicities for neutrons and other secondary particles emitted in the proton-induced reactions p+¹²C, p+¹⁶O, p+⁴⁰Ca and p+¹⁴N, for proton energies from 100 to 200 MeV. To this end, we have used a sophisticated multicollisional Monte Carlo code (MCMC) for pre-equilibrium emission, followed by de-excitation of the residual nucleus in two ways: evaporation of particles (mainly nucleons, but also composites) and, possibly, fission in the case of heavy residues. The code was developed in our group, with very recent improvements that account for Pauli-blocking effects in a novel and more precise way, a more rigorous energy balance, an energy stopping-time criterion for pre-equilibrium emission, and the inclusion of deuteron, triton and ³He emissions in the evaporation step

  11. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    CERN Document Server

    Mainardi, E; Donahue, R J

    2002-01-01

    The calculations presented compare the performance of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS in evaluating dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder, equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell, and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, and would thus lie outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using ...

  12. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide; at longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of ²²²Rn plus the progeny ²¹⁸Po and ²¹⁴Po, and estimate the ²²²Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every ²²²Rn decay must be followed by two other alpha decays. The total number of decays is "3N", where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, a factor √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of the true COV to the Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min
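
The long-count limit described above (three alpha counts per radon decay) can be checked with a small simulation. The sketch below is an idealized illustration, not the paper's calculation: it assumes every radon decay contributes exactly three recorded alphas, and all parameter values are arbitrary.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler: count uniforms until their product <= e^-lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cov_ratio(lam=100.0, n_samples=20000, seed=1):
    """Ratio of the empirical COV of total alpha counts (3 per radon decay,
    the long-count limit) to the naive Poisson estimate 1/sqrt(total counts)."""
    rng = random.Random(seed)
    totals = [3 * poisson(lam, rng) for _ in range(n_samples)]
    mean_t = sum(totals) / n_samples
    var_t = sum((t - mean_t) ** 2 for t in totals) / (n_samples - 1)
    cov_true = math.sqrt(var_t) / mean_t       # ~ 1/sqrt(lam)
    cov_poisson = 1.0 / math.sqrt(mean_t)      # ~ 1/sqrt(3*lam)
    return cov_true / cov_poisson              # ~ sqrt(3)
```

The ratio comes out close to √3, as the abstract states for the long-count limit.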

  13. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with several variations that have been developed by different researchers for specific applications, mainly neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches are discussed, and the present status of known vectorization efforts is summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)

  14. Monte Carlo work at Argonne National Laboratory

    International Nuclear Information System (INIS)

    Gelbard, E.M.; Prael, R.E.

    1974-01-01

    A simple model of the Monte Carlo process is described and a (nonlinear) recursion relation between fission sources in successive generations is developed. From the linearized form of these recursion relations, it is possible to derive expressions for the mean square coefficients of error modes in the iterates and for correlation coefficients between fluctuations in successive generations. First-order nonlinear terms in the recursion relation are analyzed. From these nonlinear terms an expression for the bias in the eigenvalue estimator is derived, and prescriptions for measuring the bias are formulated. Plans for the development of the VIM code are reviewed, and the proposed treatment of small sample perturbations in VIM is described. 6 references. (U.S.)

  15. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)

    2017-04-15

    General-purpose radiation transport Monte Carlo codes have been used for estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)

  16. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator

  17. Acceleration of Monte Carlo solution by conjugate gradient method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihisa

    2005-01-01

    The conjugate gradient method (CG) was applied to accelerate Monte Carlo solutions of fixed source problems. The equilibrium-model-based formulation makes it possible to use the CG scheme as well as an initial guess to maximize computational performance. The method is applicable to arbitrary geometries provided that the neutron source distribution in each subregion can be regarded as flat. Even if that is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual error is estimated using Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D slab heterogeneous test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by a factor of four in the test problem. Although direct estimation of the efficiency of the method is impossible, mainly because of the strong problem-dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)

  18. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  19. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
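
The Metropolis algorithm mentioned above can be sketched in a few lines. This is a generic illustration (not from the book): it samples the standard normal density with a symmetric random-walk proposal; the step size and sample count are arbitrary choices.

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=0):
    """Metropolis sampling of the standard normal density exp(-x^2 / 2)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        y = x + rng.uniform(-step, step)      # symmetric random-walk proposal
        # accept with probability min(1, pi(y) / pi(x))
        if rng.random() < math.exp(0.5 * (x * x - y * y)):
            x = y                             # accept; otherwise keep x
        samples.append(x)
    return samples
```

After discarding a short burn-in, the sample mean and variance approach 0 and 1, illustrating the central limit behavior the chapter visualizes.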

  20. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  1. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
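
The analog baseline that the paper's variance reduction improves upon can be sketched for the simplest Markov model: a single repairable component with constant failure and repair rates. The sketch below is our own minimal illustration (no forced transitions or failure biasing), and the rates are arbitrary; it is checked against the closed-form unavailability q(t) = lam/(lam+mu) * (1 - exp(-(lam+mu) t)).

```python
import math
import random

def unavailability_mc(lam, mu, t_end, n_hist, seed=0):
    """Analog Monte Carlo: P(component is failed at t_end) for a single
    repairable component with failure rate lam and repair rate mu."""
    rng = random.Random(seed)
    failed_at_end = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            rate = lam if up else mu          # exponential dwell in current state
            t += rng.expovariate(rate)
            if t >= t_end:
                break
            up = not up                       # fail or be repaired
        if not up:
            failed_at_end += 1
    return failed_at_end / n_hist
```

For rare-event parameters (small lam/mu), almost no analog history ends in the failed state, which is exactly the situation where forced transitions and failure biasing pay off.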

  2. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  3. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  4. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  5. Selection of important Monte Carlo histories

    International Nuclear Information System (INIS)

    Egbert, Stephen D.

    1987-01-01

    The 1986 Dosimetry System (DS86) for Japanese A-bomb survivors uses information describing the behavior of individual radiation particles, simulated by Monte Carlo methods, to calculate the transmission of radiation into structures and, thence, into humans. However, there are practical constraints on the number of such particle 'histories' that may be used. First, the number must be sufficiently high to provide adequate statistical precision for any calculated quantity of interest. For integral quantities, such as dose or kerma, statistical precision of approximately 5% (standard deviation) is required to ensure that statistical uncertainties are not a major contributor to the overall uncertainty of the transmitted value. For differential quantities, such as scalar fluence spectra, 10 to 15% standard deviation on individual energy groups is adequate. Second, the number of histories cannot be so large as to require an unacceptably large amount of computer time to process the entire survivor data base. Given that there are approximately 30,000 survivors, each having 13 or 14 organs of interest, the number of histories per organ must be constrained to at most several tens of thousands. Selection and use of the most important Monte Carlo leakage histories from among all those calculated allows the creation of an efficient house and organ radiation transmission system for use at RERF. While attempts have been made during the adjoint Monte Carlo calculation to bias the histories toward an efficient dose estimate, this effort has been far from satisfactory. Many of the adjoint histories on a typical leakage tape either start in an energy group in which there is very little kerma or dose, or leak into an energy group with very little free-field coupling. By knowing the typical free-field fluence and the fluence-to-dose factors with which the leaking histories will be used, one can select histories from a leakage tape that will contribute to dose

  6. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide which estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for the calculation of each history. 1 figure, 1 table
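
Russian roulette of the kind mentioned above trades extra variance for reduced time while keeping the estimator unbiased. The sketch below is a toy survival game of our own devising, not the paper's formalism: survival over each step is forced by carrying a weight factor p, and roulette is played whenever the weight drops below a threshold.

```python
import random

def survival_estimate(p, n_steps, n_hist, w_min=0.05, seed=0):
    """Nonanalog game: force survival, carry weight p per step, and play
    Russian roulette when the weight drops below w_min. The estimator
    remains unbiased for the analog survival probability p**n_steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_hist):
        w, alive = 1.0, True
        for _ in range(n_steps):
            w *= p                       # survival forced, weight adjusted
            if w < w_min:                # Russian roulette
                if rng.random() < 0.5:
                    w *= 2.0             # survivor carries doubled weight
                else:
                    alive = False        # history killed early, saving time
                    break
        if alive:
            total += w
    return total / n_hist
```

Killing low-weight histories early is what saves time; doubling the weight of roulette survivors is what preserves the expectation.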

  7. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
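
A fixed-sample multilevel estimator in the spirit of MLMC can be sketched briefly; the continuation strategy, Bayesian calibration, and tolerance control described above are not implemented here. The example below is a standard illustration under our own assumptions: geometric Brownian motion discretized by Euler steps, with 2**l steps on level l, levels coupled by summing pairs of fine Brownian increments; all parameters are illustrative.

```python
import math
import random

def mlmc_gbm_mean(s0, r, sigma, t_total, n_levels, n_per_level, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion via the telescoping sum over coupled Euler discretizations."""
    rng = random.Random(seed)

    def euler_pair(level):
        # one coupled (fine, coarse) path sample on this level
        nf = 2 ** level
        dt = t_total / nf
        sf, sc, inc = s0, s0, []
        for _ in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf += sf * (r * dt + sigma * dw)
            inc.append(dw)
        if level == 0:
            return sf, 0.0
        for i in range(0, nf, 2):        # coarse path reuses summed increments
            dwc = inc[i] + inc[i + 1]
            sc += sc * (r * 2 * dt + sigma * dwc)
        return sf, sc

    est = 0.0
    for level in range(n_levels + 1):
        acc = 0.0
        for _ in range(n_per_level):
            fine, coarse = euler_pair(level)
            acc += fine - coarse if level > 0 else fine
        est += acc / n_per_level         # telescoping sum of level means
    return est
```

Because the coupled fine/coarse differences have small variance, the correction levels need far fewer effective samples than a single-level estimator at the finest grid; the continuation algorithm in the paper chooses those sample sizes adaptively.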

  8. Monte Carlo simulation applied to alpha spectrometry

    International Nuclear Information System (INIS)

    Baccouche, S.; Gharbi, F.; Trabelsi, A.

    2007-01-01

    Alpha particle spectrometry is a widely-used analytical method, in particular when we deal with pure alpha emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of solid angle of detection is a very useful tool to determine with very good accuracy the detection efficiency.
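
The agreement between simulated and analytic geometrical efficiency mentioned above can be reproduced in a few lines. The sketch below assumes the simplest configuration (an isotropic point source on the axis of a disk detector, for which the solid angle has a closed form); it is a generic illustration, not the GEANT setup of the paper.

```python
import math
import random

def efficiency_mc(d, radius, n, seed=0):
    """Geometric detection efficiency of a coaxial disk detector of given
    radius at distance d from an isotropic point source, by Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = 1.0 - 2.0 * rng.random()      # isotropic: uniform in cos(theta)
        if cos_t <= 0.0:
            continue                           # emitted away from the detector
        # radial offset where the ray crosses the detector plane
        r_plane = d * math.sqrt(1.0 - cos_t * cos_t) / cos_t
        if r_plane <= radius:
            hits += 1
    return hits / n

def efficiency_exact(d, radius):
    """Closed-form solid-angle fraction for the on-axis point source."""
    return 0.5 * (1.0 - d / math.sqrt(d * d + radius * radius))
```

Off-axis or extended sources have no simple closed form, which is where the Monte Carlo estimate of the solid angle becomes genuinely useful.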

  9. Simplified Monte Carlo simulation for Beijing Spectrometer

    International Nuclear Information System (INIS)

    Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei

    1986-01-01

    A Monte Carlo method that represents detector performance by parametrized functions and transforms kinematical variables into ''measured'' values by smearing has been used to program a simulation of the performance of the Beijing Spectrometer (BES), written in FORTRAN and named BESMC. It can be used to investigate the multiplicity, particle types, and four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. It thus provides a means to examine whether the overall design of the BES is reasonable and to decide the physics topics of the BES

  10. Self-learning Monte Carlo (dynamical biasing)

    International Nuclear Information System (INIS)

    Matthes, W.

    1981-01-01

    In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)

  11. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. Monte Carlo methods would also be better for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. In general, McBurn can handle burnup of any geometrical system that can be handled by the underlying Monte Carlo transport code
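
The buildup-and-decay bookkeeping that a depletion module performs between transport solves can be illustrated with a two-member chain. The sketch below is a generic illustration, not McBurn's algorithm: it compares a naive explicit Euler depletion step against the analytic Bateman solution, with invented effective removal rates r = lambda + sigma*phi.

```python
import math

def bateman_two(n1_0, r1, r2, t):
    """Analytic Bateman solution for a two-member chain with effective
    removal rates r (decay plus flux-induced transmutation), r1 != r2."""
    n1 = n1_0 * math.exp(-r1 * t)
    n2 = n1_0 * r1 / (r2 - r1) * (math.exp(-r1 * t) - math.exp(-r2 * t))
    return n1, n2

def euler_two(n1_0, r1, r2, t, steps):
    """Simple explicit depletion step, of the kind taken between
    successive transport solves at fixed reaction rates."""
    dt = t / steps
    n1, n2 = n1_0, 0.0
    for _ in range(steps):
        dn1 = -r1 * n1                 # parent removal
        dn2 = r1 * n1 - r2 * n2        # daughter production and removal
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2
```

Production codes use stiffer-friendly schemes (e.g. matrix exponentials) over full chains, but the structure of the coupling is the same: transport supplies reaction rates, depletion advances the nuclide vector.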

  12. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are discussed in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  13. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H.; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the latter should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
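
The rate-ratio definition above can be sketched in a zero-dimensional toy model of our own construction (real codes track geometry and energy; the probabilities and yield here are invented): each source neutron either causes fission with yield nu, is captured, or leaks, and k_eff is tallied as production over (absorption + leakage).

```python
import random

def keff_mc(p_fission, p_capture, nu, n_source, seed=0):
    """k_eff from one generation in a zero-dimensional toy model:
    production rate over (absorption + leakage) rate."""
    rng = random.Random(seed)
    produced = absorbed = leaked = 0
    for _ in range(n_source):
        xi = rng.random()
        if xi < p_fission:
            produced += nu          # fission: nu new neutrons, parent absorbed
            absorbed += 1
        elif xi < p_fission + p_capture:
            absorbed += 1           # radiative capture
        else:
            leaked += 1             # escapes the system
    return produced / (absorbed + leaked)
```

In this toy model the expected answer is simply nu * p_fission, which makes the estimator easy to verify.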

  14. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  15. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are, however, both simple and straightforward, and can be learned by using a personal computer. Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam, coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
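
The two most basic sampling steps described above (the free path between interactions and the type of interaction) can be sketched directly. The cross sections below are illustrative, in units where length is measured in mean free paths; this is a generic textbook construction, not a particular code's implementation.

```python
import math
import random

def sample_track_events(sigma_s, sigma_a, n, seed=0):
    """Sample n free flights s = -ln(xi)/sigma_t and pick each interaction
    type with probability sigma_x / sigma_t (scatter vs. absorption)."""
    rng = random.Random(seed)
    sigma_t = sigma_s + sigma_a
    total_path, scatters = 0.0, 0
    for _ in range(n):
        xi = 1.0 - rng.random()                  # uniform in (0, 1], avoids log(0)
        total_path += -math.log(xi) / sigma_t    # exponential free flight
        if rng.random() < sigma_s / sigma_t:     # choose the event type
            scatters += 1
    return total_path / n, scatters / n
```

The sampled mean free path converges to 1/sigma_t and the scattering fraction to sigma_s/sigma_t, as the pdf description above implies.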

  16. Simulation of transport equations with Monte Carlo

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-09-01

    The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game

  17. Monte Carlo dose distributions for radiosurgery

    International Nuclear Information System (INIS)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.

    2001-01-01

    The precision of Radiosurgery Treatment planning systems is limited by the approximations of their algorithms and by their dosimetrical input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distribution of both a planning system and Monte Carlo. Relative shifts have been measured and furthermore, Dose-Volume Histograms have been calculated for target and adjacent organs at risk. (orig.)

  18. Estimation of miniature forest parameters, species, tree shape, and distance between canopies by means of Monte-Carlo based radiative transfer model with forestry surface model

    International Nuclear Information System (INIS)

    Ding, Y.; Arai, K.

    2007-01-01

    A method for estimation of forest parameters, species, tree shape, and distance between canopies by means of a Monte-Carlo based radiative transfer model with a forestry surface model is proposed. The model is verified through experiments with a miniature forest model, an array of relatively small trees. Two types of miniature trees, with ellipse-shaped and cone-shaped canopies, are examined in the experiments. The proposed model and the experimental results are found to agree, validating the proposed method. It is also found that estimation of tree shape and trunk distance, as well as distinction between deciduous and coniferous trees, can be done with the proposed model. Furthermore, influences due to multiple reflections between trees and interaction between trees and underlying grass are clarified with the proposed method

  19. A punctual flux estimator and reactions rates optimization in neutral particles transport calculus by the Monte Carlo method; Mise au point d'un estimateur ponctuel du flux et des taux de reactions dans les calculs de transport de particules neutres par la methode de monte carlo

    Energy Technology Data Exchange (ETDEWEB)

    Authier, N

    1998-12-01

    One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine the accessibility of workers in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in material (iron nuclear vessel, medical therapy, electronics in satellite). The flux and reaction rate estimators used in Monte Carlo codes give average values in volumes or on surfaces of the geometrical description of the system. But in certain configurations, punctual estimates of deposited energy and dose are necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation whose variance is unbounded. The central limit theorem cannot be applied, thus no confidence level can easily be calculated. The convergence rate is then very poor. We propose in this study a new estimator of the photon flux at a point. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that our new sampling scheme, specially developed to treat the anisotropy of coherent photon scattering, is necessary for a good and regular behavior of the estimator. These developments, integrated into the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased punctual estimate on media interfaces. (author)
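
    The unbounded variance of the naive point-flux estimator can be reproduced in a few lines. In this toy stand-in for a transport calculation (not the estimator of the record, and ignoring attenuation), collision sites are drawn uniformly inside a unit sphere around the detector and score the next-event kernel 1/(4*pi*r^2); the mean is finite but the 1/r^2 scores from rare near-detector collisions make the true variance infinite:

```python
import math
import random

def point_detector_estimate(n, rng=random.Random(7)):
    """Score the next-event point-flux kernel 1/(4*pi*r^2) for collision
    sites drawn uniformly inside a unit sphere centered on the detector."""
    total = 0.0
    total_sq = 0.0
    for _ in range(n):
        r = rng.random() ** (1.0 / 3.0)   # radius of a uniform point in the sphere
        score = 1.0 / (4.0 * math.pi * r * r)
        total += score
        total_sq += score * score
    mean = total / n
    # Sample variance of the scores: dominated by rare near-detector
    # collisions, so it never settles down as n grows.
    var = total_sq / n - mean * mean
    return mean, var

mean, var = point_detector_estimate(200_000)   # analytic mean is 3/(4*pi)
```

    Because the second moment diverges, confidence intervals based on the sample variance are meaningless here, which is exactly the problem the once-more-collided estimator addresses.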

  1. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time-consuming while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  2. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provide reasonable results for shielding applications. (author)

  3. Optimum biasing of integral equations in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1979-01-01

    In solving integral equations and estimating average values with the Monte Carlo method, biasing functions may be used to reduce the variance of the estimates. A simple derivation was used to prove the existence of a zero-variance collision estimator if a specific biasing function and survival probability are applied. This optimum biasing function is the same as that used for the well-known zero-variance last-event estimator
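
    The zero-variance idea is easiest to see on a plain integral (an illustration, not the collision estimator of the record): if the sampling density is chosen proportional to the integrand, every sample carries the same weight and the variance vanishes. In practice this is unattainable, since constructing the optimal density requires already knowing the answer, but it shows why biasing toward the importance function reduces variance:

```python
import random

def crude_mc(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def zero_variance_mc(n, rng):
    """Importance sampling of I = integral of f(x) = 3x^2 over [0, 1] with
    density p(x) = 3x^2 = f(x)/I: every weight f(x)/p(x) equals I = 1 exactly."""
    total = 0.0
    for _ in range(n):
        x = rng.random() ** (1.0 / 3.0)          # inverse-CDF sample of p(x) = 3x^2
        total += (3.0 * x * x) / (3.0 * x * x)   # weight f/p, identically 1
    return total / n

rng = random.Random(1)
f = lambda x: 3.0 * x * x
crude = crude_mc(f, 10_000, rng)        # fluctuates around 1
exact = zero_variance_mc(10_000, rng)   # exactly 1, with zero variance
```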

  4. Control Variates for Monte Carlo Valuation of American Options

    DEFF Research Database (Denmark)

    Rasmussen, Nicki S.

    2005-01-01

    This paper considers two applications of control variates to the Monte Carlo valuation of American options. The main contribution of the paper lies in the particular choice of a control variate for American or Bermudan options. It is shown that for any martingale process used as a control variate...... technique is used for improving the least-squares Monte Carlo (LSM) approach for determining exercise strategies. The suggestions made allow for more efficient estimation of the continuation value, used in determining the strategy. An additional suggestion is made in order to improve the stability...
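
    As a generic illustration of the control-variate idea (a European call under geometric Brownian motion, not the American/Bermudan control of the paper), one can use the discounted terminal stock price, a martingale whose expectation is known in closed form, to cancel part of the payoff's noise; all parameter values below are illustrative assumptions:

```python
import math
import random

def mc_call_with_control(s0, k, r, sigma, t, n, rng=random.Random(42)):
    """Price a European call by Monte Carlo, using the discounted terminal
    stock price (known mean s0) as a control variate."""
    disc = math.exp(-r * t)
    payoffs, controls = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoffs.append(disc * max(st - k, 0.0))
        controls.append(disc * st)                 # martingale control, E = s0
    my = sum(payoffs) / n
    mx = sum(controls) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(controls, payoffs)) / n
    var = sum((x - mx) ** 2 for x in controls) / n
    b = cov / var                                  # variance-minimizing coefficient
    return my - b * (mx - s0)                      # E[disc * S_T] = s0 exactly

price = mc_call_with_control(100.0, 100.0, 0.05, 0.2, 1.0, 50_000)
```

    Because the payoff and the control are strongly correlated, the corrected estimate has a much smaller standard error than the plain average of the payoffs.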

  5. A study on the shielding element using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Jeong [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Shim, Jae Goo [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of)

    2017-06-15

    In this research, we simulated the elementary star shielding ability using Monte Carlo simulation to apply medical radiation shielding sheet which can replace existing lead. In the selection of elements, mainly elements and metal elements having a large atomic number, which are known to have high shielding performance, recently, various composite materials have improved shielding performance, so that weight reduction, processability, In consideration of activity etc., 21 elements were selected. The simulation tools were utilized Monte Carlo method. As a result of simulating the shielding performance by each element, it was estimated that the shielding ratio is the highest at 98.82% and 98.44% for tungsten and gold.

  6. Aspects of perturbative QCD in Monte Carlo shower models

    International Nuclear Information System (INIS)

    Gottschalk, T.D.

    1986-01-01

    The perturbative QCD content of Monte Carlo models for high energy hadron-hadron scattering is examined. Particular attention is given to the recently developed backwards evolution formalism for initial state parton showers, and the merging of parton shower evolution with hard scattering cross sections. Shower estimates of K-factors are discussed, and a simple scheme is presented for incorporating 2 → QCD cross sections into shower model calculations without double counting. Additional issues in the development of hard scattering Monte Carlo models are summarized. 69 references, 20 figures

  7. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Propagating instantaneous uncertainty estimates into measured statistics allows accurate uncertainty quantification of the time-mean and higher statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as u'u'-bar. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and covariance can be found. Applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the 'true' variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of the systematic errors are large or on the same order of magnitude as the 'true' variance. (paper)
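
    The claim that random errors always elevate the measured variance can be checked with a tiny Monte Carlo experiment (an illustration with assumed Gaussian signal and noise levels, not the paper's PIV data): the measured variance converges to the true signal variance plus the noise variance.

```python
import random

def measured_variance(true_std, noise_std, n, rng=random.Random(0)):
    """Sample variance of a 'measured' signal = true fluctuation plus an
    independent random measurement error; the error adds its own variance."""
    samples = [rng.gauss(0.0, true_std) + rng.gauss(0.0, noise_std)
               for _ in range(n)]
    m = sum(samples) / n
    return sum((s - m) ** 2 for s in samples) / (n - 1)

var_clean = measured_variance(1.0, 0.0, 100_000)   # ~ 1.0
var_noisy = measured_variance(1.0, 0.5, 100_000)   # ~ 1.0 + 0.25
```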

  8. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo Importance Sampling with parametric dependence is proposed. It depends upon obtaining by proper weighting over a single stage the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. Results explain the occurrences of the effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences introduced by Frolov and Chentsov to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de
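
    The spirit of choosing the importance-function parameter by its measured variance can be sketched on a standard tail-probability problem (an illustration, not the record's single-stage weighting scheme): estimate P(Z > 3) for Z ~ N(0, 1) by sampling from a shifted proposal N(mu, 1), then keep the biasing parameter mu with the smallest empirical per-sample variance. Over-biasing (mu too large) visibly inflates the variance again, as the abstract warns.

```python
import math
import random

def shifted_tail_estimate(mu, n, rng):
    """Estimate P(Z > 3), Z ~ N(0,1), sampling x ~ N(mu, 1) and weighting
    by the likelihood ratio phi(x)/phi(x - mu) = exp(mu^2/2 - mu*x)."""
    total = total_sq = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        w = math.exp(0.5 * mu * mu - mu * x) if x > 3.0 else 0.0
        total += w
        total_sq += w * w
    mean = total / n
    return mean, total_sq / n - mean * mean   # estimate, per-sample variance

rng = random.Random(3)
# Scan the biasing parameter and keep the minimum-variance result.
results = {mu: shifted_tail_estimate(mu, 20_000, rng)
           for mu in (0.0, 1.0, 3.0, 6.0)}
best_mu = min(results, key=lambda mu: results[mu][1])   # shift near 3 wins
```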

  9. MORET: Version 4.B. A multigroup Monte Carlo criticality code

    International Nuclear Information System (INIS)

    Jacquet, Olivier; Miss, Joachim; Courtois, Gerard

    2003-01-01

    MORET 4 is a three-dimensional multigroup Monte Carlo code which calculates the effective multiplication factor (keff) of configurations of arbitrary complexity, as well as reaction rates in the different volumes of the geometry and the leakage out of the system. MORET 4 is the Monte Carlo code of the APOLLO2-MORET 4 standard route of CRISTAL, the French criticality package. It is the most commonly used Monte Carlo code for French criticality calculations. During the last four years, the MORET 4 team has developed or improved the following major points: modernization of the geometry, implementation of perturbation algorithms, source distribution convergence, statistical detection of stationarity, unbiased variance estimation and creation of pre-processing and post-processing tools. The purpose of this paper is not only to present the new features of MORET but also to detail clearly the physical models and the mathematical methods used in the code. (author)

  10. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  11. Two proposed convergence criteria for Monte Carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf)
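
    The variance-of-the-variance diagnostic mentioned above can be sketched directly (a common estimate of the relative VOV, in the style popularized by MCNP; the exponential scores below are an assumed stand-in for tally scores): for a well-behaved tally it falls off like 1/N, and a frequently quoted guideline is VOV < 0.1.

```python
import random

def relative_vov(scores):
    """Relative variance of the variance:
    VOV = sum d_i^4 / (sum d_i^2)^2 - 1/N, with d_i = x_i - mean."""
    n = len(scores)
    m = sum(scores) / n
    d2 = sum((x - m) ** 2 for x in scores)
    d4 = sum((x - m) ** 4 for x in scores)
    return d4 / (d2 * d2) - 1.0 / n

rng = random.Random(5)
scores = [rng.expovariate(1.0) for _ in range(40_000)]
vov_small = relative_vov(scores[:1_000])   # fewer histories: larger VOV
vov_large = relative_vov(scores)           # ~ 1/N decay for this tally
```

    A VOV that stalls instead of decreasing like 1/N is a warning that rare, large scores dominate the tally and that N is not yet large enough for the CLT-based confidence interval.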

  12. Profit Forecast Model Using Monte Carlo Simulation in Excel

    Directory of Open Access Journals (Sweden)

    Petru BALOGH

    2014-01-01

    Full Text Available Profit forecast is very important for any company. The purpose of this study is to provide a method to estimate the profit and the probability of obtaining the expected profit. Monte Carlo methods are stochastic techniques, meaning they are based on the use of random numbers and probability statistics to investigate problems. Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities they will occur for any choice of action. Our example of Monte Carlo simulation in Excel will be a simplified profit forecast model. Each step of the analysis will be described in detail. The input data for the case presented: the number of leads per month, the percentage of leads that result in sales, the cost of a single lead, the profit per sale and fixed cost, allow obtaining the profit and the associated probability of achieving it.
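
    The same spreadsheet-style model translates directly into a few lines of code. The sketch below mirrors the article's input list (leads per month, conversion percentage, cost per lead, profit per sale, fixed cost), but every numeric value and distribution is an illustrative assumption, not a figure from the paper:

```python
import random

def profit_forecast(n_trials=100_000, seed=11):
    """Monte Carlo profit forecast: draw each uncertain input, compute the
    profit, and repeat; report the mean profit and P(profit > 0)."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_trials):
        leads = rng.randint(800, 1200)           # leads per month (assumed)
        conversion = rng.uniform(0.03, 0.06)     # share of leads that buy
        cost_per_lead = rng.uniform(4.0, 6.0)
        profit_per_sale = rng.uniform(180.0, 220.0)
        fixed_cost = 3_000.0
        sales = leads * conversion
        profits.append(sales * profit_per_sale
                       - leads * cost_per_lead - fixed_cost)
    mean = sum(profits) / n_trials
    p_positive = sum(p > 0 for p in profits) / n_trials
    return mean, p_positive

mean_profit, prob_profit = profit_forecast()
```

    The full list of simulated profits can equally be summarized by percentiles, which is what the range-of-outcomes view in the article amounts to.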

  13. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine

  14. Monte Carlo determination of heteroepitaxial misfit structures

    DEFF Research Database (Denmark)

    Baker, J.; Lindgård, Per-Anker

    1996-01-01

    We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...

  15. The Monte Carlo applied for calculation dose

    International Nuclear Information System (INIS)

    Peixoto, J.E.

    1988-01-01

    The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of the photon is traced, simulating successive interactions between the photon and the substance that constitutes the human body simulator. The energy deposited per photon in each organ or tissue of the simulator is also calculated. (C.G.C.) [pt

  16. Monte Carlo code for neutron radiography

    International Nuclear Information System (INIS)

    Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej

    2005-01-01

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms

  17. Monte Carlo code for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Milczarek, Jacek J. [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)]. E-mail: jjmilcz@cyf.gov.pl; Trzcinski, Andrzej [Institute for Nuclear Studies, Swierk, 05-400 Otwock (Poland); El-Ghany El Abd, Abd [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland); Nuclear Research Center, PC 13759, Cairo (Egypt); Czachor, Andrzej [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)

    2005-04-21

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms.

  18. Monte Carlo method in neutron activation analysis

    International Nuclear Information System (INIS)

    Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.

    2009-01-01

    Neutron activation detectors are a useful technique for the neutron flux measurements in spallation experiments. The study of the usefulness and the accuracy of this method at similar experiments was performed with the help of Monte Carlo codes MCNPX and FLUKA

  19. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....

  20. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo Experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo Experiment; it also encourages the proper design of Monte Carlo Experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo Experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language

  1. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  2. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying

  3. Monte Carlo studies of ZEPLIN III

    CERN Document Server

    Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J

    2002-01-01

    A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.

  4. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  6. Dynamic bounds coupled with Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.

    2011-01-01

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper

  7. Design and analysis of Monte Carlo experiments

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.

    2012-01-01

    By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to

  8. Monte Carlo simulations in theoretical physics

    International Nuclear Information System (INIS)

    Billoire, A.

    1991-01-01

    After a presentation of the principle of the Monte Carlo method, the method is applied, first to the calculation of critical exponents in the three-dimensional Ising model, and secondly to discrete quantum chromodynamics, with computation times given as a function of computer power. 28 refs., 4 tabs
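
    For the Ising application, the workhorse is Metropolis sampling. A minimal sketch for the two-dimensional model (a standard illustration with assumed parameters, not the record's three-dimensional calculation): spins on a periodic lattice are flipped with probability min(1, exp(-dE/T)), and below the critical temperature T_c ~ 2.269 (J = 1) the lattice stays magnetized.

```python
import math
import random

def metropolis_ising(L=8, T=1.5, sweeps=200, seed=9):
    """Metropolis sampling of the 2D Ising model (J = 1, periodic boundaries);
    returns the magnetization per spin after the requested number of sweeps."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2.0 * spin[i][j] * nn              # energy cost of flipping (i, j)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]
    return sum(sum(row) for row in spin) / (L * L)

m = metropolis_ising()   # T = 1.5 < T_c, so |m| stays close to 1
```

    Critical-exponent studies repeat such runs over a range of temperatures and lattice sizes and extract the exponents by finite-size scaling.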

  9. Monte Carlo method for random surfaces

    International Nuclear Information System (INIS)

    Berg, B.

    1985-01-01

    Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)

  10. Monte Carlo simulation of the microcanonical ensemble

    International Nuclear Information System (INIS)

    Creutz, M.

    1984-01-01

    We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
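
    The random walk on a constant energy surface can be sketched with Creutz's "demon" construction on a 1D Ising chain (an illustrative reduction with assumed parameters): an auxiliary demon with non-negative energy absorbs or supplies the cost of each attempted spin flip, so the total energy of system plus demon is exactly conserved and no random numbers are needed for the accept/reject decision itself.

```python
import random

def demon_ising_1d(n=1000, steps=200_000, seed=2):
    """Microcanonical demon dynamics for a periodic 1D Ising chain (J = 1).
    Returns (initial total energy, final total energy, mean demon energy)."""
    rng = random.Random(seed)
    spin = [1] * n
    demon = 40                                   # initial demon energy
    def flip_cost(i):
        # Energy change of flipping spin i: dE = 2*s_i*(s_{i-1} + s_{i+1})
        return 2 * spin[i] * (spin[i - 1] + spin[(i + 1) % n])
    def total_energy():
        return -sum(spin[i] * spin[(i + 1) % n] for i in range(n))
    e_start = total_energy() + demon
    demon_sum = 0
    for _ in range(steps):
        i = rng.randrange(n)
        dE = flip_cost(i)
        if demon - dE >= 0:                      # demon pays (or pockets) dE
            spin[i] = -spin[i]
            demon -= dE
        demon_sum += demon
    return e_start, total_energy() + demon, demon_sum / steps

e0, e1, demon_avg = demon_ising_1d()             # e0 == e1: energy conserved
```

    The demon's energy histogram follows exp(-E_d/T), so its average doubles as a thermometer for the microcanonical run.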

  11. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
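
    The MLEM iteration itself is compact: x <- x * A^T(y / Ax) / A^T 1, where A is the system (projection) matrix, here computed by Monte Carlo in the paper. A toy sketch on a 2x2 system with a made-up matrix and known activity (purely illustrative, not the paper's phantom):

```python
def mlem(a, y, iterations=200):
    """MLEM iterations x <- x * A^T(y / Ax) / (A^T 1) for a dense system
    matrix a (list of rows) and measured projections y."""
    m, n = len(a), len(a[0])
    x = [1.0] * n                                        # flat initial estimate
    norm = [sum(a[i][j] for i in range(m)) for j in range(n)]   # A^T 1
    for _ in range(iterations):
        proj = [sum(a[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]
        x = [x[j] * sum(a[i][j] * ratio[i] for i in range(m)) / norm[j]
             for j in range(n)]
    return x

# Hypothetical 2x2 system with known activity [2, 3]; y is the exact forward
# projection, so MLEM should recover the activity.
a = [[0.8, 0.2], [0.3, 0.7]]
true_x = [2.0, 3.0]
y = [sum(a[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x = mlem(a, y)
```

    With the Monte Carlo-derived projection matrix in place of the toy `a`, the same update incorporates the mask geometry and scatter that the correlation method ignores.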

  12. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased, the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the ''fixed-source'' case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated (''replicated'') over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
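The eigenvalue structure behind case (b) can be illustrated with a deterministic toy: power iteration on a small hypothetical fission matrix, mimicking how a Monte Carlo code renormalizes its fission bank each generation. The matrix entries below are invented for illustration.

```python
import numpy as np

# Toy 3-region fission matrix F: F[i, j] = expected fission neutrons born in
# region i per fission neutron started in region j (numbers are invented).
F = np.array([[0.50, 0.20, 0.05],
              [0.20, 0.55, 0.20],
              [0.05, 0.20, 0.50]])

s = np.ones(3) / 3                 # initial guess for the fission source
for _ in range(100):
    s = F @ s                      # transport one "generation" of neutrons
    k = s.sum()                    # generation-wise k-eff estimate
    s /= k                         # renormalize, as a Monte Carlo code
                                   # renormalizes its fission bank

# The iteration converges to the dominant eigenpair of F: the source s is
# the eigenvector and k is the eigenvalue.
print(k, np.linalg.eigvals(F).real.max())
```

In a real Monte Carlo eigenvalue calculation the per-generation renormalization of a finite fission bank is exactly what introduces the bias discussed in the abstract.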

  13. Monte Carlo studies of uranium calorimetry

    International Nuclear Information System (INIS)

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references

  14. SPANDY: a Monte Carlo program for gas target scattering geometry

    International Nuclear Information System (INIS)

    Jarmie, N.; Jett, J.H.; Niethammer, A.C.

    1977-02-01

A Monte Carlo computer program is presented that simulates a two-slit gas target scattering geometry. The program is useful in estimating effects due to finite geometry and multiple scattering in the target foil. Details of the program are presented and experience with a specific example is discussed.

  15. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
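A minimal sketch of this kind of estimator, for the end-point variant on a tiny hypothetical four-page graph: each walk continues with probability c, and the frequencies of termination pages estimate the PageRank. The graph and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
c = 0.85                           # damping factor
# Hypothetical 4-page web graph, given as out-link lists.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, m = 4, 20_000                   # m short independent runs per page

counts = np.zeros(n)
for start in range(n):
    for _ in range(m):
        page = start
        while rng.random() < c:    # the walk continues with probability c
            page = rng.choice(links[page])
        counts[page] += 1          # tally where the walk terminated

pr_mc = counts / counts.sum()      # Monte Carlo PageRank estimate

# Reference: classical power iteration on the full transition matrix.
P = np.zeros((n, n))
for i, outs in links.items():
    P[i, outs] = 1.0 / len(outs)
pr = np.ones(n) / n
for _ in range(100):
    pr = (1 - c) / n + c * pr @ P
print(pr_mc, pr)
```

The termination distribution of a walk started uniformly at random and killed with probability 1 - c per step equals the PageRank vector, which is why the two printed vectors agree up to Monte Carlo noise.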

  16. Fitting experimental data by using weighted Monte Carlo events

    International Nuclear Information System (INIS)

    Stojnev, S.

    2003-01-01

A method for fitting experimental data using a modified Monte Carlo (MC) sample is developed. It is intended to help when a single finite MC source has to fit experimental data in a search for parameters of a certain underlying theory. The extraction of the searched parameters, the error estimation and the goodness-of-fit testing are based on the binned maximum likelihood method

  17. Tackling the premature convergence problem in Monte-Carlo localization

    NARCIS (Netherlands)

    Kootstra, Gert; de Boer, Bart

    Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential positions when there is ambiguity present in the environment. Since many indoor environments are highly symmetric, this problem of premature convergence is

  18. Monte-Carlo approach to the generation of adversary paths

    International Nuclear Information System (INIS)

    1977-01-01

This paper considers the definition of a threat as the sequence of events that might lead to adversary success. A nuclear facility is characterized as a weighted, labeled, directed graph, with critical adversary paths. A discrete-event, Monte-Carlo simulation model is used to estimate the probability of the critical paths. The model was tested for hypothetical facilities, with promising results.
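A minimal sketch of this kind of path-probability estimation: each adversary path is treated as a sequence of barriers, and Monte Carlo trials estimate the probability that all barriers on a path are defeated. The paths and barrier-defeat probabilities below are invented, not from the paper.

```python
import random

random.seed(1)

# Hypothetical facility: each adversary path is a sequence of barriers, each
# with an (invented) probability that the adversary defeats it undetected.
paths = {
    "fence->door->vault": [0.9, 0.6, 0.3],
    "roof->hatch->vault": [0.7, 0.5, 0.3],
}

def simulate_success(barriers):
    """One Monte Carlo trial: the adversary must defeat every barrier in turn."""
    return all(random.random() < p for p in barriers)

trials = 100_000
est = {}
for name, barriers in paths.items():
    wins = sum(simulate_success(barriers) for _ in range(trials))
    est[name] = wins / trials
    print(name, est[name])
```

For independent barriers the estimate converges to the product of the per-barrier probabilities; the value of a discrete-event simulation is that it also handles correlated events, detection/response timing, and path interactions that have no closed form.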

  19. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows particle histories to be cut off when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
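Of the variance reduction methods mentioned, Russian roulette is the easiest to sketch: low-weight particles are killed at random in a way that leaves the expected weight, and hence the tally mean, unchanged. The weight thresholds below are illustrative.

```python
import random

random.seed(7)

def roulette(weight, w_min=0.1, w_survive=0.5):
    """Russian roulette: kill low-weight particles without biasing the mean.
    A particle below w_min survives with probability weight / w_survive and
    then carries weight w_survive (thresholds are illustrative)."""
    if weight >= w_min:
        return weight
    if random.random() < weight / w_survive:
        return w_survive
    return 0.0

# Unbiasedness check: the expected weight after roulette equals the input,
# since (weight / w_survive) * w_survive = weight.
n, w_in = 200_000, 0.05
mean_out = sum(roulette(w_in) for _ in range(n)) / n
print(w_in, mean_out)              # both close to 0.05
```

Splitting is the mirror image: a high-weight particle is replaced by several copies with proportionally reduced weights, which likewise preserves the expectation while reducing variance in important regions.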

  20. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows particle histories to be cut off when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  1. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

Highlights: ► Two types of uncertainty methods for k eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k eff ) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality against any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recent growing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
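The sampling-based approach can be sketched with a toy model: an uncertain input parameter is drawn repeatedly, a stand-in "criticality run" is repeated for each draw, and the k eff uncertainty is read off the resulting distribution. The fission matrix and the 2% input uncertainty below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def keff(scale):
    """Toy stand-in for one criticality run: the dominant eigenvalue of a
    small hypothetical fission matrix whose cross sections are scaled by an
    uncertain factor (all numbers invented for illustration)."""
    F = scale * np.array([[0.5, 0.2],
                          [0.2, 0.5]])
    return np.max(np.linalg.eigvals(F).real)

# Sampling-based uncertainty analysis: draw the uncertain input, rerun the
# calculation, and read the k-eff uncertainty off the resulting distribution.
samples = rng.normal(loc=1.0, scale=0.02, size=5000)   # 2% (1-sigma) input
ks = np.array([keff(s) for s in samples])
print(ks.mean(), ks.std())
```

As the abstract notes, this places almost no restriction on the size or type of perturbation, but every sample costs a full (here trivial, in practice expensive) transport calculation.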

  2. Program for shaping neutron microconstants for calculations by means of the Monte-Carlo method on the base of estimated data files (NEDAM)

    International Nuclear Information System (INIS)

    Zakharov, L.N.; Markovskij, D.V.; Frank-Kamenetskij, A.D.; Shatalov, G.E.

    1978-01-01

A program for preparing neutron microconstants for calculations by the Monte Carlo method is described; it is oriented toward a detailed treatment of processes in the fast region. The input consists of files of evaluated data in the UKNDL format. The method combines a group representation of process probabilities and elastic-scattering anisotropy with an individual description of the secondary-neutron spectra of non-elastic processes. The NEDAM program is written in FORTRAN for the BESM-6 computer and has the following characteristics: the evaluated-data file length is 20000 words, the multigroup constant file length is 8000 words, and the MARK array length is 1000 words. The calculation time for a single variant is 1-2 min

  3. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

Since there are no generic ways to parametrize the randomness in pore-scale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost

  4. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

This is a short paper on the prospects of Monte Carlo general software. The content covers the cluster sampling method, the zero variance technique, the self-improved method, and the vectorized Monte Carlo method.

  5. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming

    2009-01-01

    in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method

  6. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and of reducing variance, as well as current applications in Medical Physics, are commented on. (Author)
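The one-dimensional integral with a known answer evaluated by two different Monte Carlo approaches might look like the following sketch, using crude (sample-mean) and hit-or-miss estimators for the integral of x² over [0, 1] (the integrand is an illustrative choice, not the one from the paper):

```python
import random

random.seed(0)
n = 200_000

f = lambda x: x * x                # integrand on [0, 1]; exact integral 1/3

# Approach 1: crude (sample-mean) Monte Carlo.
crude = sum(f(random.random()) for _ in range(n)) / n

# Approach 2: hit-or-miss Monte Carlo over the unit square: count points
# (x, y) with y < f(x); the hit fraction estimates the area under f.
hits = sum(random.random() < f(random.random()) for _ in range(n))
hit_or_miss = hits / n

print(crude, hit_or_miss)          # both near 1/3 = 0.333...
```

Both estimators are unbiased; the crude estimator has the smaller variance here, which is the usual pedagogical point of comparing the two.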

  7. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. 
These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
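The core of the FET, estimating expansion coefficients as sample means during the random walk, can be sketched with global Legendre polynomials and an artificial sampled density (the density, its exact expansion, and the sample count are invented for illustration):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(5)

# Functional expansion tally: estimate the Legendre coefficients of a density
# p(x) on [-1, 1] from sampled "collision sites".  Here p(x) = (1 + x) / 2,
# whose exact expansion is p = 0.5 * P0 + 0.5 * P1.
samples = 2.0 * np.sqrt(rng.random(200_000)) - 1.0   # inverse-CDF sampling of p

coeffs = []
for n in range(4):
    # a_n = (2n + 1)/2 * E[P_n(X)]: each coefficient is a plain sample mean,
    # so it can be tallied during an ordinary Monte Carlo random walk.
    P_n = L.Legendre.basis(n)
    coeffs.append((2 * n + 1) / 2 * P_n(samples).mean())

print(coeffs)    # close to [0.5, 0.5, 0.0, 0.0]
```

In the fission-source application described above, the sampled sites are fission sites, and the fitted low-order expansion replaces the raw fission bank as the source for the next generation.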

  8. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma

    Energy Technology Data Exchange (ETDEWEB)

    Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang [Universitaetsklinikum Koeln (Germany). Klinik und Poliklinik fuer Nuklearmedizin; Pietrzyk, Uwe [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Neurowissenschaften und Medizin (INM-4); Wuppertal Univ. (Germany). Fachbereich C - Physik

    2011-07-01

The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this

  9. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma.

    Science.gov (United States)

    Hammes, Jochen; Pietrzyk, Uwe; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang

    2011-12-01

The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms "Adult Female" (AF, 16 ml thyroid) and "Adult Male" (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml).
In the case of the 1 ml nodules this effect is smaller: 9-11% (AF

  10. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma

    International Nuclear Information System (INIS)

    Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang; Pietrzyk, Uwe; Wuppertal Univ.

    2011-01-01

The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml).
In the case of the 1 ml nodules this effect is smaller: 9

  11. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)

  12. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variances of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we propose an equation that can predict the variance of nuclide number densities after burn-up, and we verify this equation using a large number of Monte Carlo burn-up calculations in which only the initial random numbers are changed. We also examine the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimate the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we quantify the effects of error propagation by comparing statistical errors alone with combined statistical and propagated errors. The results reveal that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t
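The replication experiment described, rerunning the whole burn-up chain with different initial random numbers and inspecting the spread of the final number densities, can be sketched with a toy one-nuclide chain. All rates, step counts and history counts below are invented for illustration.

```python
import numpy as np

def burnup_chain(seed, steps=5, n_hist=1000):
    """Toy burn-up chain: every step estimates a reaction rate by Monte Carlo,
    then depletes the nuclide density with that noisy rate, so the statistical
    error of each tally propagates into all later number densities."""
    rng = np.random.default_rng(seed)
    n = 1.0                                        # relative number density
    for _ in range(steps):
        rate = rng.exponential(0.1, n_hist).mean() # tally: true value is 0.1
        n *= np.exp(-rate)                         # deplete with the noisy rate
    return n

# Replicate the whole calculation with independent random number sequences
# and inspect the spread of the final densities across replicas.
finals = np.array([burnup_chain(seed) for seed in range(200)])
print(finals.mean(), finals.std())                 # mean near exp(-0.5)
```

The spread across replicas captures both the last step's statistical error and everything propagated from earlier steps, which is exactly the quantity a single-run error estimate would understate.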

  13. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically and make it widely available to the medical physics

  14. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    Science.gov (United States)

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.

  15. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
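A minimal Monte Carlo-Euler weak approximation, applied here to a scalar SDE with a known mean rather than the infinite-dimensional HJM setting of the paper, might look like the following sketch; the drift, volatility, and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Weak Monte Carlo-Euler approximation of the scalar SDE
#   dX = mu * X dt + sigma * X dW,  X(0) = 1,
# whose exact mean is E[X(T)] = exp(mu * T).  Parameters are illustrative.
mu, sigma, T = 0.05, 0.2, 1.0
n_paths, n_steps = 200_000, 32
dt = T / n_steps

X = np.ones(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += mu * X * dt + sigma * X * dW              # Euler-Maruyama step

# The gap to exp(mu*T) mixes the O(dt) time-discretization (weak) error with
# the O(1/sqrt(n_paths)) statistical error -- the two contributions that the
# error estimates of the paper separate.
print(X.mean(), np.exp(mu * T))
```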

  16. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  17. No-compromise reptation quantum Monte Carlo

    International Nuclear Information System (INIS)

    Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M

    2007-01-01

    Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)

  18. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
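
    The level-balancing idea can be sketched on a toy problem rather than numerical homogenization. The "solver" `P` below is an illustrative Euler integrator whose cost grows with level, and the per-level sample counts are arbitrary; coupling fine and coarse levels through the same random input is what makes the correction terms cheap to estimate:

```python
import math
import random

def P(level, u):
    """Level-l approximation of u*exp(-1): explicit Euler for dx/dt = -x
    on [0, 1] with 2**(level+1) steps, started from the random input u."""
    n = 2 ** (level + 1)
    h = 1.0 / n
    x = u
    for _ in range(n):
        x -= h * x
    return x

def mlmc(n_samples_per_level, seed=0):
    """Multilevel Monte Carlo estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}].  Each correction term evaluates the
    SAME random input u at both levels, so its variance shrinks with level."""
    rng = random.Random(seed)
    est = 0.0
    for level, n in enumerate(n_samples_per_level):
        s = 0.0
        for _ in range(n):
            u = rng.random()
            s += P(level, u) - (P(level - 1, u) if level > 0 else 0.0)
        est += s / n
    return est

# Many cheap samples at the coarsest level, few expensive ones at fine levels.
est = mlmc([20000, 4000, 800, 160])
exact = 0.5 * math.exp(-1.0)   # E[U * exp(-1)] for U ~ Uniform(0, 1)
```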

  19. Status of Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time

  20. Monte Carlo simulations in skin radiotherapy

    International Nuclear Information System (INIS)

    Sarvari, A.; Jeraj, R.; Kron, T.

    2000-01-01

    The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used in skin radiotherapy. In the applicator a radioactive source is positioned close to the skin. Without a filter the resulting dose distribution would be highly nonuniform, whereas high uniformity is usually required. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. A ¹⁹²Ir high dose rate photon source was used, and all necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially non-uniform dose distributions became uniform to within a percent when the filter calculated by this procedure was applied. (author)

  1. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

    An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so that a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.

  2. Monte Carlo simulation of gas Cerenkov detectors

    International Nuclear Information System (INIS)

    Mack, J.M.; Jain, M.; Jordan, T.M.

    1984-01-01

    Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier

  3. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
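
    The idea of testing a stochastic program statistically rather than by exact comparison can be sketched with a z-test wrapped around a toy Monte Carlo estimator; `mc_pi`, `z_test` and the 3-sigma threshold below are illustrative choices, not the authors' test framework:

```python
import math
import random

def mc_pi(n, seed):
    """Plain Monte Carlo estimate of pi from hits in the unit quarter-disc."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

def z_test(estimate, reference, std_error, z_crit=3.0):
    """Two-sided z-test: accept the run when the estimate lies within
    z_crit standard errors of the reference -- a stochastic analogue of
    an exact-equality unit-test assertion."""
    return abs(estimate - reference) <= z_crit * std_error

n = 100000
est = mc_pi(n, seed=1)
p = math.pi / 4.0                         # hit probability per sample
se = 4.0 * math.sqrt(p * (1.0 - p) / n)   # standard error of the estimator
ok = z_test(est, math.pi, se)             # falsely rejects ~0.3% of correct runs
```

A correct implementation still fails such a test occasionally, so in practice the threshold trades false alarms against sensitivity to real bugs.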

  4. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  5. Topological zero modes in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dilger, H.

    1994-08-01

    We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)

  6. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve

    2011-01-01

    ""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  7. The Lund Monte Carlo for jet fragmentation

    International Nuclear Information System (INIS)

    Sjoestrand, T.

    1982-03-01

    We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour-singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)

  8. Monte Carlo methods for preference learning

    DEFF Research Database (Denmark)

    Viappiani, P.

    2012-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.

  9. Monte Carlo methods for shield design calculations

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1974-01-01

    A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)

  10. General purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.

    1983-01-01

    A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations

  11. Introduction to the Monte Carlo methods

    International Nuclear Information System (INIS)

    Uzhinskij, V.V.

    1993-01-01

    Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
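
    Two of the techniques mentioned, the inverse transformation method and the ejection (rejection) method, can be sketched as follows. The target distributions (an exponential and a semicircle density) are illustrative examples, not the ones from the lectures:

```python
import math
import random

rng = random.Random(42)

def sample_exponential(lam):
    """Inverse transformation method: for the exponential CDF
    F(x) = 1 - exp(-lam*x), solving F(x) = u gives x = -ln(1-u)/lam."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

def sample_semicircle():
    """Ejection (rejection) method for p(x) proportional to sqrt(1 - x^2)
    on [-1, 1]: propose uniformly in the bounding box and keep only
    points falling under the curve."""
    while True:
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(0.0, 1.0)
        if y <= math.sqrt(1.0 - x * x):
            return x

xs = [sample_exponential(2.0) for _ in range(50000)]
mean = sum(xs) / len(xs)   # should be close to 1/lam = 0.5
```

The inverse transform needs an invertible CDF; the rejection method only needs the density up to a constant, at the cost of discarded proposals.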

  12. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...

  13. Monte Carlo codes use in neutron therapy

    International Nuclear Information System (INIS)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.

    1998-01-01

    Monte Carlo calculation codes allow to study accurately all the parameters relevant to radiation effects, like the dose deposition or the type of microscopic interactions, through one by one particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutrons irradiations. (authors)

  14. Quantum Monte Carlo calculations of light nuclei

    International Nuclear Information System (INIS)

    Pandharipande, V. R.

    1999-01-01

    Quantum Monte Carlo methods provide an essentially exact way to calculate various properties of nuclear bound, and low energy continuum states, from realistic models of nuclear interactions and currents. After a brief description of the methods and modern models of nuclear forces, we review the results obtained for all the bound, and some continuum states of up to eight nucleons. Various other applications of the methods are reviewed along with future prospects

  15. Monte-Carlo simulation of electromagnetic showers

    International Nuclear Information System (INIS)

    Amatuni, Ts.A.

    1984-01-01

    The universal ELSS-1 program for Monte Carlo simulation of high energy electromagnetic showers in homogeneous absorbers of arbitrary geometry is written. The major processes and effects of electron and photon interaction with matter, particularly the Landau-Pomeranchuk-Migdal effect, are taken into account in the simulation procedures. The simulation results are compared with experimental data. Some characteristics of shower detectors and electromagnetic showers for energies up to 1 TeV are calculated

  16. Monte Carlo simulation of Touschek effect

    Directory of Open Access Journals (Sweden)

    Aimin Xiao

    2010-07-01

    We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.

  17. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    OpenAIRE

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power...
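
    The simulate-then-test recipe behind such power analyses can be sketched on the simplest possible case, a one-sample z-test with known variance; `mc_power` and its parameters are illustrative and far simpler than the mediational models treated in the paper:

```python
import math
import random

def mc_power(effect, n, n_sims=2000, seed=0):
    """Monte Carlo power estimate for a one-sample two-sided z-test of
    mean = 0 with known unit variance: simulate many datasets under the
    assumed effect size, run the test on each, and report the rejection
    rate.  The same recipe extends to mediation and latent variable
    models where no closed-form power formula exists."""
    rng = random.Random(seed)
    z_crit = 1.96                           # two-sided, alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        xs = [rng.gauss(effect, 1.0) for _ in range(n)]
        z = (sum(xs) / n) * math.sqrt(n)    # z statistic, sigma known = 1
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims

# A medium effect (0.5 sd) with n = 32 gives roughly 80% power analytically,
# so the simulated rejection rate should land near that value.
power = mc_power(effect=0.5, n=32)
```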

  18. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. One often has to deal with rare events, and the use of a variance reduction technique is then almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. In practice this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
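
    The exponential biasing of an exponentially distributed phenomenon can be sketched as importance sampling of a rare event. The function below and the specific rate values are illustrative; the paper's rule for choosing the optimal biasing parameter is not reproduced here:

```python
import math
import random

def tail_prob_biased(lam, threshold, n, bias_rate, seed=0):
    """Importance-sampling estimate of the rare-event probability
    P(X > threshold) for X ~ Exp(lam).  Samples are drawn from an
    exponentially biased density q with rate bias_rate < lam, which
    pushes them into the tail, and each hit is reweighted by the
    likelihood ratio p(x)/q(x), keeping the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = -math.log(1.0 - rng.random()) / bias_rate   # draw from q
        if x > threshold:
            # p(x)/q(x) = (lam/bias_rate) * exp((bias_rate - lam) * x)
            total += (lam / bias_rate) * math.exp((bias_rate - lam) * x)
    return total / n

exact = math.exp(-20.0)   # P(X > 20) for lam = 1, about 2e-9
est = tail_prob_biased(lam=1.0, threshold=20.0, n=20000, bias_rate=0.1)
# An unbiased analog estimator with 20000 samples would almost surely score
# zero hits; the biased one resolves the probability to a few percent.
```

Choosing `bias_rate` too aggressively inflates the weight variance, which is exactly why an a priori rule for the biasing parameter is valuable.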

  19. Lattice gauge theories and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Rebbi, C.

    1981-11-01

    After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to basonic matter (Higgs) fields, and systems with fermions
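
    The Metropolis accept/reject machinery underlying such simulations can be sketched on a 2D Ising model rather than a gauge theory; `metropolis_ising` and its parameter values are illustrative:

```python
import math
import random

def metropolis_ising(L, beta, n_sweeps, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model on an L x L periodic
    lattice: propose a local change, accept with min(1, exp(-beta*dE)).
    The accept/reject step is the same one used to update link variables
    in lattice gauge simulations, with spins standing in for the gauge
    degrees of freedom.  Returns average |magnetization| after burn-in."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nn            # cost of flipping (i, j)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        if sweep >= n_sweeps // 2:                 # discard thermalization
            m = sum(map(sum, spins)) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)

# In the ordered (low-temperature) phase the magnetization stays near 1:
m_ordered = metropolis_ising(L=8, beta=0.6, n_sweeps=400)
```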

  20. Estimation of the dose deposited by electron beams in radiotherapy in voxelised phantoms using the Monte Carlo simulation platform GATE based on GEANT4 in a grid environment

    International Nuclear Information System (INIS)

    Perrot, Y.

    2011-01-01

    Radiation therapy treatment planning requires accurate determination of absorbed dose in the patient. Monte Carlo simulation is the most accurate method for solving the transport problem of particles in matter. This thesis is the first study dealing with the validation of the Monte Carlo simulation platform GATE (GEANT4 Application for Tomographic Emission), based on GEANT4 (Geometry And Tracking) libraries, for the computation of absorbed dose deposited by electron beams. This thesis aims at demonstrating that GATE/GEANT4 calculations are able to reach treatment planning requirements in situations where analytical algorithms are not satisfactory. The goal is to prove that GATE/GEANT4 is useful for treatment planning using electrons and competes with well validated Monte Carlo codes. This is demonstrated by the simulations with GATE/GEANT4 of realistic electron beams and electron sources used for external radiation therapy or targeted radiation therapy. The computed absorbed dose distributions are in agreement with experimental measurements and/or calculations from other Monte Carlo codes. Furthermore, guidelines are proposed to fix the physics parameters of the GATE/GEANT4 simulations in order to ensure the accuracy of absorbed dose calculations according to radiation therapy requirements. (author)

  1. Decomposition of a laser-Doppler spectrum for estimation of speed distribution of particles moving in an optically turbid medium: Monte Carlo validation study

    International Nuclear Information System (INIS)

    Liebert, A; Zolek, N; Maniewski, R

    2006-01-01

    A method for measuring the distribution of speed of particles moving in an optically turbid medium is presented. The technique is based on decomposition of the laser-Doppler spectrum. The theoretical background is shown together with the results of Monte Carlo simulations, which were performed to validate the proposed method. The laser-Doppler spectra were obtained by Monte Carlo simulations for assumed uniform and Gaussian speed distributions of particles moving in the turbid medium. The Doppler shift probability distributions were calculated by Monte Carlo simulations for several anisotropy factors of the medium, assuming the Henyey-Greenstein phase function. The results of the spectra decomposition show that the calculated speed distribution of moving particles matches well the distribution assumed for the Monte Carlo simulations. This result was obtained for spectra simulated in optical conditions in which a photon is scattered with a Doppler shift not more than once during its travel between the source and detector. The influence of multiple scattering of the photon is analysed and the prospect of spectrum decomposition under such conditions is considered. Potential applications and limitations of the method are discussed

  2. A Monte Carlo study of the impact of the choice of rectum volume definition on estimates of equivalent uniform doses and the volume parameter

    International Nuclear Information System (INIS)

    Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav

    2004-01-01

    Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships, in terms of equivalent uniform dose, between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, although there were distinct differences in the mean values obtained

  3. Monte Carlo numerical study of lattice field theories

    International Nuclear Information System (INIS)

    Gan Cheekwan; Kim Seyong; Ohta, Shigemi

    1997-01-01

    The authors are interested in exact first-principles calculations of quantum field theories. For quantum chromodynamics (QCD) at low energy scales a nonperturbative method is needed, and the only known such method is the lattice method. The path integral can be evaluated by putting the system in a finite 4-dimensional volume and discretizing the space-time continuum into a finite set of points, a lattice. The continuum limit is taken by making the lattice infinitely fine. For such a finite-dimensional integral, a Monte Carlo numerical estimate of the path integral can be obtained. The calculation of light hadron masses in quenched lattice QCD with staggered quarks, a 3-dimensional Thirring model calculation, and the development of a self-test Monte Carlo method have been carried out using the RIKEN supercomputer. The motivation of this study, the lattice QCD formulation, the continuum limit, Monte Carlo updates, hadron propagators, light hadron masses, auto-correlation and source-size dependence are described for lattice QCD. The phase structure of the 3-dimensional Thirring model has been mapped for a small 8³ lattice. The self-test Monte Carlo method is also discussed. (K.I.)

  4. Sequential Monte Carlo Instant Radiosity.

    Science.gov (United States)

    Hedman, Peter; Karras, Tero; Lehtinen, Jaakko

    2017-05-01

    Instant Radiosity and its derivatives are interactive methods for efficiently estimating global (indirect) illumination. They represent the last indirect bounce of illumination before the camera as the composite radiance field emitted by a set of virtual point light sources (VPLs). In complex scenes, current algorithms suffer from a difficult combination of two issues: it remains a challenge to distribute VPLs in a manner that simultaneously gives a high-quality indirect illumination solution for each frame, and to do so in a temporally coherent manner. We address both issues by building, and maintaining over time, an adaptive and temporally coherent distribution of VPLs in locations where they bring indirect light to the image. We introduce a novel heuristic sampling method that strives to only move as few of the VPLs between frames as possible. The result is, to the best of our knowledge, the first interactive global illumination algorithm that works in complex, highly-occluded scenes, suffers little from temporal flickering, supports moving cameras and light sources, and is output-sensitive in the sense that it places VPLs in locations that matter most to the final result.

  5. Monte Carlo perturbation theory in neutron transport calculations

    International Nuclear Information System (INIS)

    Hall, M.C.G.

    1980-01-01

    The need to obtain sensitivities in complicated geometrical configurations has resulted in the development of Monte Carlo sensitivity estimation. A new method has been developed to calculate energy-dependent sensitivities of any number of responses in a single Monte Carlo calculation with a very small time penalty. This estimation typically increases the tracking time per source particle by about 30%. The method of estimation is explained. Sensitivities obtained are compared with those calculated by discrete ordinates methods. Further theoretical developments, such as second-order perturbation theory and application to k_eff calculations, are discussed. The application of the method to uncertainty analysis and to the analysis of benchmark experiments is illustrated. 5 figures

  6. Monte Carlo sampling of fission multiplicity

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
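
    The negative-tail bias and a mean-preserving offset correction can be sketched as follows. This is an illustrative reconstruction, not MCNP's implementation: it solves for the offset by bisection on the closed-form mean of a rectified Gaussian, and it works with continuous (unrounded) multiplicities; all function names are hypothetical:

```python
import math
import random

def rectified_mean(m, s):
    """E[max(0, X)] for X ~ N(m, s), in closed form via the error function:
    m*Phi(m/s) + s*phi(m/s)."""
    z = m / s
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return m * Phi + s * phi

def shifted_mean(nu_bar, s):
    """Bisect for the Gaussian mean m such that clipping the negative tail
    still reproduces nu_bar, i.e. rectified_mean(m, s) = nu_bar.  The
    correction offset is then c = nu_bar - m."""
    lo, hi = nu_bar - 10.0 * s, nu_bar
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rectified_mean(mid, s) < nu_bar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mean_sampled_nu(nu_bar, s, n, corrected, seed=0):
    """Average of n Gaussian multiplicity draws with the negative tail
    clipped to zero, with or without the mean-preserving offset."""
    rng = random.Random(seed)
    m = shifted_mean(nu_bar, s) if corrected else nu_bar
    return sum(max(0.0, rng.gauss(m, s)) for _ in range(n)) / n

biased = mean_sampled_nu(1.0, 1.0, 50000, corrected=False)  # clipping inflates the mean
fixed = mean_sampled_nu(1.0, 1.0, 50000, corrected=True)    # offset restores nu_bar
```

A production code would additionally round the clipped draw to an integer multiplicity and account for the bias that rounding introduces.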

  7. Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

    KAUST Repository

    Haji Ali, Abdul Lateef; Tempone, Raul

    2017-01-01

    of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting

  8. Estimation of children's radiation dose from cardiac catheterisations, performed for the diagnosis or the treatment of a congenital heart disease using TLD dosimetry and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Yakoumakis, E N; Gialousis, G I; Papadopoulou, Despina; Makri, Triantafillia; Pappouli, Zografia; Yakoumakis, Nikolaos; Papagiannis, Panayotis; Georgiou, Evangelos

    2009-01-01

    Entrance surface radiation doses were measured with thermoluminescent dosimeters for 98 children who were referred to a cardiology department for the diagnosis or the treatment of a congenital heart disease. Additionally, all the radiographic parameters were recorded and Monte Carlo simulations were performed for the estimation of entrance surface dose to effective dose conversion factors, in order to further calculate the effective dose for each child. For diagnostic catheterisations the values ranged from 0.16 to 14.44 mSv, with an average of 3.71 mSv, and for therapeutic catheterisations the values ranged from 0.38 to 25.01 mSv, with an average of 5 mSv. Effective doses were estimated for diagnostic procedures and interventional procedures performed for the treatment of five different heart diseases: (a) atrial septal defect (ASD), (b) ventricular septal defect (VSD), (c) patent ductus arteriosus (PDA), (d) aorta coarctation and (e) pulmonary stenosis. The high levels of radiation exposure are, however, balanced by the advantages of cardiac catheterisations, such as the avoidance of surgical closure and the need for shorter or even no hospitalisation.

  9. Estimation of children's radiation dose from cardiac catheterisations, performed for the diagnosis or the treatment of a congenital heart disease using TLD dosimetry and Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakoumakis, E N; Gialousis, G I; Papadopoulou, Despina; Makri, Triantafillia; Pappouli, Zografia; Yakoumakis, Nikolaos; Papagiannis, Panayotis; Georgiou, Evangelos [Medical Physics Department, University of Athens, 75 Mikras Asias Street, Athens 11527 (Greece)

    2009-06-15

    Entrance surface radiation doses were measured with thermoluminescent dosimeters for 98 children who were referred to a cardiology department for the diagnosis or the treatment of a congenital heart disease. Additionally, all the radiographic parameters were recorded and Monte Carlo simulations were performed for the estimation of entrance surface dose to effective dose conversion factors, in order to further calculate the effective dose for each child. For diagnostic catheterisations the values ranged from 0.16 to 14.44 mSv, with an average of 3.71 mSv, and for therapeutic catheterisations the values ranged from 0.38 to 25.01 mSv, with an average of 5 mSv. Effective doses were estimated for diagnostic procedures and interventional procedures performed for the treatment of five different heart diseases: (a) atrial septal defect (ASD), (b) ventricular septal defect (VSD), (c) patent ductus arteriosus (PDA), (d) aorta coarctation and (e) pulmonary stenosis. The high levels of radiation exposure are, however, balanced by the advantages of cardiac catheterisations, such as the avoidance of surgical closure and the need for shorter or even no hospitalisation.

  10. Estimation of the heat generation in vitrified waste product and shield thickness of the cask for the transportation of vitrified waste product using Monte Carlo technique

    International Nuclear Information System (INIS)

    Deepa, A.K.; Jakhete, A.P.; Mehta, D.; Kaushik, C.P.

    2011-01-01

    High Level Liquid Waste (HLW) generated during the reprocessing of spent fuel contains most of the radioactivity present in the spent fuel, resulting in the need for isolation and surveillance over an extended period of time. The major components of HLW are corrosion products; fission products such as 137Cs, 90Sr, 106Ru, 144Ce and 125Sb; actinides; and various chemicals used during reprocessing. Fresh HLW, with an activity concentration of around 100 Ci/l, is to be vitrified into borosilicate glass and packed in canisters, which are placed in stainless steel overpacks for better confinement. These overpacks contain around 0.7 million curies of activity. Characterisation of the activity in HLW and the activity profiles of the radionuclides for various cooling periods set the base for the study. For transporting the vitrified waste product (VWP), the two most important parameters are the shield thickness of the transportation cask and the heat generation in the waste product. This paper describes the methodology used to estimate the lead thickness of the transportation cask using the Monte Carlo technique. Heat generation due to the decay of fission products raises the temperature of the vitrified waste product during interim storage and disposal. Since glass does not have a very high thermal conductivity, the temperature difference between the canister and its surroundings is significant in view of possible temperature-driven devitrification of the VWP. The heat generation in the canister and in the overpack containing the vitrified glass is also estimated using MCNP. (author)

  11. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following topics are discussed: fusion benchmark experiments, the design of ITER, experiment analyses of the fast critical assembly, core analyses of the JMTR, simulation of a pulsed neutron experiment, core analyses of the HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations for the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  12. Suppression of the initial transient in Monte Carlo criticality simulations; Suppression du regime transitoire initial des simulations Monte-Carlo de criticite

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on the characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to assess the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies in these tests are selected and used to improve industrial Monte Carlo criticality calculations. (author)
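
The transient-suppression idea can be illustrated with a simplified stand-in for the thesis's Brownian-bridge-inspired tests: a standardized CUSUM (bridge) statistic applied to progressively truncated k-effective sequences. The 1.36 threshold is roughly the 95% point of the Kolmogorov distribution, and the synthetic sequence is invented for illustration:

```python
import numpy as np

def bridge_stat(x):
    """Standardized CUSUM bridge of a sequence; large values indicate drift."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())
    return np.abs(y).max() / (x.std(ddof=1) * np.sqrt(len(x)))

def truncate_transient(keff, threshold=1.36, step=10):
    """Discard leading cycles until the remainder passes the stationarity test."""
    n = len(keff)
    for start in range(0, n // 2, step):
        if bridge_stat(keff[start:]) < threshold:
            return start
    return n // 2  # fall back to discarding half the cycles

# synthetic cycle k-effective sequence: decaying initial bias plus noise
rng = np.random.default_rng(0)
cycles = 2000
keff = (1.0 + 0.05 * np.exp(-np.arange(cycles) / 50.0)
            + 0.01 * rng.standard_normal(cycles))
cut = truncate_transient(keff)
print(cut, keff[cut:].mean())  # biased early cycles removed before averaging
```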

  13. Monte Carlo evaluation of derivative-based global sensitivity measures

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, S. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)], E-mail: s.kucherenko@ic.ac.uk; Rodriguez-Fernandez, M. [Process Engineering Group, Instituto de Investigaciones Marinas, Spanish Council for Scientific Research (C.S.I.C.), C/ Eduardo Cabello, 6, 36208 Vigo (Spain); Pantelides, C.; Shah, N. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)

    2009-07-15

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.

  14. Monte Carlo evaluation of derivative-based global sensitivity measures

    International Nuclear Information System (INIS)

    Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N.

    2009-01-01

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.
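
A DGSM of the form nu_i = E[(df/dx_i)^2] over the unit hypercube can be estimated by plain MC sampling of finite-difference derivatives; a minimal sketch on a toy linear model (not the paper's test functions):

```python
import numpy as np

def dgsm(f, dim, n=4096, h=1e-5, seed=1):
    """Estimate nu_i = E[(df/dx_i)^2] over the unit hypercube by MC sampling
    of central finite differences (a plain-MC stand-in for the method)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))
    nu = np.empty(dim)
    for i in range(dim):
        e = np.zeros(dim)
        e[i] = h
        g = (f(x + e) - f(x - e)) / (2.0 * h)  # derivative along coordinate i
        nu[i] = np.mean(g ** 2)
    return nu

# toy model: f(x) = x0 + 2*x1, so the exact DGSM are (1, 4) and x2 is inert
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
print(dgsm(f, 3))  # approximately [1. 4. 0.]
```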

  15. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    Science.gov (United States)

    Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.

  16. Quantum Monte Carlo studies in Hamiltonian lattice gauge theory

    International Nuclear Information System (INIS)

    Hamer, C.J.; Samaras, M.; Bursill, R.J.

    2000-01-01

    Full text: The application of Monte Carlo methods to the 'Hamiltonian' formulation of lattice gauge theory has been somewhat neglected, and lags at least ten years behind the classical Monte Carlo simulations of Euclidean lattice gauge theory. We have applied a Green's Function Monte Carlo algorithm to lattice Yang-Mills theories in the Hamiltonian formulation, combined with a 'forward-walking' technique to estimate expectation values and correlation functions. In this approach, one represents the wave function in configuration space by a discrete ensemble of random walkers, and application of the time development operator is simulated by a diffusion and branching process. The approach has been used to estimate the ground-state energy and Wilson loop values in the U(1) theory in (2+1)D, and the SU(3) Yang-Mills theory in (3+1)D. The finite-size scaling behaviour has been explored, and agrees with the predictions of effective Lagrangian theory, and weak-coupling expansions. Crude estimates of the string tension are derived, which agree with previous results at intermediate couplings; but more accurate results for larger loops will be required to establish scaling behaviour at weak couplings. A drawback to this method is that it is necessary to introduce a 'trial' or 'guiding wave function' to guide the walkers towards the most probable regions of configuration space, in order to achieve convergence and accuracy. The 'forward-walking' estimates should be independent of this guidance, but in fact for the SU(3) case they turn out to be sensitive to the choice of trial wave function. It would be preferable to use some sort of Metropolis algorithm instead to produce a correct distribution of walkers: this may point in the direction of a Path Integral Monte Carlo approach

  17. Quantum computational finance: Monte Carlo pricing of financial derivatives

    OpenAIRE

    Rebentrost, Patrick; Gupt, Brajesh; Bromley, Thomas R.

    2018-01-01

    Financial derivatives are contracts that can have a complex payoff dependent upon underlying benchmark assets. In this work, we present a quantum algorithm for the Monte Carlo pricing of financial derivatives. We show how the relevant probability distributions can be prepared in quantum superposition, the payoff functions can be implemented via quantum circuits, and the price of financial derivatives can be extracted via quantum measurements. We show how the amplitude estimation algorithm can...
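
For orientation, the classical Monte Carlo estimator that amplitude estimation quadratically speeds up looks like the following sketch: a plain geometric-Brownian-motion pricer for a European call (parameters are illustrative, not from the paper):

```python
import numpy as np

def mc_european_call(s0, strike, r, sigma, t, n=200_000, seed=7):
    """Classical Monte Carlo price of a European call under geometric
    Brownian motion: discounted mean of the simulated payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(s_t - strike, 0.0).mean()

price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(price)  # close to the Black-Scholes value of about 10.45
```

The classical estimator's error shrinks as 1/sqrt(n); the quantum algorithm's amplitude estimation step achieves the same accuracy with roughly sqrt(n) oracle calls.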

  18. A Monte Carlo-based method to estimate radiation dose from spiral CT: from phantom testing to patient-specific models

    International Nuclear Information System (INIS)

    Jarry, G; De Marco, J J; Beifuss, U; Cagnon, C H; McNitt-Gray, M F

    2003-01-01

    The purpose of this work is to develop and test a method to estimate the relative and absolute absorbed radiation dose from axial and spiral CT scans using a Monte Carlo approach. Initial testing was done in phantoms and preliminary results were obtained from a standard mathematical anthropomorphic model (MIRD V) and voxelized patient data. To accomplish this we have modified a general purpose Monte Carlo transport code (MCNP4B) to simulate the CT x-ray source and movement, and then to calculate absorbed radiation dose in desired objects. The movement of the source in either axial or spiral modes was modelled explicitly while the CT system components were modelled using published information about x-ray spectra as well as information provided by the manufacturer. Simulations were performed for single axial scans using the head and body computed tomography dose index (CTDI) polymethylmethacrylate phantoms at both central and peripheral positions for all available beam energies and slice thicknesses. For comparison, corresponding physical measurements of CTDI in phantom were made with an ion chamber. To obtain absolute dose values, simulations and measurements were performed in air at the scanner isocentre for each beam energy. To extend the verification, the CT scanner model was applied to the MIRD V model and compared with published results using similar technical factors. After verification of the model, the generalized source was simulated and applied to voxelized models of patient anatomy. The simulated and measured absolute dose data in phantom agreed to within 2% for the head phantom and within 4% for the body phantom at 120 and 140 kVp; this extends to 8% for the head and 9% for the body phantom across all available beam energies and positions. For the head phantom, the simulated and measured absolute dose data agree to within 2% across all slice thicknesses at 120 kVp. Our results in the MIRD phantom agree within 11% of all the different organ dose values

  19. Investigating the impossible: Monte Carlo simulations

    International Nuclear Information System (INIS)

    Kramer, Gary H.; Crowley, Paul; Burns, Linda C.

    2000-01-01

    Designing and testing new equipment can be an expensive and time-consuming process, or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment from being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm), with predictions of how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data are used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that thickness, for a fixed detector diameter of either 70 mm or 80 mm, is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases, as expected. The simulations predict that the MDA of either the 70 mm or 80 mm diameter detectors does not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thicknesses. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger-area Ge detectors may not be justified by the slight improvement predicted in the MDA. (author)
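
MDA predictions of this kind are conventionally based on Currie's a-priori detection-limit formula, which makes the dependence on counting efficiency explicit; a sketch with invented numbers (not the study's detectors):

```python
import math

def currie_mda(background_counts, efficiency, live_time_s, gamma_yield):
    """Currie's a-priori minimum detectable activity (Bq):
    detection limit L_D = 2.71 + 4.65*sqrt(B) counts, divided by
    efficiency x counting time x gamma emission probability."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * live_time_s * gamma_yield)

# invented numbers: halving the counting efficiency doubles the MDA,
# the kind of trend the simulated detector comparison explores
print(currie_mda(400, 0.05, 3600, 0.85))
print(currie_mda(400, 0.025, 3600, 0.85))
```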

  20. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures
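
The lattice-partitioning idea can be sketched for the nearest-neighbour Ising model: splitting the lattice into two checkerboard sublattices whose sites do not interact lets all sites of one colour be updated simultaneously, which is what maps onto a SIMD processor array. A vectorized NumPy stand-in (parameters illustrative):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2-D nearest-neighbour Ising model,
    updating the two checkerboard sublattices in turn: same-colour sites
    do not interact, so each half-sweep is a single vectorized update."""
    ii, jj = np.indices(spins.shape)
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        d_e = 2.0 * spins * nb  # energy change if each spin were flipped
        accept = rng.random(spins.shape) < np.exp(-beta * np.clip(d_e, 0.0, None))
        spins = np.where(mask & accept, -spins, spins)
    return spins

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):
    s = checkerboard_sweep(s, beta=1.0, rng=rng)  # cold: well below T_c
energy_per_site = -(s * np.roll(s, 1, 0) + s * np.roll(s, 1, 1)).sum() / s.size
print(energy_per_site)  # well below 0, approaching the ground-state value -2
```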

  1. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    Science.gov (United States)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2 θ ≲ 10^{-6}, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2 θ ~ 5 × 10^{-9} at 1σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  2. Monte Carlo study of the multiquark systems

    International Nuclear Information System (INIS)

    Kerbikov, B.O.; Polikarpov, M.I.; Zamolodchikov, A.B.

    1986-01-01

    Random walks have been used to calculate the ground-state energies of systems of N = 3, 6, 9, 12 quarks. Multiquark states with N > 3 are unstable with respect to spontaneous dissociation into color singlet hadrons. A modified Green's function Monte Carlo algorithm, which proved to be simpler and much more accurate than conventional few-body methods, has been employed. In contrast to other techniques, the same equations are used for any number of particles, while the computer time increases only linearly with the number of particles

  3. ATLAS Monte Carlo tunes for MC09

    CERN Document Server

    The ATLAS collaboration

    2010-01-01

    This note describes the ATLAS tunes of the underlying event and minimum bias description for the main Monte Carlo generators used in the MC09 production. For the main shower generators, pythia and herwig (with jimmy), the MRST LO* parton distribution functions (PDFs) were used for the first time in ATLAS. Special studies on the performance of these conceptually new PDFs for high-pT physics processes at LHC energies are presented. In addition, a tune of jimmy for CTEQ6.6 is presented, for use with MC@NLO.

  4. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec
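
The book's basic objects, simulating a chain and recognizing its long-run behaviour, can be illustrated in a few lines; the two-state chain below is invented, with stationary distribution (2/7, 5/7):

```python
import numpy as np

def empirical_occupation(P, start, n_steps, rng):
    """Simulate a finite-state Markov chain with transition matrix P and
    return the empirical fraction of time spent in each state."""
    counts = np.zeros(P.shape[0])
    state = start
    for _ in range(n_steps):
        state = rng.choice(P.shape[0], p=P[state])
        counts[state] += 1
    return counts / n_steps

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
freq = empirical_occupation(P, 0, 100_000, np.random.default_rng(2))
print(freq)  # approaches the stationary distribution [0.2857, 0.7143]
```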

  5. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...

  6. Monte Carlo method in radiation transport problems

    International Nuclear Information System (INIS)

    Dejonghe, G.; Nimal, J.C.; Vergnaud, T.

    1986-11-01

    In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented; two of the motivations are the advent of vector computing and photon and neutron transport in void media [fr
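
The play/score vocabulary and the need for biasing can be illustrated with a toy tally: an analog play scores a rare event directly, while a biased play samples from a stretched distribution and corrects each score with a statistical weight. The toy problem below is invented, not from the report:

```python
import math
import numpy as np

def analog_and_biased(n=100_000, seed=3):
    """Score the same tally two ways: an analog play, and a biased play
    whose scores carry statistical weights (true pdf / sampling pdf)."""
    rng = np.random.default_rng(seed)
    # toy tally (invented): probability that an Exp(1) path length exceeds 5
    x = rng.exponential(1.0, n)                # analog play
    analog = (x > 5.0).mean()
    y = rng.exponential(3.0, n)                # biased play: stretched pdf
    w = np.exp(-y) / (np.exp(-y / 3.0) / 3.0)  # weight = true pdf / biased pdf
    biased = (w * (y > 5.0)).mean()
    return analog, biased

analog, biased = analog_and_biased()
print(analog, biased, math.exp(-5))  # both estimate exp(-5) ~ 0.0067
```

With the same number of plays, the weighted estimator scores the tail far more often and therefore has a much smaller variance — the motivation for biasing the play.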

  7. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  8. MBR Monte Carlo Simulation in PYTHIA8

    Science.gov (United States)

    Ciesielski, R.

    We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.

  9. Spectral functions from Quantum Monte Carlo

    International Nuclear Information System (INIS)

    Silver, R.N.

    1989-01-01

    In his review, D. Scalapino identified two serious limitations on the application of Quantum Monte Carlo (QMC) methods to the models of interest in High T c Superconductivity (HTS). One is the ''sign problem''. The other is the ''analytic continuation problem'', which is how to extract electron spectral functions from QMC calculations of the imaginary time Green's functions. Throughout this Symposium on HTS, the spectral functions have been the focus for the discussion of normal state properties including the applicability of band theory, Fermi liquid theory, marginal Fermi liquids, and novel non-perturbative states. 5 refs., 1 fig

  10. An analysis of Monte Carlo tree search

    CSIR Research Space (South Africa)

    James, S

    2017-02-01

    Full Text Available. Steven James (University of the Witwatersrand, Johannesburg, South Africa), George Konidaris (Brown University, Providence RI 02912, USA) and Benjamin Rosman (University of the Witwatersrand / Council for Scientific and Industrial Research, Pretoria, South Africa). Abstract: Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications...

  11. Monte Carlo simulation for the transport beamline

    Energy Technology Data Exchange (ETDEWEB)

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.

  12. Monte Carlo simulation for the transport beamline

    International Nuclear Information System (INIS)

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.; Tramontana, A.

    2013-01-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery

  13. Diffusion quantum Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.

    1986-07-01

    A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation written with a shift in zero energy [E_T - V(R)] can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note, not psi^2) as a density of ''walks.'' The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs
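
The walk/branching picture can be sketched for the simplest case, a 1-D harmonic oscillator: walkers diffuse, and the rate term [E_T - V(x)] drives births and deaths, so the ground-state energy (0.5 in units with hbar = m = omega = 1) is recovered. This is a toy without importance sampling, so it is noisier than production diffusion Monte Carlo:

```python
import numpy as np

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01, seed=5):
    """Toy diffusion Monte Carlo for V(x) = x^2/2: walkers diffuse, the
    rate term [E_T - V(x)] drives birth/death branching, and E_T is steered
    to hold the population near its target; its average approaches the
    ground-state energy 0.5."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_walkers)
    e_t = 0.5
    history = []
    for _ in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)      # diffusion
        v = 0.5 * x * x
        copies = (np.exp((e_t - v) * dt) + rng.random(x.size)).astype(int)
        x = np.repeat(x, np.minimum(copies, 3))                # birth and death
        e_t += 0.1 * np.log(n_walkers / x.size)                # population control
        history.append(e_t)
    return float(np.mean(history[n_steps // 2:]))

e0 = dmc_harmonic()
print(e0)  # close to 0.5
```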

  14. Monte Carlo modelling for neutron guide losses

    International Nuclear Information System (INIS)

    Cser, L.; Rosta, L.; Toeroek, Gy.

    1989-09-01

    In modern research reactors, neutron guides are commonly used for beam conducting. The neutron guide is a well polished or equivalently smooth glass tube covered inside by a sputtered or evaporated film of natural Ni or the 58Ni isotope, where the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs.

  15. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    Science.gov (United States)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for whom destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k -SAT problems, use k -local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n -body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  16. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  17. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  18. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold; above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  19. Monte Carlo criticality analysis for dissolvers with neutron poison

    International Nuclear Information System (INIS)

    Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.

    1987-01-01

    Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined using the maximum cross section approach. A set of related effective multiplication factors (K eff ) are calculated by the Monte Carlo method for the three cases. The related numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)

  20. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case. [fr]

  1. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    Science.gov (United States)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions are described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable.

  2. Mesh-based weight window approach for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, L.; Gardner, R.P.

    1997-01-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
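
    The splitting and Russian roulette games driven by a weight window can be sketched in a few lines. This is an illustrative fragment, not the MCNP implementation; in practice the bounds `w_low`/`w_high` come from the importance map for the particle's current cell and energy:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Return the list of surviving particle weights after the window
    check: split above the window, Russian roulette below it, pass
    through unchanged inside it.  Both games preserve the expected
    weight, so the estimator stays unbiased.
    """
    if weight > w_high:                       # split into n copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # Russian roulette
        w_survive = (w_low + w_high) / 2.0
        if rng() < weight / w_survive:        # survives with prob w/w_survive
            return [w_survive]                # ... at the boosted weight
        return []                             # killed
    return [weight]
```

    Note the bookkeeping: a split conserves total weight exactly, while roulette conserves it only in expectation (survival probability times boosted weight equals the original weight).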

  3. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way to have a strong correlation between secondary neutrons in both the systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to be able to estimate also the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)

  4. Monte Carlo simulations of plutonium gamma-ray spectra

    International Nuclear Information System (INIS)

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-01-01

    Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function for a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes, for the x-rays, and Gaussian distributions. The MGA code determined the Pu isotopes and specific power of this calculated spectrum and compared it to a similar analysis on a measured spectrum

  5. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-06

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
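
    The first-order level differences that MIMC generalizes can be illustrated with a plain MLMC sketch for a scalar SDE. This is a toy example with assumed drift/volatility parameters, not the authors' implementation; MIMC would difference over several discretization indices jointly rather than the single level index used here:

```python
import math
import random

def mlmc_estimate(L, samples_per_level, T=1.0, a=0.05, b=0.2):
    """Multilevel Monte Carlo estimate of E[X_T] for the toy SDE
    dX = a X dt + b X dW via Euler-Maruyama with step T / 2^level and
    the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Coarse and fine paths share the same Brownian increments, which is
    what makes the correction terms low-variance.
    """
    total = 0.0
    for level in range(L + 1):
        nf = 2 ** level                 # fine steps on this level
        h = T / nf
        acc = 0.0
        for _ in range(samples_per_level):
            dw = [math.sqrt(h) * random.gauss(0.0, 1.0) for _ in range(nf)]
            xf = 1.0                    # fine Euler path
            for k in range(nf):
                xf += a * xf * h + b * xf * dw[k]
            if level == 0:
                acc += xf
            else:
                xc = 1.0                # coupled coarse path, step 2h
                for k in range(0, nf, 2):
                    xc += a * xc * 2 * h + b * xc * (dw[k] + dw[k + 1])
                acc += xf - xc
        total += acc / samples_per_level
    return total
```

    For this geometric Brownian motion the exact answer is E[X_T] = exp(aT), so the estimator can be checked directly; in a full MLMC/MIMC implementation the samples per level would additionally be optimized against the per-level variance and cost.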

  6. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul

    2016-01-01

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).

  7. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.

  8. Self-test Monte Carlo method

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1996-01-01

    The Self-Test Monte Carlo (STMC) method resolves the main problems in using algebraic pseudo-random numbers for Monte Carlo (MC) calculations: that they can interfere with MC algorithms and lead to erroneous results, and that such an error often cannot be detected without a known exact solution. STMC is based on the good randomness of about 10^10 bits available from physical noise or transcendental numbers like π = 3.14.... Various bit modifiers are available to get more bits for applications that demand more than 10^10 random bits, such as lattice quantum chromodynamics (QCD). These modifiers are designed so that a) each of them gives a bit sequence comparable in randomness to the original if used separately from the others, and b) their mutual interference when used jointly in a single MC calculation is adjustable. Intermediate data of the MC calculation itself are used to quantitatively test and adjust the mutual interference of the modifiers with respect to the MC algorithm. STMC is free of systematic error and gives reliable statistical errors. Also it can be easily implemented on vector and parallel supercomputers. (author)

  9. Quantum Monte Carlo for atoms and molecules

    International Nuclear Information System (INIS)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  10. Monte Carlo simulation of grain growth

    Directory of Open Access Journals (Sweden)

    Paulo Blikstein

    1999-07-01

    Understanding and predicting grain growth is meaningful in metallurgy. Monte Carlo methods have been used in computer simulations in many different fields of knowledge. Grain growth simulation using this method is especially attractive as the statistical behavior of the atoms is properly reproduced; microstructural evolution depends only on the real topology of the grains and not on any kind of geometric simplification. Computer simulation has the advantage of allowing the user to visualize the procedures graphically, even dynamically and in three dimensions. Single-phase alloy grain growth simulation was carried out by calculating the free energy of each atom in the lattice (with its present crystallographic orientation) and comparing this value to another one calculated with a different random orientation. When the resulting free energy is lower than or equal to the initial value, the new orientation replaces the former. The measure of time is the Monte Carlo Step (MCS), which involves a series of trials throughout the lattice. A very close relationship between experimental and theoretical values for the grain growth exponent (n) was observed.
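
    The acceptance rule described above (accept a random reorientation whenever the free energy does not increase) can be sketched as a zero-temperature Potts-model sweep. The lattice size, number of orientations q, and four-neighbour energy below are illustrative assumptions:

```python
import random

def mcs_grain_growth(lattice, q, n_mcs):
    """Toy Potts-model grain growth on a periodic 2-D lattice: the
    'energy' of a site is its count of unlike nearest neighbours, a
    trial random reorientation is accepted when the energy does not
    increase, and one Monte Carlo step (MCS) is one trial per lattice
    site on average.  Modifies `lattice` in place and returns it.
    """
    n = len(lattice)

    def site_energy(i, j, s):
        return sum(s != lattice[(i + di) % n][(j + dj) % n]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(n_mcs * n * n):
        i, j = random.randrange(n), random.randrange(n)
        trial = random.randrange(q)
        if site_energy(i, j, trial) <= site_energy(i, j, lattice[i][j]):
            lattice[i][j] = trial
    return lattice
```

    Because each accepted move never raises the site energy, the total number of mismatched bonds is non-increasing, which is what drives the grains to coarsen over successive MCS.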

  11. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef

    2015-01-07

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.

  12. Parallel Monte Carlo Search for Hough Transform

    Science.gov (United States)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing, in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization of the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a form of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
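
    The vote-counting formulation of the Hough Transform can be sketched with a brute-force accumulator; the hierarchical decomposition and Monte Carlo search studied in the paper are refinements on top of exactly this structure. Resolution parameters here are illustrative:

```python
import math

def hough_peak(points, n_theta=180, rho_max=100.0, n_rho=200):
    """Brute-force Hough accumulator for line detection: every point
    (x, y) votes for each cell (theta, rho) satisfying
    rho = x cos(theta) + y sin(theta).  Collinear points pile votes
    into a common cell, and the peak of the accumulator identifies
    the line.  Returns (theta of the peak, its vote count).
    """
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2.0 * rho_max) * n_rho)
            if 0 <= r < n_rho:
                acc[(t, r)] = acc.get((t, r), 0) + 1
    (t_best, _), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, votes
```

    Ten points on the horizontal line y = 5 all vote into the cell near theta = π/2, so the peak collects all ten votes; the cost of the exhaustive loop over every (point, theta) pair is what motivates the parallel and Monte Carlo search variants.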

  13. Monte Carlo simulation for radiographic applications

    International Nuclear Information System (INIS)

    Tillack, G.R.; Bellon, C.

    2003-01-01

    Standard radiography simulators are based on the attenuation law complemented by build-up factors (BUF) to describe the interaction of radiation with material. The assumption of BUF implies that scattered radiation only reduces the contrast in radiographic images. This simplification holds for a wide range of applications like weld inspection, as known from practical experience. But only a detailed description of the different underlying interaction mechanisms can explain effects like mottling or others that every radiographer has experienced in practice. Monte Carlo models can handle the primary and secondary interaction mechanisms contributing to the image formation process, like photon interactions (absorption, incoherent and coherent scattering including electron-binding effects, pair production) and electron interactions (electron tracing including X-ray fluorescence and Bremsstrahlung production). This opens up possibilities like the separation of influencing factors and the understanding of the functioning of intensifying screens used in film radiography. The paper discusses the opportunities in applying the Monte Carlo method to investigate special features in radiography in terms of selected examples. (orig.) [de]

  14. Odd-flavor Simulations by the Hybrid Monte Carlo

    CERN Document Server

    Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe

    2001-01-01

    The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of quark flavors in QCD. Simulations of odd-flavor QCD, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We make a comparison of the hybrid Monte Carlo algorithm and the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step size.

  15. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Brown, F.

    2007-01-01

    Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure of merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
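
    The acceleration can be illustrated on a deterministic matrix analogue of the power iteration. Iterating with (I - F/k_w)^(-1) F instead of F maps each eigenvalue λ to λ·k_w/(k_w - λ), so a Wielandt shift k_w chosen slightly above the dominant eigenvalue shrinks the dominance ratio and speeds up source convergence. The 2x2 "fission matrix" below is purely illustrative; a Monte Carlo code realizes the inverse implicitly by extending fission chains within a cycle rather than inverting a matrix:

```python
def wielandt_power_iteration(F, k_w, x, iters=50):
    """Power iteration on G = (I - F/k_w)^(-1) F for a 2x2 fission
    matrix F, returning (k_eff estimate, normalized source shape).
    G shares eigenvectors with F, so the fixed point is the fundamental
    mode of F itself; only the convergence rate changes.
    """
    a, b, c, d = F[0][0], F[0][1], F[1][0], F[1][1]
    # M = I - F/k_w, inverted directly for this 2x2 sketch
    m = [[1.0 - a / k_w, -b / k_w], [-c / k_w, 1.0 - d / k_w]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    for _ in range(iters):
        fx = [a * x[0] + b * x[1], c * x[0] + d * x[1]]
        y = [inv[0][0] * fx[0] + inv[0][1] * fx[1],
             inv[1][0] * fx[0] + inv[1][1] * fx[1]]
        s = abs(y[0]) + abs(y[1])
        x = [y[0] / s, y[1] / s]            # renormalize the source
    fx = [a * x[0] + b * x[1], c * x[0] + d * x[1]]
    return abs(fx[0]) + abs(fx[1]), x       # k as ||F x|| with ||x|| = 1
```

    For F = [[2, 1], [1, 2]] (eigenvalues 3 and 1) with k_w = 3.5, the shifted operator has eigenvalues 21 and 1.4, so the dominance ratio drops from 1/3 to 1/15 and the iteration locks onto k = 3 and the flat source shape in a handful of cycles.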

  16. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost

  17. Igo - A Monte Carlo Code For Radiotherapy Planning

    International Nuclear Information System (INIS)

    Goldstein, M.; Regev, D.

    1999-01-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to calculate the 3-D dose delivered correctly. Monte Carlo transport methods (especially the adjoint Monte Carlo) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IGO is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IGO transport code, the PIGO interface and some preliminary results.

  18. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

    A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, the ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to finite temperatures.

  19. Applications of the Monte Carlo method in radiation protection

    International Nuclear Information System (INIS)

    Kulkarni, R.N.; Prasad, M.A.

    1999-01-01

    This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection. It may be noted that an exhaustive review has not been attempted. The special advantage of the Monte Carlo method is first brought out. The fundamentals of the Monte Carlo method are next explained in brief, with special reference to two applications in radiation protection. Some current applications are briefly reported at the end as examples: medical radiation physics, microdosimetry, calculations of thermoluminescence intensity, and probabilistic safety analysis. The limitations of the Monte Carlo method have also been mentioned in passing. (author)

  20. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
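
    The ABC idea underlying ABC-SMC can be illustrated with its simplest variant, rejection ABC, on a toy inference problem (inferring a Gaussian mean); ABC-SMC refines this by propagating a particle population through a sequence of shrinking tolerances. All names and parameters below are illustrative, not the paper's metabolic model:

```python
import random
import statistics

def abc_rejection(data, prior_sample, simulate, eps, n_draws):
    """Rejection ABC: draw a parameter from the prior, simulate a data
    set of the same size, and keep the draw when the simulated summary
    statistic (here, the sample mean) lies within eps of the observed
    one.  The kept draws approximate the posterior without ever
    evaluating a likelihood.
    """
    obs = statistics.mean(data)
    kept = []
    for _ in range(n_draws):
        theta = prior_sample()
        sim = simulate(theta, len(data))
        if abs(statistics.mean(sim) - obs) < eps:
            kept.append(theta)
    return kept
```

    With data generated from a Gaussian of mean 3 and a broad uniform prior, the accepted draws cluster around 3; the low acceptance rate of plain rejection is precisely what the sequential (SMC) tolerance schedule is designed to mitigate.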

  1. Estimation of radiation dose and risk to children undergoing cardiac catheterization for the treatment of a congenital heart disease using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Yakoumakis, Emmanuel; Kostopoulou, Helen; Dimitriadis, Anastastios; Georgiou, Evaggelos [University of Athens, Medical Physics Department, Medical School, Athens (Greece); Makri, Triantafilia [' Agia Sofia' Hospital, Medical Physics Unit, Athens (Greece); Tsalafoutas, Ioannis [Anticancer-Oncology Hospital of Athens ' Agios Savvas' , Medical Physics Department, Athens (Greece)

    2013-03-15

    Children diagnosed with congenital heart disease often undergo cardiac catheterization for their treatment, which involves the use of ionizing radiation and therefore a risk of radiation-induced cancer. The purpose of this study was to calculate the effective dose (E) and equivalent organ doses (H_T) in those children and estimate the risk of exposure-induced death. Fifty-three children were divided into three groups: atrial septal defect (ASD), ventricular septal defect (VSD) and patent ductus arteriosus (PDA). In all procedures, the exposure conditions and the dose-area product meter readings were recorded for each individual acquisition. Monte Carlo simulations were run using the PCXMC 2.0 code and mathematical phantoms simulating a child's anatomy. The H_T values for all irradiated organs and the resulting E and risk of exposure-induced death values were calculated. The average dose-area product values were, respectively, 40 ± 12 Gy·cm² for the ASD, 17.5 ± 0.7 Gy·cm² for the VSD and 9.5 ± 1 Gy·cm² for the PDA group. The average E values were 40 ± 12, 22 ± 2.5 and 17 ± 3.6 mSv for the ASD, VSD and PDA groups, respectively. The respective estimated risk of exposure-induced death values per procedure were 0.109, 0.106 and 0.067%. Cardiac catheterizations in children involve a considerable risk of radiation-induced cancer that has to be further reduced. (orig.)

  2. SU-F-J-14: Kilovoltage Cone-Beam CT Dose Estimation of Varian On-Board Imager Using GMctdospp Monte Carlo Framework

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S; Rangaraj, D [Baylor Scott & White Health, Temple, TX (United States)

    2016-06-15

    Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, its imaging dose estimation is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp - an EGSnrc-based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM report 78. Then, the spectrum was utilized as an input spectrum in GMctdospp dose calculations. Both full and half bowtie pre-filters of the OBI system were created by using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) to ion chamber measurements for a static x-ray beam irradiation of a cuboid water phantom. Abdominal CBCT imaging doses were simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared to MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the water cuboid phantom were well matched within 6% except in a few areas - the left shoulder of the half bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework were found to be close to the MOSFET measurements and CS calculations within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated the Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were also evaluated in a 5-year-old anthropomorphic phantom. In a future study, various CBCT imaging protocols will be implemented and validated, and consequently patient CT images will be used to estimate CBCT imaging doses in patients.

  3. Estimation of Extreme Responses and Failure Probability of Wind Turbines under Normal Operation by Controlled Monte Carlo Simulation

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri

    … an alternative approach for estimation of the first excursion probability of any system is based on calculating the evolution of the Probability Density Function (PDF) of the process and integrating it on the specified domain. Clearly this provides the most accurate results among the three classes of the methods. … The solution of the Fokker-Planck-Kolmogorov (FPK) equation for systems governed by a stochastic differential equation driven by Gaussian white noise will give the sought time variation of the probability density function. However, the analytical solution of the FPK is available for only a few dynamic systems. … of the evolution of the PDF of a stochastic process; hence an alternative to the FPK. The considerable advantage of the introduced method over the FPK is that its solution does not require high computational cost, which extends its range of applicability to high-order structural dynamic problems. …

  4. Time delays between core power production and external detector response from Monte Carlo calculations

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1996-01-01

    One primary concern for design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between the core power production and the external detector response using Monte Carlo calculations and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between the core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor

  5. Applications to shielding design and others of Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Daiichiro [Mitsui Engineering and Shipbuiding Co., Ltd., Tokyo (Japan)

    2001-01-01

    One-dimensional or two-dimensional Sn computer codes (ANISN, DOT3.5, etc.) and point attenuation kernel integral codes (QAD, etc.) have been used widely for shielding design. Application examples of the Monte Carlo method, which can follow precisely the three-dimensional configuration of a shielding structure, are shown as follows: (1) The CASTER cask has a complex structure which consists of a large number of fuel baskets (stainless steel), neutron moderators (polyethylene rods), the body (cast iron), and cooling fins. The R-θ model of the Sn code DOT3.5 cannot follow closely the complex form of the polyethylene rods and fuel baskets. The Monte Carlo code MORSE is used to ascertain the calculation results of DOT3.5. The discrepancy between the calculation results of DOT3.5 and MORSE was within 10% for the dose rate at a distance of 1 m from the cask surface. (2) The dose rates of an iron cell at 10 cm above the floor are calculated by the codes QAD and MORSE. The reflected components of gamma rays caused by the auxiliary floor shield (lead) are analyzed by MORSE. (3) The Monte Carlo code MCNP4A is used for skyshine evaluation of the spent fuel carrier ship 'ROKUEIMARU'. The direct and skyshine components of the gamma-ray and neutron flux are estimated at the centers of the engine room and wheel house. The skyshine dose rate of the neutron flux is 5-15 times larger than that of the gamma rays. (M. Suetake)

  6. Development of fast and accurate Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Mori, Takamasa

    2001-01-01

    The development work of the fast and accurate Monte Carlo code MVP started at JAERI in the late 80s. From the beginning, the code was designed to utilize vector supercomputers and achieved a computation speed higher by a factor of 10 or more compared with conventional codes. In 1994, the first version of MVP was released together with cross section libraries based on JENDL-3.1 and JENDL-3.2. In 1996, a minor revision was made by adding several functions such as treatments of ENDF-B6 file 6 data, time dependent problems, and so on. Since 1996, several works have been carried out for the next version of MVP. The main works are (1) the development of the continuous energy Monte Carlo burn-up calculation code MVP-BURN, (2) the development of a system to generate cross section libraries at arbitrary temperature, and (3) the study on error estimations and their biases in Monte Carlo eigenvalue calculations. This paper summarizes the main features of MVP, results of recent studies, and future plans for MVP. (author)

  7. Automatic fission source convergence criteria for Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Chang Hyo

    2005-01-01

    The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one is apt to waste computing time because inactive cycle MC runs are used to elicit the fundamental-mode FSD only. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles one should conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs.
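
The abstract does not spell out the new criteria; a diagnostic widely used for this purpose (and only a stand-in here, not the paper's method) is the Shannon entropy of the mesh-binned fission source, watched cycle by cycle until it stabilizes. A minimal sketch:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission source distribution."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def entropy_converged(history, window=5, tol=0.05):
    """Declare convergence when the last `window` cycle entropies
    stay within a band of width `tol` (a simple plateau test)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol
```

One would append `shannon_entropy(cycle_counts)` after each inactive cycle and switch to active cycles once `entropy_converged` returns True; `window` and `tol` are assumed tuning knobs.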

  8. On-the-fly doppler broadening for Monte Carlo codes

    International Nuclear Information System (INIS)

    Yesilyurt, G.; Martin, W. R.; Brown, F. B.

    2009-01-01

    A methodology to allow on-the-fly Doppler broadening of neutron cross sections for use in Monte Carlo codes has been developed. The Monte Carlo code only needs to store 0 K cross sections for each isotope and the method will broaden the 0 K cross sections for any isotope in the library to any temperature in the range 77 K-3200 K. The methodology is based on a combination of Taylor series expansions and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U3O8 resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by a regression over the energy and temperature range of interest. Since the resonance parameters are a function of the neutron energy and target nuclide, the ψ and χ functions in the Adler-Adler multi-level resonance model can be represented by series expansions in temperature only, allowing the least number of terms to approximate the temperature dependent cross sections within a given accuracy. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent over the entire temperature range (77 K-3200 K) and energy range. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost, which was found to be less than 1%. (authors)
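
The ψ/χ series forms themselves are not given in the abstract, but the regression step can be illustrated generically. The sketch below is an assumption for illustration: it fits tabulated cross sections at one energy with a plain quadratic in temperature (rather than the paper's combined Taylor/asymptotic expansions) by least squares via the normal equations:

```python
def fit_quadratic(ts, sigmas):
    """Least-squares fit sigma(T) ~ a + b*T + c*T^2 via the 3x3
    normal equations, solved with Gaussian elimination."""
    s = [sum(t ** k for t in ts) for k in range(5)]   # power sums of T
    m = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = [sum(y * t ** k for t, y in zip(ts, sigmas)) for k in range(3)]
    for col in range(3):                               # partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 3):
                m[r][c] -= f * m[col][c]
            rhs[r] -= f * rhs[col]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                # back substitution
        a[r] = (rhs[r] - sum(m[r][c] * a[c] for c in range(r + 1, 3))) / m[r][r]
    return a

def broadened_sigma(coeffs, temp):
    """Evaluate the fitted series at an arbitrary temperature."""
    a, b, c = coeffs
    return a + b * temp + c * temp * temp
```

For numerical conditioning it helps to feed the fit a scaled temperature (e.g. T in units of 1000 K); the real method would use one such fit per energy point and per series region.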

  9. The iterative hopping expansion algorithm for Monte Carlo calculations with very light fermions

    International Nuclear Information System (INIS)

    Montvay, I.

    1985-03-01

    The number of numerical operations necessary for a Monte Carlo simulation with very light fermions (like u- and d-quarks in quantum chromodynamics) is estimated within the iterative hopping expansion method. (orig.)

  10. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
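
The Case-eigenfunction constraint is beyond a short sketch, but the zero-variance principle the paper starts from can be shown on a toy integral (nothing here is from the paper): for I = ∫₀¹ x dx, sampling from the exact importance density q(x) = 2x makes every history score exactly 0.5.

```python
import random

def mc_plain(n, rng):
    """Crude Monte Carlo for I = integral of x on [0,1], uniform samples."""
    return sum(rng.random() for _ in range(n)) / n

def mc_zero_variance(n, rng):
    """Importance-sampled estimate with the exact importance density
    q(x) = 2x, drawn by inverse CDF.  The score x * p(x) / q(x) = 1/2
    is identical for every history, i.e. the variance is zero."""
    total = 0.0
    for _ in range(n):
        x = (1.0 - rng.random()) ** 0.5   # in (0, 1], avoids x = 0
        total += x / (2.0 * x)            # constant score 0.5
    return total / n
```

In transport terms, q plays the role of the adjoint-flux-weighted sampling distribution; an iteratively improved importance function moves the estimator toward this constant-score limit.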

  11. Response decomposition with Monte Carlo correlated coupling

    International Nuclear Information System (INIS)

    Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L.

    2001-01-01

    Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)

  12. Response decomposition with Monte Carlo correlated coupling

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.

    2001-07-01

    Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)

  13. Monte Carlo simulations of low background detectors

    International Nuclear Information System (INIS)

    Miley, H.S.; Brodzinski, R.L.; Hensley, W.K.; Reeves, J.H.

    1995-01-01

    An implementation of the Electron Gamma Shower 4 code (EGS4) has been developed to allow convenient simulation of typical gamma ray measurement systems. Coincidence gamma rays, beta spectra, and angular correlations have been added to adequately simulate a complete nuclear decay and provide corrections to experimentally determined detector efficiencies. This code has been used to strip certain low-background spectra for the purpose of extremely low-level assay. Monte Carlo calculations of this sort can be extremely successful since low background detectors are usually free of significant contributions from poorly localized radiation sources, such as cosmic muons, secondary cosmic neutrons, and radioactive construction or shielding materials. Previously, validation of this code has been obtained from a series of comparisons between measurements and blind calculations. An example of the application of this code to an exceedingly low background spectrum stripping will be presented. (author) 5 refs.; 3 figs.; 1 tab

  14. Homogenized group cross sections by Monte Carlo

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.

    2006-01-01

    Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for the application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP, and a set of small executables to perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle to cycle core calculations for HFR. (authors)
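
The MCNP additions are not described in the abstract, but the averaging step such a method implements is standard flux-volume weighting of region-wise tallies. A minimal sketch (the function and its inputs are assumed forms, not the authors' code):

```python
def homogenize(sigmas, fluxes, volumes):
    """Flux-volume-weighted homogenized cross section for one energy
    group: sum_i(phi_i * V_i * sigma_i) / sum_i(phi_i * V_i),
    with one term per sub-region of the homogenization zone."""
    num = sum(s * f * v for s, f, v in zip(sigmas, fluxes, volumes))
    den = sum(f * v for f, v in zip(fluxes, volumes))
    return num / den
```

Per energy group, the Monte Carlo run supplies the region fluxes; applying the same weighting groupwise yields the full HGCS set used in the downstream core calculation.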

  15. Nuclear reactions in Monte Carlo codes

    CERN Document Server

    Ferrari, Alfredo

    2002-01-01

    The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references. (43 refs)

  16. Angular biasing in implicit Monte-Carlo

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1994-01-01

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise
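
The weight bookkeeping behind such biasing is simple to sketch. In this toy model (all quantities assumed for illustration, not taken from the paper), the capsule subtends a fraction `frac` of the emission solid angle; with probability `p_bias` a photon is forced into that cone and its weight is cut to frac/p_bias, which keeps the tally unbiased:

```python
import random

def capsule_tally(n, frac, p_bias, rng):
    """Mean photon weight landing on a capsule subtending solid-angle
    fraction `frac`, with angular-biasing strength `p_bias`.  Forced
    photons carry weight frac/p_bias; complement photons (weight
    (1-frac)/(1-p_bias)) miss the capsule and score zero, so the
    expected tally is p_bias * (frac / p_bias) = frac: unbiased."""
    tally = 0.0
    for _ in range(n):
        if rng.random() < p_bias:
            tally += frac / p_bias
    return tally / n
```

At p_bias = frac this reduces to analog sampling; at p_bias = 1 every history scores exactly frac and the variance of this particular tally vanishes, at the cost of statistics elsewhere.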

  17. An accurate nonlinear Monte Carlo collision operator

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.

    1995-03-01

    A three dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions with the emphasis both on accuracy and implementation efficiency. The operator has a simple form, fulfills the particle number, momentum, and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it can effectively assure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that it is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
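
The conservation properties can be illustrated with the simplest equal-mass version of such a binary operator: keep the centre-of-mass velocity and re-orient the relative velocity, which conserves momentum and kinetic energy identically. (This sketch resamples the relative-velocity direction outright; it does not reproduce the paper's small-angle scattering statistics.)

```python
import math

def binary_collision(v1, v2, theta, phi):
    """Elastic binary collision of two equal-mass particles.  The
    centre-of-mass velocity is unchanged and the relative velocity is
    re-oriented to polar angles (theta, phi) at fixed magnitude, so
    total momentum and kinetic energy are conserved exactly."""
    cm = [(a + b) / 2.0 for a, b in zip(v1, v2)]
    rel = [a - b for a, b in zip(v1, v2)]
    g = math.sqrt(sum(c * c for c in rel))          # relative speed
    new_rel = [g * math.sin(theta) * math.cos(phi),
               g * math.sin(theta) * math.sin(phi),
               g * math.cos(theta)]
    v1p = [c + r / 2.0 for c, r in zip(cm, new_rel)]
    v2p = [c - r / 2.0 for c, r in zip(cm, new_rel)]
    return v1p, v2p
```

A Coulomb operator additionally draws theta from the appropriate narrow distribution about the original relative-velocity direction; the conservation argument is unchanged.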

  18. Computation cluster for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.

    2010-01-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution with Intel Core Duo and Intel Core Quad based computers were made at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on a hardware and software basis. Hardware cluster parameters, such as size of the memory, network speed, CPU speed, number of processors per computation, and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations, and CPU multi-core libraries were tested. Finally, the computer cluster was used in finding the weighting functions of neutron ex-core detectors of VVER-440. (authors)

  19. Monte Carlo stratified source-sampling

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    1997-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress
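
The model problem is not reproduced in the abstract, but the mechanism is easy to demonstrate on an assumed two-region source: choosing the source region analog-style each history adds between-region noise that stratification removes entirely.

```python
import random, statistics

def simple_sampling(n, probs, means, rng):
    """Analog source sampling: pick the source region at random each
    history, then score (a Gaussian stand-in for the real tally)."""
    score = 0.0
    for _ in range(n):
        region = 0 if rng.random() < probs[0] else 1
        score += rng.gauss(means[region], 0.1)
    return score / n

def stratified_sampling(n, probs, means, rng):
    """Stratified source sampling: allocate histories to each region in
    exact proportion to its source strength, eliminating the variance
    contributed by the region-selection step itself."""
    estimate = 0.0
    for region, p in enumerate(probs):
        n_r = max(1, round(n * p))
        batch = sum(rng.gauss(means[region], 0.1) for _ in range(n_r))
        estimate += p * batch / n_r
    return estimate
```

When the regions score very differently (as in a weakly coupled "world" of assemblies), the variance reduction is dramatic, which is the effect exploited above.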

  20. Monte Carlo simulation of a CZT detector

    International Nuclear Information System (INIS)

    Chun, Sung Dae; Park, Se Hwan; Ha, Jang Ho; Kim, Han Soo; Cho, Yoon Ho; Kang, Sang Mook; Kim, Yong Kyun; Hong, Duk Geun

    2008-01-01

    The CZT detector is one of the most promising radiation detectors for hard X-ray and γ-ray measurement. The energy spectrum of a CZT detector has to be simulated to optimize the detector design. A CZT detector was fabricated with dimensions of 5×5×2 mm³. A Peltier cooler with a size of 40×40 mm² was installed below the fabricated CZT detector to reduce the operation temperature of the detector. Energy spectra were measured with the 59.5 keV γ-ray from 241Am. A Monte Carlo code was developed to simulate the CZT energy spectrum, which was measured with a planar-type CZT detector, and the result was compared with the measured one. The simulation was extended to a CZT detector with strip electrodes. (author)

  1. Vectorization of Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.

    1989-01-01

    This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP

  2. Computation cluster for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)

    2010-07-01

    Two computation clusters based on Rocks Clusters 5.1 Linux distribution with Intel Core Duo and Intel Core Quad based computers were made at the Department of the Nuclear Physics and Technology. Clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in Nuclear reactor core simulations. Optimization for computation speed was made on hardware and software basis. Hardware cluster parameters, such as size of the memory, network speed, CPU speed, number of processors per computation, number of processors in one computer were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally computer cluster was used in finding the weighting functions of neutron ex-core detectors of VVER-440. (authors)

  3. Monte Carlo calculations of channeling radiation

    International Nuclear Information System (INIS)

    Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.

    1981-01-01

    Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of γ^1.5, γ^1.7, and γ^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on γ do not yet exist

  4. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-01-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown

  5. Monte Carlo simulation of the ARGO

    International Nuclear Information System (INIS)

    Depaola, G.O.

    1997-01-01

    We use the GEANT Monte Carlo code to design an outline of the geometry and simulate the performance of the Argentine gamma-ray observer (ARGO), a telescope based on silicon strip detector technology. The γ-ray direction is determined by geometrical means and the angular resolution is calculated for small variations of the basic design. The results show that the angular resolutions vary from a few degrees at low energies (≈50 MeV) to 0.2°, approximately, at high energies (>500 MeV). We also made simulations using as incoming γ-rays the energy spectra of the PKS0208-512 and PKS0528+134 quasars. Moreover, a method based on multiple scattering theory is also used to determine the incoming energy. We show that this method is applicable to the energy spectrum. (orig.)

  6. Variational Monte Carlo study of pentaquark states

    Energy Technology Data Exchange (ETDEWEB)

    Mark W. Paris

    2005-07-01

    Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.

  7. Geometric Monte Carlo and black Janus geometries

    Energy Technology Data Exchange (ETDEWEB)

    Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)

    2017-04-10

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  8. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  9. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies.
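
As a concrete anchor for the machinery discussed above, here is the elementary Metropolis move on a single degree of freedom with U(x) = x²/2, a deliberately minimal stand-in for a biomacromolecular moveset:

```python
import math, random, statistics

def metropolis_chain(n_steps, step, beta, rng):
    """Metropolis sampling of U(x) = x^2/2: propose a symmetric uniform
    displacement, accept with probability min(1, exp(-beta * dU))."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        d_u = 0.5 * (trial * trial - x * x)
        if d_u <= 0.0 or rng.random() < math.exp(-beta * d_u):
            x = trial                   # accept; otherwise keep x
        samples.append(x)
    return samples
```

Real movesets replace the uniform displacement with torsional, pivot, or concerted moves over many degrees of freedom; the accept/reject step is unchanged, which is why moveset design dominates efficiency.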

  10. Markov Chain Monte Carlo from Lagrangian Dynamics.

    Science.gov (United States)

    Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark

    2015-04-01

    Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
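
The baseline the paper builds on, HMC with an explicit leapfrog integrator, fits in a few lines for a standard-normal target. This sketch is generic Euclidean HMC, not the Riemannian or Lagrangian variants discussed above:

```python
import math, random, statistics

def leapfrog(x, p, eps, n_steps, grad):
    """Leapfrog integration of the Hamiltonian H = U(x) + p^2/2."""
    p -= 0.5 * eps * grad(x)
    for _ in range(n_steps - 1):
        x += eps * p
        p -= eps * grad(x)
    x += eps * p
    p -= 0.5 * eps * grad(x)
    return x, p

def hmc(n_samples, eps, n_leap, rng):
    """Plain HMC for the standard normal target, U(x) = x^2/2."""
    grad = lambda q: q
    u = lambda q: 0.5 * q * q
    x, out = 0.0, []
    for _ in range(n_samples):
        p0 = rng.gauss(0.0, 1.0)                 # resample momentum
        x_new, p_new = leapfrog(x, p0, eps, n_leap, grad)
        h0 = u(x) + 0.5 * p0 * p0
        h1 = u(x_new) + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h0 - h1)):
            x = x_new                            # Metropolis correction
        out.append(x)
    return out
```

RHMC replaces the constant mass term p²/2 with a position-dependent metric, which is what makes the update implicit and motivates the velocity-based reformulation.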

  11. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  12. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    International Nuclear Information System (INIS)

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mTc-hynic-Tyr3-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results

  13. PEPSI: a Monte Carlo generator for polarized leptoproduction

    International Nuclear Information System (INIS)

    Mankiewicz, L.

    1992-01-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)

  14. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.

  15. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method, and it is also the most transparent one. Its main drawback is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
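The efficiency problem described above can be illustrated with a toy example: estimating a small exponential tail probability by crude Monte Carlo versus importance sampling. This is a generic sketch of the trade-off the abstract refers to, not code from the paper; the shifted-exponential proposal happens to make the likelihood ratio constant, which is an idealized (zero-variance) special case.

```python
import math
import random

def crude_mc(threshold, n, seed=0):
    """Crude Monte Carlo estimate of p = P(X > threshold) for X ~ Exp(1).

    The relative error scales like 1 / sqrt(n * p), so small p demands
    very large n: this is the accuracy/efficiency trade-off.
    """
    rng = random.Random(seed)
    hits = sum(rng.expovariate(1.0) > threshold for _ in range(n))
    return hits / n

def importance_mc(threshold, n, seed=0):
    """Importance-sampling estimate of the same tail probability.

    Proposal: q(x) = exp(-(x - threshold)) for x > threshold, i.e. the
    exponential shifted to start at the threshold, so every sample lands
    in the rare event. The likelihood ratio p(x)/q(x) = exp(-threshold)
    is constant here, which makes this particular estimator zero-variance.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = threshold + rng.expovariate(1.0)
        weight = math.exp(-threshold)   # p(x) / q(x), constant for this proposal
        total += weight * (x > threshold)  # indicator is always 1 here
    return total / n
```

For a target probability around 1e-4, crude Monte Carlo needs on the order of 1e8 samples for roughly 1% relative accuracy, while the importance-sampling estimator above is exact with any sample size for this contrived proposal.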

  16. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems is possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described.

  17. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to using multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations.

  18. A Monte Carlo approach to combating delayed completion of ...

    African Journals Online (AJOL)

    The objective of this paper is to demonstrate the relevance of Monte Carlo critical path analysis in resolving the problem of delays in the scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was developed by assigning probability distributions to task times.
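The step the abstract describes, replacing fixed task times with probability distributions and simulating the network, can be sketched as follows. The project network, the triangular duration parameters, and the deadline are illustrative assumptions, not data from the paper.

```python
import random

# Toy project network: task -> (predecessors, (min, mode, max) duration in days).
# Both the topology and the durations are made up for illustration.
TASKS = {
    "design":  ([],                   (5, 7, 12)),
    "procure": (["design"],           (3, 5, 10)),
    "build":   (["design"],           (8, 10, 16)),
    "test":    (["procure", "build"], (2, 3, 6)),
}

def simulate_completion(rng):
    """One Monte Carlo draw: sample every task duration, then propagate
    finish times through the network (longest incoming path wins)."""
    finish = {}
    for task in ("design", "procure", "build", "test"):  # topological order
        preds, (lo, mode, hi) = TASKS[task]
        start = max((finish[p] for p in preds), default=0.0)
        finish[task] = start + rng.triangular(lo, hi, mode)
    return finish["test"]

def completion_risk(deadline, n=20000, seed=1):
    """Estimated probability that the project overruns the deadline."""
    rng = random.Random(seed)
    return sum(simulate_completion(rng) > deadline for _ in range(n)) / n
```

Unlike a deterministic critical path, this yields a distribution of completion dates, so a planner can quote, for example, the probability of finishing within a contractual deadline rather than a single point estimate.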

  19. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a stochastic statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modeled using the Monte Carlo method.
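The photon-canopy random process mentioned in the abstract can be caricatured with a geometry-free random walk: at each interaction a photon is either absorbed or scattered, and a scattered photon may escape the canopy top and count toward reflectance. This is a minimal sketch under made-up probabilities (`scatter_albedo`, `escape_up`), not the paper's BRDF model, which depends on actual canopy geometry and directions.

```python
import random

def canopy_reflectance(n_photons=50000, scatter_albedo=0.5, escape_up=0.3, seed=2):
    """Toy photon random walk. At each interaction the photon survives
    (is scattered rather than absorbed) with probability scatter_albedo;
    a scattered photon escapes upward, and is counted as reflected, with
    probability escape_up, otherwise it interacts again."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n_photons):
        while True:
            if rng.random() >= scatter_albedo:   # photon absorbed by the canopy
                break
            if rng.random() < escape_up:          # scattered out of the canopy top
                reflected += 1
                break
    return reflected / n_photons
```

For this toy chain the reflectance has a closed form, R = a*e / (1 - a*(1 - e)) with a = scatter_albedo and e = escape_up, so the simulation can be checked against it; a real BRDF simulation would instead tally escaping photons per viewing direction.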

  20. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-02-01

    This paper presents some of the recent advancements in the extension of Monte Carlo perturbation theory methodologies and their applications. In particular, the problems discussed involve burnup calculations, perturbation calculations based on continuous-energy functions, and Monte Carlo perturbation theory in loosely coupled systems.