Accelerating Monte Carlo Renderers by Ray Histogram Fusion
Mauricio Delbracio
2015-03-01
This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions; based on this distance, the filter decides whether two pixels can share their rays. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo sample color values, it can be naturally combined with all rendering effects.
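The pixel comparison at the heart of RHF can be illustrated with a toy sketch. The snippet below is a minimal illustration, not the paper's implementation: it bins per-pixel ray "colors" (here scalar luminances) into histograms and compares them with a symmetric chi-square distance. The bin count, the sample distributions, and the absence of a calibrated threshold are all illustrative choices.

```python
import random

def histogram(samples, bins=20, lo=0.0, hi=1.0):
    """Bin scalar ray luminances into a fixed-range histogram."""
    h = [0] * bins
    for s in samples:
        k = int((s - lo) / (hi - lo) * bins)
        h[max(0, min(k, bins - 1))] += 1
    return h

def chi2_distance(h1, h2):
    """Symmetric chi-square distance between two ray histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

random.seed(0)
# Two pixels whose rays follow the same underlying color distribution ...
pix_a = [random.gauss(0.5, 0.1) for _ in range(200)]
pix_b = [random.gauss(0.5, 0.1) for _ in range(200)]
# ... and a third pixel lit by a different distribution.
pix_c = [random.gauss(0.8, 0.1) for _ in range(200)]

d_same = chi2_distance(histogram(pix_a), histogram(pix_b))
d_diff = chi2_distance(histogram(pix_a), histogram(pix_c))
# Small distance -> the pixels may share rays; large distance -> they may not.
```

In the actual filter the comparison is made per color channel and per sample-count scale; this sketch only shows why a histogram distance separates statistically compatible pixels from incompatible ones.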
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Development of ray tracing visualization program by Monte Carlo method
Higuchi, Kenji; Otani, Takayuki [Japan Atomic Energy Research Inst., Tokyo (Japan); Hasegawa, Yukihiro
1997-09-01
The ray tracing algorithm is a powerful method for synthesizing three dimensional computer graphics. In conventional ray tracing algorithms, a view point is used as the starting point of ray tracing, from which rays are tracked back to the light sources through the center points of pixels on the view screen to calculate the pixel intensities. This approach, however, makes it difficult to define the configuration of the light source and to strictly simulate the reflections of the rays. To resolve these problems, we have developed a new ray tracing method which traces rays from a light source, not from a view point, using the Monte Carlo method, which is widely applied in nuclear fields. Moreover, we applied variance reduction techniques in the program and used the specialized machine (Monte-4) for particle transport Monte Carlo, so that the computational time was successfully reduced. (author)
Monte Carlo simulation of gamma ray tomography for image reconstruction
Guedes, Karlos A.N.; Moura, Alex; Dantas, Carlos; Melo, Silvio; Lima, Emerson, E-mail: karlosguedes@hotmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker [University of Bergen (Norway)
2015-07-01
Monte Carlo simulations of an object of known density and shape were validated against static gamma ray tomography experiments. An aluminum half-moon piece placed inside a steel pipe was the MC simulation test object, which was also measured by gamma ray transmission. The wall effect of the steel pipe due to the irradiation geometry of a single source-detector pair tomograph was evaluated by comparison with theoretical data. The MCNPX code requires a separately defined geometry for each photon trajectory, which practically prevents its direct use for tomographic reconstruction simulation. The solution was to write a program in Delphi that automates the creation of input files. Simulations of tomography data with the automated MCNPX code were carried out and validated against experimental data; the data produced in this sequence were stored in a databank. The experimental setup used a Cesium-137 isotopic radioactive source (7.4 × 10⁹ Bq) and a NaI(Tl) scintillation detector with a (51 × 51) × 10⁻³ m crystal coupled to a multichannel analyzer, together with a stainless steel tube of 0.154 m internal diameter and 0.014 m wall thickness. The results show that the MCNPX simulation code adapted to automated input files is useful for generating the data matrix M(θ,t) of a computerized gamma ray tomograph for any object of known density and regular shape. Experimental validation used the RMSE of the gamma ray paths and of the attenuation coefficient data. (author)
A Monte Carlo approach for simulating the propagation of partially coherent x-ray beams
Prodi, A.; Bergbäck Knudsen, Erik; Willendrup, Peter Kjær
2011-01-01
by sampling Huygens-Fresnel waves with Monte Carlo methods and is used to propagate each source realization to the detector plane. The sampling is implemented with a modified Monte Carlo ray tracing scheme where the optical path of each generated ray is stored. Such information is then used in the summation...
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, a process referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate the cosmogenic activation of those components before and after deployment. Cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach for modeling cosmic ray hadronic interactions. To validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results deviate by 3%.
Modelling hadronic interactions in cosmic ray Monte Carlo generators
Pierog Tanguy
2015-01-01
Currently the uncertainty in the prediction of shower observables for different primary particles and energies is dominated by differences between hadronic interaction models. The LHC data on minimum bias measurements can be used to test Monte Carlo generators and these new constraints will help to reduce the uncertainties in air shower predictions. In this article, after a short introduction on air showers and Monte Carlo generators, we will show the results of the comparison between the updated version of high energy hadronic interaction models EPOS LHC and QGSJETII-04 with LHC data. Results for air shower simulations and their consequences on comparisons with air shower data will be discussed.
Development of Monte Carlo decay gamma-ray transport calculation system
Sato, Satoshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Kawasaki, Nobuo [Fujitsu Ltd., Tokyo (Japan); Kume, Etsuo [Japan Atomic Energy Research Inst., Center for Promotion of Computational Science and Engineering, Tokai, Ibaraki (Japan)
2001-06-01
In the DT fusion reactor, it is a critical concern to accurately evaluate the decay gamma-ray biological dose rates after reactor shutdown. To this end, a three dimensional Monte Carlo decay gamma-ray transport calculation system has been developed by connecting a three dimensional Monte Carlo particle transport code with an induced activity calculation code. The developed calculation system consists of the following four functions. (1) The operational neutron flux distribution is calculated by the three dimensional Monte Carlo particle transport code. (2) The induced activities are calculated by the induced activity calculation code. (3) The decay gamma-ray source distribution is obtained from the induced activities. (4) The decay gamma-rays are generated from the decay gamma-ray source distribution, and the decay gamma-ray transport calculation is conducted by the three dimensional Monte Carlo particle transport code. In order to reduce the calculation time drastically, a biasing system for the decay gamma-ray source distribution has been developed and is also included in the present system. This paper reports the outline and details of the system and an execution example, together with an evaluation of the effect of the biasing system. (author)
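The four-step workflow above can be caricatured in a few lines. This is a toy one-zone sketch: the flux, cross section, half-life, and gamma yield are invented for illustration and do not correspond to the JAERI system or any particular nuclide.

```python
import math

# Step 1 (operational neutron flux) is taken as a given scalar here; the
# real system tallies a spatial flux distribution with an MC transport code.
def induced_activity(flux, xs_cm2, n_atoms, lam, t_irr):
    """Step 2: induced activity [Bq] after irradiating for t_irr seconds."""
    return flux * xs_cm2 * n_atoms * (1.0 - math.exp(-lam * t_irr))

def decay_gamma_source(activity0, lam, t_cool, gammas_per_decay):
    """Step 3: decay gamma emission rate [1/s] after cooling for t_cool."""
    return activity0 * math.exp(-lam * t_cool) * gammas_per_decay

# Step 4 would feed this source distribution back into the MC transport code.
lam = math.log(2.0) / 44.5e3          # invented nuclide, ~12.4 h half-life
a0 = induced_activity(flux=1e10, xs_cm2=1e-24, n_atoms=1e22,
                      lam=lam, t_irr=3.6e5)
src = decay_gamma_source(a0, lam, t_cool=3.6e3, gammas_per_decay=1.0)
```

The source biasing mentioned in the abstract would then weight the sampling of this source toward the regions that dominate the dose, rather than sampling it uniformly.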
Deficiency in Monte Carlo simulations of coupled neutron-gamma-ray fields
Maleka, Peane P.; Maucec, Marko; de Meijer, Robert J.
2011-01-01
The deficiency in Monte Carlo simulations of coupled neutron-gamma-ray field was investigated by benchmarking two simulation codes with experimental data. Simulations showed better correspondence with the experimental data for gamma-ray transport only. In simulations, the neutron interactions with m
Monte Carlo simulations of radio emission from cosmic ray air showers
Huege, T.; Falcke, H.D.E.
2006-01-01
As a basis for the interpretation of data gathered by LOPES and other experiments, we have carried out Monte Carlo simulations of geosynchrotron radio emission from cosmic ray air showers. The simulations, having been verified carefully with analytical calculations, reveal a wealth of information on
Nimal, J.C.; Vergnaud, T. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))
1990-01-01
This paper describes the most important features of the Monte Carlo code TRIPOLI-2. This code solves the Boltzmann equation in three-dimensional geometries for coupled neutron and gamma-ray problems. Particular emphasis is devoted to the biasing techniques, which are very important for deep penetration problems. Future developments of TRIPOLI are described in the conclusion. (author).
A numerical study of rays in random media. [Monte Carlo method simulation
Youakim, M. Y.; Liu, C. H.; Yeh, K. C.
1973-01-01
Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.
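A minimal analogue of such a numerical experiment is easy to sketch. The code below traces 2-D rays whose direction receives a small Gaussian perturbation at each step, a crude stand-in for a weakly random medium; the step length and angular variance are invented parameters, and the paper's surfaces with prescribed correlation functions are not reproduced.

```python
import math
import random

random.seed(1)

def trace_ray(steps=200, dl=1.0, sigma_theta=0.01):
    """2-D ray in a weakly random medium: a small random bend each step."""
    x = y = theta = 0.0
    for _ in range(steps):
        theta += random.gauss(0.0, sigma_theta)
        x += dl * math.cos(theta)
        y += dl * math.sin(theta)
    return y, theta

rays = [trace_ray() for _ in range(2000)]
mean_y = sum(y for y, _ in rays) / len(rays)
var_theta = sum(t * t for _, t in rays) / len(rays)
# By symmetry mean_y ~ 0, while the angular variance grows like
# steps * sigma_theta**2, i.e. diffusively, as ray statistics predict.
```

Histogramming the final positions and directions of many such rays is exactly the kind of statistic the study compares against theory.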
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term for a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
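Exchange Monte Carlo, the first algorithm named above, can be sketched compactly on a double-well energy: replicas at several inverse temperatures evolve by Metropolis updates and periodically attempt a replica exchange. The potential, temperature ladder, and sweep counts below are arbitrary illustrative choices.

```python
import math
import random

random.seed(2)

def U(x):
    """Double-well energy with minima at x = -1 and x = +1."""
    return (x * x - 1.0) ** 2

betas = [0.2, 1.0, 5.0]          # inverse temperatures, hot -> cold
xs = [1.0 for _ in betas]        # every replica starts in the right well

def metropolis_step(i, step=0.5):
    x_new = xs[i] + random.uniform(-step, step)
    if random.random() < math.exp(min(0.0, -betas[i] * (U(x_new) - U(xs[i])))):
        xs[i] = x_new

cold_samples = []
for _ in range(20000):
    for i in range(len(betas)):
        metropolis_step(i)
    # Replica-exchange move between a random pair of neighbouring temperatures:
    # accept with probability min(1, exp((beta_i - beta_j) * (U_i - U_j))).
    i = random.randrange(len(betas) - 1)
    delta = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
    if random.random() < math.exp(min(0.0, delta)):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold_samples.append(xs[-1])

# Thanks to the exchanges, the coldest chain visits both wells instead of
# staying trapped in the well where it started.
frac_left = sum(1 for x in cold_samples if x < 0.0) / len(cold_samples)
```

A single chain at the coldest temperature would cross the barrier only rarely; the hot replica crosses freely and the exchanges shuttle those crossings down the ladder.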
Investigating the origin of high-energy cosmic-ray electrons with Monte Carlo simulation
Attallah, R.
2017-06-01
Due to severe radiative energy losses during propagation, high-energy cosmic-ray electrons can reach Earth only from nearby sources. Although these sources clearly manifest themselves in special features of the energy spectrum observed by recent space-borne experiments, especially the increase in the positron fraction, their exact nature is still a matter of debate. The standard method for interpreting cosmic-ray electron data consists in solving appropriate transport equations. It can be supplemented with a Monte Carlo approach that takes advantage of the intrinsic random nature of cosmic-ray diffusive propagation. This analysis gives valuable information on the electron-by-electron fluctuations and hence allows one to address the issue from a different angle. Here we show how to implement a fully three-dimensional time-dependent Monte Carlo simulation of the propagation of high-energy cosmic-ray electrons from nearby sources and discuss the "single-source" astrophysical scenario.
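The Monte Carlo view of diffusive propagation can be sketched as an ensemble of random walks with deterministic energy loss. The diffusion coefficient, loss rate, source energy, and units below are invented for illustration, not the paper's values; the point is only that for pure diffusion over a time t the mean square displacement approaches 6Dt while every electron loses energy as dE/dt = -B E².

```python
import math
import random

random.seed(3)

D = 1.0      # diffusion coefficient (invented units)
B = 0.005    # energy-loss coefficient for dE/dt = -B * E**2

def propagate(E0=100.0, dt=0.01, n_steps=500):
    """Random-walk position update plus deterministic radiative loss."""
    x = y = z = 0.0
    E = E0
    sigma = math.sqrt(2.0 * D * dt)           # per-axis step size
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)
        y += random.gauss(0.0, sigma)
        z += random.gauss(0.0, sigma)
        E = E / (1.0 + B * E * dt)            # exact update: 1/E grows by B*dt
    return math.sqrt(x * x + y * y + z * z), E

results = [propagate() for _ in range(500)]
mean_r2 = sum(r * r for r, _ in results) / len(results)
final_E = results[0][1]
# For pure diffusion over t = n_steps * dt = 5, <r^2> = 6 * D * t = 30,
# while the energy loss is deterministic: 1/E_final = 1/E0 + B * t.
```

Tracking many such histories electron by electron is what exposes the flux fluctuations that a smooth transport-equation solution averages away.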
Monte Carlo simulation applied in total reflection x-ray fluorescence: Preliminary results
Meira, Luiza L. C.; Inocente, Guilherme F.; Vieira, Leticia D.; Mesa, Joel [Departamento de Fisica e Biofisica - Instituto de Biociencias de Botucatu, Universidade Estadual Paulista Julio de Mesquita Filho (Brazil)
2013-05-06
X-ray Fluorescence (XRF) analysis is a technique for the qualitative and quantitative determination of the chemical constituents of a sample. The method is based on detecting the characteristic radiation intensities emitted by the elements of the sample when properly excited. A variant of this technique is Total Reflection X-ray Fluorescence (TXRF), which utilizes electromagnetic radiation as the excitation source. In total reflection of X-rays, the angle of refraction of the incident beam tends to zero and the refracted beam is tangent to the sample support interface. Thus, there is a critical angle of incidence below which no refracted beam exists and all incident radiation undergoes total reflection. In this study, we evaluated the influence of varying the energy of the incident x-ray beam, using the MCNPX (Monte Carlo N-Particle eXtended) code, based on the Monte Carlo method.
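For orientation, the critical grazing angle for total external reflection can be estimated as θc ≈ sqrt(2δ), where δ is the refractive-index decrement. The sketch below evaluates this for a silicon reflector near the Mo Kα energy using the free-electron approximation for δ; this is a textbook estimate added here for context, not a calculation from the paper.

```python
import math

R_E = 2.818e-15      # classical electron radius [m]
N_A = 6.022e23       # Avogadro's number [1/mol]

def critical_angle(rho_g_cm3, Z, A, wavelength_m):
    """theta_c = sqrt(2*delta) in the free-electron approximation."""
    n_e = rho_g_cm3 * 1e6 * N_A * Z / A                  # electrons per m^3
    delta = R_E * wavelength_m ** 2 * n_e / (2.0 * math.pi)
    return math.sqrt(2.0 * delta)                        # radians

# Silicon reflector, Mo K-alpha (~17.5 keV, wavelength ~0.071 nm).
theta_c = critical_angle(rho_g_cm3=2.33, Z=14, A=28.09, wavelength_m=7.1e-11)
theta_deg = math.degrees(theta_c)   # about a tenth of a degree
```

Angles this shallow are why TXRF excitation geometries are so sensitive to beam energy: θc scales linearly with wavelength, so harder X-rays require even more grazing incidence.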
A Monte Carlo study of x-ray fluorescence in x-ray detectors.
Boone, J M; Seibert, J A; Sabol, J M; Tecotzky, M
1999-06-01
Advances in digital x-ray detector systems have led to a renewed interest in the performance of x-ray phosphors and other detector materials. Indirect flat panel x-ray detector and charge-coupled device (CCD) systems require a more technologically challenging geometry, whereby the x-ray beam is incident on the front side of the scintillator, and the light produced must diffuse to the back surface of the screen to reach the photoreceptor. Direct detector systems based on selenium have also enjoyed a growing interest, both commercially and academically. Monte Carlo simulation techniques were used to study the x-ray scattering (Rayleigh and Compton) and the more prevalent x-ray fluorescence properties of seven different x-ray detector materials: Gd2O2S, CsI, Se, BaFBr, YTaO4, CaWO4, and ThO2. The redistribution of x-ray energy back towards the x-ray source, in a forward direction through the detector, and laterally within the detector was computed under monoenergetic conditions (1 keV to 130 keV in 1 keV intervals) with five detector thicknesses, 30, 60, 90, 120, and 150 mg/cm2 (Se was studied from 30 to 1000 mg/cm2). The radial distribution (related to the point spread function) of reabsorbed x-ray energy was also determined. Representative results are as follows: At 55 keV, more (31.3%) of the incident x-ray energy escaped from a 90 mg/cm2 Gd2O2S detector than was absorbed (27.9%). Approximately 1% of the total absorbed energy was reabsorbed more than 0.5 mm from the primary interaction, for 90 mg/cm2 CsI exposed at 100 kVp. The ratio of reabsorbed secondary (fluorescence + scatter) radiation to the primary radiation absorbed in the detectors (90 mg/cm2) (S/P) was determined as 10%, 16%, 2%, 12%, 3%, 3%, and 0.3% for a 100 kVp tungsten anode x-ray spectrum, for the Gd2O2S, CsI, Se, BaFBr, YTaO4, CaWO4, and ThO2 detectors, respectively. The results indicate significant x-ray fluorescent escape and reabsorption in common x-ray detectors. These findings
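The escape-and-reabsorption bookkeeping behind such results lends itself to a toy Monte Carlo. The snippet samples a primary interaction depth in a slab, emits an isotropic fluorescence photon with some yield, and checks whether it leaves through either surface. All coefficients are invented placeholders, not the tabulated attenuation data the paper uses.

```python
import math
import random

random.seed(4)

# Invented placeholder coefficients (1/cm) and thickness (cm).
MU_PRIMARY = 30.0     # attenuation of the incident beam
MU_FLUOR = 10.0       # attenuation of the K-fluorescence photon
THICKNESS = 0.03
FLUOR_YIELD = 0.8     # probability an interaction yields a K photon

def one_history():
    depth = random.expovariate(MU_PRIMARY)    # exponential interaction depth
    if depth > THICKNESS:
        return "transmitted"                  # primary passed through the slab
    if random.random() > FLUOR_YIELD:
        return "absorbed"                     # no fluorescence photon emitted
    cos_t = random.uniform(-1.0, 1.0)         # isotropic emission direction
    if abs(cos_t) < 1e-12:
        return "reabsorbed"                   # grazing photon never gets out
    if cos_t > 0.0:
        path = (THICKNESS - depth) / cos_t    # toward the back surface
    else:
        path = depth / -cos_t                 # back toward the source
    if random.random() < math.exp(-MU_FLUOR * path):
        return "escaped"                      # left through either surface
    return "reabsorbed"

n = 20000
counts = {}
for _ in range(n):
    tag = one_history()
    counts[tag] = counts.get(tag, 0) + 1
escape_frac = counts.get("escaped", 0) / n
```

Splitting "escaped" by direction and scoring the lateral reabsorption radius would reproduce, in miniature, the S/P ratios and radial distributions the study reports.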
Coe, J P
2015-01-01
We adapt the method of Monte Carlo configuration interaction to calculate core-hole states and use this for the computation of X-ray emission and absorption values. We consider CO, CH4, NH3, H2O, HF, HCN, CH3OH, CH3F, HCl and NO using a 6-311G** basis. We also look at carbon monoxide with a stretched geometry and discuss the dependence of its results on the cutoff used. The Monte Carlo configuration interaction results are compared with EOM-CCSD values for X-ray emission and with experiment for X-ray absorption. Oscillator strengths are also computed and we quantify the multireference nature of the wavefunctions to suggest when approaches based on a single reference would be expected to be successful.
Robust Image Denoising using a Virtual Flash Image for Monte Carlo Ray Tracing
Moon, Bochang; Jun, Jong Yun; Lee, JongHyeob
2013-01-01
We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate ... values. While denoising each pixel, we consider only homogeneous pixels—pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound of our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of denoising parameters. To highlight the benefits of our method, we apply our method to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state-of-the-art...
Maucec, M; de Meijer, RJ; van der Klis, MMIP; Hendriks, Peter; Jones, DG
2004-01-01
This paper represents a supplementary study to Part I: Monte Carlo assessment of detection depth limits, aimed at estimating the acquisition times, required to detect radioactive particles offshore by towed gamma-ray spectrometry. Using Monte Carlo simulations, sets of measuring conditions were cove
MCViNE -- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, Jiao Y Y; Granroth, Garrett E; Abernathy, Douglas L; Lumsden, Mark D; Winn, Barry; Aczel, Adam A; Aivazis, Michael; Fultz, Brent
2015-01-01
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is a versatile Monte Carlo (MC) neutron ray-tracing program that provides researchers with tools for performing computer modeling and simulations that mirror real neutron scattering experiments. By adopting modern software engineering practices such as using composite and visitor design patterns for representing and accessing neutron scatterers, and using recursive algorithms for multiple scattering, MCViNE is flexible enough to handle sophisticated neutron scattering problems including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can take advantage of simulation components in linear-chain-based MC ray tracing packages widely used in instrument design and optimization, as well as NumPy-based components that make prototypes useful and easy to develop. These developments have enabled us to carry out detailed simulations of neutron scatteri...
FLUKA Monte Carlo Simulations about Cosmic Rays Interactions with Kaidun Meteorite
Turgay Korkut
2013-01-01
A meteorite called Kaidun fell on December 3, 1980, in Yemen (15° 0′N, 48° 18′E). Investigations of this large meteorite are still ongoing. In this paper, interactions between cosmic rays and the Earth's atmosphere, and between cosmic rays and the Kaidun meteorite, were modeled using the cosmic ray generator of the FLUKA Monte Carlo code. Isotope distributions and produced particles are given for these interactions, and the simulation results for the two types of interactions are compared.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
Bardenet, R.
2012-01-01
ISBN: 978-2-7598-1032-1. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
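Of the algorithms reviewed, the Metropolis sampler (the simplest MCMC method) fits in a dozen lines. The sketch below targets a standard normal posterior through its unnormalized log-density; the proposal width, chain length, and burn-in are arbitrary illustrative choices.

```python
import math
import random

random.seed(5)

def log_post(x):
    """Unnormalised log-posterior; here a standard normal target."""
    return -0.5 * x * x

x = 0.0
chain = []
for _ in range(50000):
    prop = x + random.uniform(-1.0, 1.0)   # symmetric random-walk proposal
    # Metropolis acceptance: min(1, post(prop) / post(x)).
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(x))):
        x = prop
    chain.append(x)

burned = chain[5000:]                      # discard burn-in
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
# mean ~ 0 and var ~ 1 for the N(0, 1) target.
```

Because only the ratio of posterior values is needed, the normalizing constant of the posterior (the usual obstacle in Bayesian integrals) never has to be computed.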
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
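The Buffon's needle problem mentioned above is itself a classic Monte Carlo exercise: dropping a needle of length L on a floor ruled with parallel lines a distance d apart (L ≤ d) gives a crossing probability of 2L/(πd), hence an estimator of π. A minimal sketch:

```python
import math
import random

random.seed(6)

L, d, n = 1.0, 1.0, 200000   # needle length, line spacing, number of drops
hits = 0
for _ in range(n):
    center = random.uniform(0.0, d / 2.0)       # distance to the nearest line
    angle = random.uniform(0.0, math.pi / 2.0)  # needle angle to the lines
    if center <= (L / 2.0) * math.sin(angle):   # needle crosses a line
        hits += 1

pi_est = 2.0 * L * n / (d * hits)   # invert P(cross) = 2L / (pi * d)
```

As with all plain Monte Carlo estimators, the error shrinks like 1/sqrt(n), so each extra decimal digit of π costs a hundredfold more needle drops.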
Monte-Carlo Estimation of the Inflight Performance of the GEMS Satellite X-Ray Polarimeter
Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith; Krizmanic, John; Sturner, Steve; Griffiths, Scott; Kaaret, Philip; Marlowe, Hannah
2014-01-01
We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.
Monte Carlo simulations of the gamma-ray exposure rates of common rocks
Haber, Daniel A. [Univ. of Nevada, Las Vegas, NV (United States). Geoscience Dept.; National Security Technologies, Las Vegas, NV (United States); Malchow, Russell L. [National Security Technologies, Las Vegas, NV (United States); Burnley, Pamela C. [Univ. of Nevada, Las Vegas, NV (United States). Geoscience Dept.
2016-11-01
Monte Carlo simulations have been performed to model the gamma ray emission and attenuation properties of common rocks. In geologic materials, ⁴⁰K, ²³⁸U, and ²³²Th are responsible for most gamma ray production. If the concentrations of these radioelements and attenuation factors such as the degree of water saturation are known, an estimate of the gamma-ray exposure rate can be made. The results show that there are no significant differences in gamma-ray screening between major rock types. If the total number of radionuclide atoms is held constant, then the major controlling factor is the density of the rock. Finally, the thickness of regolith or soil overlying rock can be estimated by modeling the exposure rate, provided the radionuclide contents of both materials are known.
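As a back-of-the-envelope companion to such simulations, the exposure rate above a uniform half-space is often written as a linear combination of the K, U, and Th specific activities. The coefficients below are the commonly quoted UNSCEAR air-kerma factors (nGy/h per Bq/kg), used here purely for illustration, and the example activities are hypothetical granite-like values, not data from the paper.

```python
# UNSCEAR-style air-kerma rate coefficients, nGy/h per Bq/kg, for a
# uniform half-space (treat these as illustrative constants).
COEF = {"K-40": 0.0417, "U-238": 0.462, "Th-232": 0.604}

def exposure_rate(specific_activity_bq_kg):
    """Linear combination of the three natural radioelement activities."""
    return sum(COEF[k] * specific_activity_bq_kg[k] for k in COEF)

# Hypothetical granite-like specific activities [Bq/kg].
granite = {"K-40": 1200.0, "U-238": 60.0, "Th-232": 80.0}
rate_ngy_h = exposure_rate(granite)
```

The Monte Carlo modeling in the paper effectively recomputes such coefficients while also accounting for rock density, water saturation, and overburden.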
Cosmic rays Monte Carlo simulations for the Extreme Energy Events Project
Abbrescia, M; Aiola, S; Antolini, R; Avanzini, C; Baldini Ferroli, R; Bencivenni, G; Bossini, E; Bressan, E; Chiavassa, A; Cicalò, C; Cifarelli, L; Coccia, E; De Gruttola, D; De Pasquale, S; Di Giovanni, A; D'Incecco, M; Dreucci, M; Fabbri, F L; Frolov, V; Garbini, M; Gemme, G; Gnesi, I; Gustavino, C; Hatzifotiadou, D; La Rocca, P; Li, S; Librizzi, F; Maggiora, A; Massai, M; Miozzi, S; Panareo, M; Paoletti, R; Perasso, L; Pilo, F; Piragino, G; Regano, A; Riggi, F; Righini, G C; Sartorelli, G; Scapparone, E; Scribano, A; Selvi, M; Serci, S; Siddi, E; Spandre, G; Squarcia, S; Taiuti, M; Tosello, F; Votano, L; Williams, M C S; Yánez, G; Zichichi, A; Zuyeuski, R
2014-01-01
The Extreme Energy Events Project (EEE Project) is an innovative experiment to study very high energy cosmic rays by means of the detection of the associated air shower muon component. It consists of a network of tracking detectors installed inside Italian High Schools. Each tracking detector, called EEE telescope, is composed of three Multigap Resistive Plate Chambers (MRPCs). At present, 43 telescopes are installed and taking data, opening the way for the detection of far away coincidences over a total area of about 3 × 10⁵ km². In this paper we present the Monte Carlo simulations that have been performed to predict the expected coincidence rate between distant EEE telescopes.
Reliability of Monte Carlo event generators for gamma ray dark matter searches
Cembranos, J A R; Gammaldi, V; Lineros, R A; Maroto, A L
2013-01-01
We study the differences in the gamma ray spectra simulated by four Monte Carlo event generator packages developed in particle physics. Two different versions of PYTHIA and two of HERWIG are analyzed, namely PYTHIA 6.418 and HERWIG 6.5.10 in Fortran and PYTHIA 8.165 and HERWIG 2.6.1 in C++. For all the studied channels, the intrinsic differences between them are shown to be significant and may play an important role in the misinterpretation of dark matter signals.
Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice
2016-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries such as the slot gap, outer gap, two-pole caustic and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program, and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Probing the astrophysical origin of high-energy cosmic-ray electrons with Monte Carlo simulation
Attallah, Reda
2016-01-01
High-energy cosmic-ray electrons reveal some remarkable spectral features, the most noteworthy of which is the rise in the positron fraction above 10 GeV. Due to strong energy loss during propagation, these particles can reach Earth only from nearby sources. Yet, the exact nature of these sources, which most likely manifest themselves in the observed anomalies, remains elusive. The many explanations put forward to resolve this case range from standard astrophysics to exotic physics. In this paper, we discuss the possible astrophysical origin of high-energy cosmic-ray electrons through a fully three-dimensional time-dependent Monte Carlo simulation. This approach takes advantage of the intrinsic random nature of cosmic-ray diffusive propagation. It provides valuable information on the electron-by-electron fluctuations, making it particularly suitable for analyzing in depth the single-source astrophysical scenario.
Kobayashi, Shingo; Hayatsu, Kanako; Uchihori, Yukio; Hareyama, Makoto; Hasebe, Nobuyuki; Fujibayashi, Yukari
2012-07-01
We have continued to improve the estimation of the radiation dose on the Moon, based on remote-sensing observations and calculations of cosmic-ray particle transport in lunar materials, in order to provide basic data for future manned lunar exploration. On the lunar surface, the dose from primary galactic cosmic rays (pGCR) is the most significant; the contributions of neutrons and gamma rays are relatively small, approximately 10% and 1% of the pGCR dose, respectively. However, these percentages change with the use of thick shielding and with the geographical features of the lunar surface, such as the margin of a huge boulder, the bottom of a pit, or the inside of a possible lava tube. In such cases, the pGCR dose is moderated and the relative contributions of neutrons and gamma rays increase. Here, we show a recent estimation of the spatial variation of the lunar dose due to gamma rays and neutrons measured by the Kaguya gamma-ray spectrometer. The energy spectrum of gamma rays from the lunar surface was precisely measured by a germanium (Ge) gamma-ray spectrometer onboard the Japanese lunar orbiter Kaguya (SELENE). The flux of fast neutrons from the lunar surface was also measured by detecting the characteristic gamma rays produced by neutron inelastic scattering on the Ge of the spectrometer itself, that is, the ⁷²Ge(n,n'γ)⁷²Ge reaction. An estimation of the radiation dose on the Moon based on Monte Carlo simulation will also be presented.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
Lin, Jiao Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2016-02-01
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts to represent neutron scatterers and detector systems, and recursive algorithms to implement multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray-tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. With simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
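The recursion-based treatment of multiple scattering mentioned above can be sketched in a few lines: each scattering order is handled by a recursive call, in the same spirit as a scattering kernel invoking itself for the next order. The per-interaction probabilities and the trivial geometry below are invented for illustration and are not MCViNE's actual kernels.

```python
import random

# Per-interaction outcome probabilities (assumed, illustrative values)
P_SCATTER, P_ABSORB = 0.3, 0.1

def propagate(rng, depth=0, max_depth=10):
    """Random walk through interaction orders: returns how many scattering
    events a neutron undergoes before its history ends (absorbed or escaped)."""
    if depth >= max_depth:
        return depth
    u = rng.random()
    if u < P_ABSORB:
        return depth                      # absorbed: history ends here
    if u < P_ABSORB + P_SCATTER:
        return propagate(rng, depth + 1)  # scattered: recurse into next order
    return depth                          # escaped the sample

rng = random.Random(0)
orders = [propagate(rng) for _ in range(50_000)]
# fraction of histories with at least two scatterings (expected ~0.3**2 = 0.09)
multiple_fraction = sum(o >= 2 for o in orders) / len(orders)
print(multiple_fraction)
```

The recursive form makes it natural to cap the scattering order or to attach different physics at each level, which is the design benefit the abstract highlights.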
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Development of Monte Carlo code for coincidence prompt gamma-ray neutron activation analysis
Han, Xiaogang
Prompt Gamma-Ray Neutron Activation Analysis (PGNAA) offers a non-destructive, relatively rapid on-line method for determining the elemental composition of bulk and other samples. However, PGNAA has an inherently large background, primarily due to the presence of the neutron excitation source, but also including neutron activation of the detector and the prompt gamma rays from the structural materials of PGNAA devices. These large backgrounds limit the sensitivity and accuracy of PGNAA. Since most of the prompt gamma rays from the same element are emitted in coincidence, a possible approach for further improvement is to change the traditional PGNAA measurement technique and introduce the gamma-gamma coincidence technique. It is well known that coincidence techniques can eliminate most of the interference backgrounds and improve the signal-to-noise ratio. A new Monte Carlo code, CEARCPG, has been developed at CEAR to simulate gamma-gamma coincidence spectra in PGNAA experiments. In contrast to the existing Monte Carlo codes CEARPGA I and CEARPGA II, a new algorithm for sampling the prompt gamma rays produced by neutron capture and neutron inelastic scattering was developed in this work, and all prompt gamma rays are taken into account. Previously, the common method was to interpolate the prompt gamma rays from a pre-calculated gamma-ray table; that technique works fine for single spectra, but it limits the capability to simulate coincidence spectra. The new algorithm samples the prompt gamma rays from the nucleus excitation scheme, with the primary nuclear data taken from the ENSDF library. Three cases were simulated and benchmarked against experiments. The first case is the prototype for the ETI PGNAA application, designed to check the capability of CEARCPG for single-spectrum simulation. The second
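The cascade-sampling idea described above, walking down a nucleus' excitation scheme and emitting one gamma per transition, can be illustrated with a toy level scheme. The level energies and branching ratios below are invented for illustration; they are not ENSDF data.

```python
import random

# Toy level scheme (energies in MeV). Each excited level maps to a list of
# (branching_ratio, final_level) pairs; the ground state has no transitions.
levels = {
    7.0: [(0.6, 3.0), (0.4, 1.0)],   # "capture state"
    3.0: [(1.0, 1.0)],
    1.0: [(1.0, 0.0)],
    0.0: [],                          # ground state
}

def sample_cascade(rng, start=7.0):
    """Random-walk de-excitation: returns the coincident gamma-ray energies."""
    gammas, level = [], start
    while levels[level]:
        u, acc = rng.random(), 0.0
        for br, nxt in levels[level]:
            acc += br
            if u <= acc:
                gammas.append(level - nxt)   # gamma energy = level difference
                level = nxt
                break
    return gammas

rng = random.Random(1)
cascade = sample_cascade(rng)
print(cascade, sum(cascade))
```

By construction the energies of each sampled cascade sum to the capture-state energy, which is exactly the correlation that a pre-tabulated single-gamma table cannot reproduce.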
Khromova, A N; Arfelli, F; Menk, R H; Besch, H J; Plothow-Besch, H; 10.1109/NSSMIC.2004.1466758
2010-01-01
In this work we present a novel 3D Monte Carlo photon transport program for the simulation of multiple refractive scattering, based on the refractive properties of X-rays in highly scattering media such as lung tissue. Multiple scattering not only reduces the quality of the image but also carries information on the internal structure of the object. This information can be exploited using imaging modalities such as Diffraction Enhanced Imaging (DEI). To study the effect of multiple scattering, a Monte Carlo program was developed that simulates multiple refractive scattering of X-ray photons on monodisperse PMMA (poly-methyl-methacrylate) microspheres representing alveoli in lung tissue. The results of the Monte Carlo program were compared to measurements taken at the SYRMEP beamline at Elettra (Trieste, Italy) on special phantoms, showing good agreement between the two.
Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2015-01-01
Objective: To derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVL) of the simulated filtered spectra were compared with those obtained experimentally with a solid state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Results: Calculated HVL values showed good agreement with those obtained experimentally; the greatest relative difference between the Monte Carlo calculated and experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
WANG Yi; LI Qin; JIANG Xiao-Guo
2012-01-01
To study the spectrum reconstruction of the 20 MV X-ray beam generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuation of the X-rays in attenuators of different thicknesses and thus provide the transmission data. As is known, spectrum estimation from transmission data is an ill-conditioned problem. A method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and calculated transmissions but also the smoothness of the spectrum function. In this work, various filter materials are used as attenuators, and the conditions for an accurate and robust solution of the X-ray spectrum calculation are demonstrated. The influence of scattered photons within different intervals of emergence angle on the X-ray spectrum reconstruction is also analyzed.
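The transmission-based unfolding described above can be sketched with a multiplicative iterative update that drives the calculated transmissions toward the measured ones. The energy bins, attenuation coefficients and attenuator thicknesses below are invented for illustration, and the smoothness constraint of the actual algorithm is omitted.

```python
import numpy as np

# Toy forward model: T_i = sum_j A_ij * phi_j, with A_ij = exp(-mu_j * x_i).
mu = np.array([0.8, 0.5, 0.35, 0.25])        # 1/cm for four energy bins (assumed)
thickness = np.linspace(0.5, 20.0, 15)       # cm of attenuator (assumed)

A = np.exp(-np.outer(thickness, mu))

true_phi = np.array([0.1, 0.4, 0.35, 0.15])  # synthetic "unknown" spectrum
T_meas = A @ true_phi                        # noiseless transmission data

# Multiplicative iterative update from a flat initial guess: each pass
# perturbs the spectrum toward agreement with the measured transmissions.
phi = np.full_like(true_phi, 0.25)
for _ in range(2000):
    T_calc = A @ phi
    phi *= (A.T @ (T_meas / T_calc)) / A.sum(axis=0)

print(np.abs(A @ phi - T_meas).max())
```

The update keeps the spectrum non-negative by construction, which is one reason multiplicative schemes are popular for this ill-conditioned inversion.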
Expected performance of a hard X-ray polarimeter (POLAR) by Monte Carlo Simulation
Xiong, Shaolin; Wu, Bobing
2009-01-01
Polarization measurements of the prompt emission of Gamma-ray Bursts (GRBs) can provide diagnostic information for understanding the nature of the central engine. POLAR is a compact polarimeter dedicated to the polarization measurement of GRBs between 50 and 300 keV, scheduled to be launched aboard the Chinese Space Laboratory around 2012. A preliminary Monte Carlo simulation has been carried out to determine the expected performance of POLAR, while a prototype of POLAR is being constructed at the Institute of High Energy Physics, Chinese Academy of Sciences. The modulation factor, efficiency and effective area, background rates and Minimum Detectable Polarization (MDP) were calculated for different detector configurations and trigger strategies. With the optimized detector configuration and trigger strategy, and the constraint of a total weight of less than 30 kg, the primary science goal to determine whether most GRBs are strongly polarized can be achieved, and about 9 GRBs/yr can be detected with MDP < 10...
Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method
Lee, Hyun Jin; Kim, Jong Kyu; Lee, Sang Nam; Kang, Yong Heack [Korea Institute of Energy Research, Daejeon (Korea, Republic of)
2011-10-15
An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; the method considers both solar limb darkening and the surface slope error of the reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position, and flux distributions depending on acceptance angles, provide detailed information for the design of chemical reactors or secondary concentrators.
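The two angular broadenings the algorithm accounts for, solar limb darkening and reflector slope error, can be sketched as follows. The limb-darkening law, its coefficient and the slope-error value are illustrative assumptions, not KIER parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Sun's angular radius and a simple limb-darkening law I(r) = 1 - u*(1 - sqrt(1 - r^2));
# u is an assumed coefficient, not a fitted solar model.
theta_sun = 4.65e-3                                # rad
u = 0.5

# rejection-sample the normalized radius r on the limb-darkened solar disk
r = np.empty(0)
while r.size < N:
    cand = np.sqrt(rng.random(N))                  # uniform over the disk
    I = 1 - u * (1 - np.sqrt(1 - cand**2))         # I <= I(0) = 1
    keep = rng.random(N) < I
    r = np.concatenate([r, cand[keep]])[:N]

# one-axis angular deviation contributed by the finite, limb-darkened sun shape
phi = rng.uniform(0, 2 * np.pi, N)
sun_x = r * theta_sun * np.cos(phi)

# reflector slope error: a Gaussian surface-normal tilt deflects the ray by twice the tilt
sigma_slope = 2.0e-3                               # rad (assumed)
slope_x = rng.normal(0.0, 2 * sigma_slope, N)

beam_x = sun_x + slope_x                           # total angular deviation
print(sun_x.std(), beam_x.std())
```

Because the two error sources are independent, their variances add, which is how slope error widens the focal spot beyond the pure sun-shape limit.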
Monte Carlo simulation for background study of geophysical inspection with cosmic-ray muons
Nishiyama, Ryuichi; Taketa, Akimichi; Miyamoto, Seigo; Kasahara, Katsuaki
2016-08-01
Several attempts have been made to obtain radiographic images of the interior of volcanoes using cosmic-ray muons (muography). Muography is expected to resolve highly heterogeneous density profiles near the surface of volcanoes. However, several prior works have failed to make clear observations due to contamination by background noise, which leads to an overestimation of the muon flux and consequently a significant underestimation of the density of the target mountains. To investigate the origin of the background noise, we performed a Monte Carlo simulation. The main components of the background noise in muography are found to be low-energy protons, electrons and muons, in the case of detectors without particle identification and with energy thresholds below 1 GeV. This result was confirmed by comparison with actual observations with nuclear emulsions. It will be useful for detector design in future work; in addition, some previous muography studies should be reviewed from the viewpoint of background contamination.
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Iiyama, Taku; Hagi, Kousuke; Urushibara, Takafumi; Ozeki, Sumio
2009-01-01
The intermolecular structure of C₂H₅OH molecules confined in the slit-shaped graphitic micropores of an activated carbon fiber was investigated by in situ X-ray diffraction (XRD) measurement and reverse Monte Carlo (RMC) analysis. The pseudo-three-dimensional intermolecular structure of C₂H₅OH adsorbed in the micropores was determined by applying RMC analysis to the XRD data, assuming a simple slit-shaped space composed of double graphene sheets. The results were consistent with conventional Mont...
Monte Carlo simulations of a high-resolution X-ray CT system for industrial applications
Miceli, A.; Thierry, R.; Flisch, A.; Sennhauser, U.; Casali, F.; Simon, M.
2007-12-01
An X-ray computed tomography (CT) model based on the GEANT4 Monte Carlo code was developed for simulation of a cone-beam CT system for industrial applications. The full simulation of the X-ray tube, object, and area detector was considered. The model was validated through comparison with experimental measurements of different test objects. There is good agreement between the simulated and measured projections. To validate the model we reduced the beam aperture of the X-ray tube, using a source-collimator, to decrease the scattered radiation from the CT system structure and from the walls of the X-ray shielding room. The degradation of the image contrast using larger beam apertures is also shown. Thereafter, the CT model was used to calculate the spatial distribution and the magnitude of the scattered radiation from different objects. It has been assessed that the scatter-to-primary ratio (SPR) is below 5% for small aluminum objects (approx. 5 cm path length), and in the case of large aluminum objects (approx. 20 cm path length) it can reach up to a factor of 3 in the region corresponding to the maximum path length. Therefore, the scatter from the object significantly affects quantitative accuracy. The model was also used to evaluate the degradation of the image contrast due to the detector box.
Antonov, Lubomir Dimitrov; Andreetta, Christian; Hamelryck, Thomas Wim
2013-01-01
Inference of protein structure from experimental data is of crucial interest in science, medicine and biotechnology. Low-resolution methods, such as small angle X-ray scattering (SAXS), play a major role in investigating important biological questions regarding the structure of proteins in solution [...], and implements a caching procedure employed in the partial forward model evaluations within a Markov chain Monte Carlo framework.
Prompt gamma ray imaging for verification of proton boron fusion therapy: A Monte Carlo study.
Shin, Han-Back; Yoon, Do-Kun; Jung, Joo-Young; Kim, Moo-Sub; Suh, Tae Suk
2016-10-01
The purpose of this study was to verify the feasibility of acquiring a single photon emission computed tomography image using prompt gamma rays for proton boron fusion therapy (PBFT), and to confirm an enhanced therapeutic effect of PBFT by comparison with conventional proton therapy without boron. Monte Carlo simulation was performed to acquire a reconstructed image during PBFT. We acquired the percentage depth dose (PDD) of the proton beams in a water phantom, the energy spectrum of the prompt gamma rays, and tomographic images including the boron uptake region (BUR; target). The prompt gamma ray image was reconstructed using maximum likelihood expectation maximisation (MLEM) with 64 projections of raw data. To verify the reconstructed image, both an image profile and a contrast analysis according to the iteration number were conducted. In addition, the physical distance between two BURs in the region of interest of each BUR was measured. The PDD of the proton beam in the water phantom including the BURs proved more efficient on the tumour region than that of conventional proton therapy. A 719 keV prompt gamma ray peak was clearly observed in the prompt gamma ray energy spectrum, and the prompt gamma ray image was reconstructed successfully using 64 projections. Different image profiles including the two BURs were acquired from the reconstructed image according to the iteration number. We confirmed successful acquisition of a prompt gamma ray image during PBFT, and the quantitative image analysis results showed relatively good performance for further study.
X-ray simulation with the Monte Carlo code PENELOPE. Application to Quality Control.
Pozuelo, F; Gallardo, S; Querol, A; Verdú, G; Ródenas, J
2012-01-01
A realistic knowledge of the energy spectrum is very important in the Quality Control (QC) of X-ray tubes in order to reduce the dose to patients. However, due to the inherent difficulty of measuring the X-ray spectrum accurately, it is not normally obtained in routine QC; instead, some parameters are measured and/or calculated. The PENELOPE and MCNP5 codes, based on the Monte Carlo method, can be used as complementary tools to verify parameters measured in QC. These codes allow estimation of the Bremsstrahlung and characteristic lines from the anode, taking into account the specific characteristics of the equipment, and have been applied here to simulate an X-ray spectrum. Results are compared with the theoretical IPEM 78 spectrum. A sensitivity analysis has been carried out to estimate the influence on the simulated spectra of the important parameters used in the simulation codes. From this analysis, the FORCE factor turns out to be the most important parameter in PENELOPE simulations. The FORCE factor, a variance reduction method, improves the simulation but produces sharp increases in computing time; its value should therefore be optimized so that good agreement between simulated and theoretical spectra is reached with a reasonable computing time. Quality parameters such as the Half Value Layer (HVL) can be obtained with the PENELOPE model developed, but FORCE must then take such a high value that computing time increases considerably. On the other hand, depth dose assessment can be achieved with acceptable results for small values of FORCE.
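The FORCE factor belongs to the family of interaction-forcing variance reduction techniques. A minimal sketch of the underlying idea, making every history score in a thin slab while carrying the correct statistical weight, is shown below with invented numbers; this is not PENELOPE's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, L, N = 0.05, 1.0, 200_000        # 1/cm, cm (illustrative values)
p_int = 1 - np.exp(-mu * L)          # true interaction probability in the slab

# Analog sampling: most photons cross the thin slab without interacting,
# so only a few histories contribute to the tally (high variance).
s = rng.exponential(1 / mu, N)       # free path lengths
analog_scores = (s < L).astype(float)

# Forced interaction: every history is made to interact inside [0, L]; the
# score is weighted by p_int so the estimator stays unbiased. For this simple
# tally the weighting removes the variance entirely.
forced_scores = np.full(N, p_int)

print(analog_scores.mean(), forced_scores.mean(), p_int)
```

Both estimators have the same expectation, which is the unbiasedness property that lets FORCE trade computing time for variance; real forcing schemes also sample the interaction depth from the truncated exponential, omitted here.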
A gamma-ray Monte Carlo study of the clumpy debris of SN1987a
Burrows, Adam; Van Riper, Ken
1995-01-01
We have performed Monte Carlo calculations of gamma-ray transport in models of the clumpy debris cloud of the LMC supernova SN1987A, to study the influence of composition mixing and heterogeneity on its emergent gamma-ray and X-ray fluxes. In particular, we have focused on the problematic Ginga band (16–28 keV) flux at day 600, whose measured value was an order of magnitude higher than predicted by previous theory. We find that the hydrogen of the envelope could not have been intimately mixed with the heavy elements of the core, and that the hydrogen/helium volume filling factor interior to 4000 km/s must have been large (≥40%). A physical separation of the scattering region and the regions occupied by the high-Z elements is required. The 600-day models that best fit both the line data at 847 keV and 1238 keV and the measured Ginga band fluxes suggest that as much as 50% of the explosively produced ⁵⁶Ni stayed interior to 1000 km s⁻¹ and 2 M⊙. The ⁵⁶Ni may have been more centrally-concen...
Monte-Carlo Hauser-Feshbach simulations of prompt fission gamma-ray properties
Stetcu, Ionel; Talou, Patrick; Kawano, Toshihiko; Jandel, Marian
2014-09-01
Properties of prompt fission neutrons and γ rays, emitted before the weak decays of the fission fragments toward stability, are important for both nuclear technologies and a better understanding of the fission process. In the present work, we use the Hauser-Feshbach model to simulate the de-excitation of the fully accelerated fission fragments treated as compound nuclei. Our Monte-Carlo implementation of the Hauser-Feshbach statistical model, which takes into account the competition between the neutron and γ emissions, allows the description of both average quantities, like in the Los Alamos model, and correlations between the emitted particles. Our simulations will be compared against available experimental data and current evaluations. In particular, we will compare our average γ-ray spectrum with recent measurements at the research reactor KFKI in Budapest for the 235U(nth, f) and 252Cf(sf) reactions, as well as multiplicity-dependent distributions obtained at the DANCE facility at LANSCE.
Cosmic-ray acceleration at collisionless astrophysical shocks using Monte-Carlo simulations
Wolff, M.; Tautz, R. C.
2015-08-01
Context. The diffusive shock acceleration mechanism has been widely accepted as the acceleration mechanism for galactic cosmic rays. While self-consistent hybrid simulations have shown how power-law spectra are produced, detailed information on the interplay of diffusive particle motion and the turbulent electromagnetic fields responsible for repeated shock crossings is still elusive. Aims: The framework of test-particle theory is applied to investigate the effect of diffusive shock acceleration by inspecting the resulting cosmic-ray energy spectra. The energy spectra can be obtained this way from the particle motion and, depending on the prescribed turbulence model, the influence of stochastic acceleration through plasma waves can be studied. Methods: A numerical Monte-Carlo simulation code is extended to include collisionless shock waves. This allows one to trace the trajectories of test particles while they are being accelerated. In addition, the diffusion coefficients can be obtained directly from the particle motion, which allows for a detailed understanding of the acceleration process. Results: The classic result of an energy spectrum proportional to E⁻² is only reproduced for parallel shocks, while, for all other cases, the energy spectral index is reduced depending on the shock obliquity.
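The power-law outcome of diffusive shock acceleration at a parallel shock can be reproduced with a toy test-particle model: each shock-crossing cycle multiplies the particle energy by a fixed factor, and each cycle carries a fixed escape probability. The per-cycle values below are chosen so the integral spectrum index is about -1 (differential index -2); they are illustrative and not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
N, p_esc, gain = 100_000, 0.1, 1 / 9   # per-cycle escape probability and energy gain (toy)

E = np.ones(N)                         # injection energy (arbitrary units)
alive = np.ones(N, dtype=bool)
for _ in range(300):                   # enough cycles for every particle to escape
    esc = alive & (rng.random(N) < p_esc)
    alive &= ~esc                      # escaped particles keep their current energy
    E[alive] *= 1 + gain               # survivors gain energy on the next crossing
    if not alive.any():
        break

# Integral spectrum N(>E) follows a power law with index ln(1-p_esc)/ln(1+gain) ~ -1
Es = np.sort(E)
n_above = np.arange(N, 0, -1)          # number of particles with energy >= Es[i]
mask = (Es > 2) & (Es < 100)           # fit away from the injection energy
slope = np.polyfit(np.log(Es[mask]), np.log(n_above[mask]), 1)[0]
print(slope)
```

The slope follows from counting: a particle survives k cycles with probability (1-p)^k while reaching energy (1+g)^k, so ln N(>E)/ln E = ln(1-p)/ln(1+g).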
Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne, 5000 Monastir (Tunisia); Faculte des Sciences de Monastir, 5000 Monastir (Tunisia)
2012-10-15
Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry, for photon energies from 10 keV to 150 keV at 5 keV intervals, is presented. Methods: The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data in computing the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution for broad beam transmission through barriers in shielded x-ray facilities.
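The broad-beam computation described here combines the buildup factor with exponential attenuation, T(x) = B(mu x) exp(-mu x), from which the value layers follow by root finding. A sketch with an assumed Taylor-form buildup parametrization; the coefficients and mu are invented, not the paper's data.

```python
import math

mu = 0.5                     # 1/cm at some photon energy (assumed)

def buildup(mux, A=1.2, a1=-0.08, a2=0.05):
    """Two-exponential (Taylor) parametrization of the buildup factor;
    the coefficients are illustrative, so buildup(0) = 1 by construction."""
    return A * math.exp(-a1 * mux) + (1 - A) * math.exp(-a2 * mux)

def transmission(x):
    """Broad-beam transmission: buildup times narrow-beam attenuation."""
    return buildup(mu * x) * math.exp(-mu * x)

def value_layer(fraction):
    """Thickness where transmission drops to `fraction` (bisection search)."""
    lo, hi = 0.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if transmission(mid) > fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

hvl, tvl = value_layer(0.5), value_layer(0.1)
print(hvl, tvl)
```

With buildup included, the first half value layer is thicker than the narrow-beam ln(2)/mu, which is exactly why broad-beam data are needed for shielding design.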
Karim Karoui, Mohamed [Faculte des Sciences de Monastir, Avenue de l' environnement 5019 Monastir -Tunisia (Tunisia); Kharrati, Hedi [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia)
2013-07-15
Purpose: This paper presents the results of a series of calculations to determine buildup factors for ordinary concrete, baryte concrete, lead, steel, and iron in broad beam geometry, for photon energies from 0.125 to 25.125 MeV at 0.250 MeV intervals. Methods: The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: The computation of the primary broad beams using the buildup factor data was done for nine published megavoltage photon beam spectra ranging from 4 to 25 MV in nominal energy, representing linacs made by the three major manufacturers. The first tenth value layer and the equilibrium tenth value layer are calculated from the broad beam transmission for these nine primary megavoltage photon beam spectra. Conclusions: The results, compared with published data, show the ability of these buildup factor data to predict shielding transmission curves for the primary radiation beam. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to perform computation of broad beam transmission through barriers in radiotherapy shielding x-ray facilities.
Monte Carlo simulation of an x-ray luminescence optical tomography scanner prototype
Rosas-González, S.; Martínez-Dávalos, A.; Rodríguez-Villafuerte, M.; Murrieta-Rodríguez, T. [Instituto de Física, Universidad Nacional Autónoma de México, A.P. 20-364, 01000 (Mexico)]
2014-11-07
In this work we report the calculation of the deposited energy distribution produced by an x-ray luminescence optical tomography (XLOT) system in a phantom containing different concentrations of Gd₂O₂S:Eu nanoparticles. The calculations were performed via Monte Carlo simulation considering spectra from a W-target x-ray tube operating between 30 and 90 kVp, with 1.0 mm Al added filtration. CT and XLOT tomographic images were reconstructed from the same data. The results show that XLOT has better detectability than CT alone; that the dose scales linearly with kVp for a fixed concentration of Gd₂O₂S:Eu and air-kerma rate; that the scattered radiation contribution to the total dose and signal is about 20%; and that the dose ratio for a 3 mm diameter insert containing 10 mg/ml Gd₂O₂S embedded in a 30 mm diameter water phantom is 6:1, dropping to less than 2:1 for a 1 mg/ml concentration. Finally, we show that the method of conjugate images can be used to correct for artifacts due to attenuation effects in XLOT images.
Monte Carlo tolerancing tool using nonsequential ray tracing on a computer cluster
Reimer, Christopher
2010-08-01
The development of a flexible tolerancing tool for illumination systems based on Matlab® and Zemax® is described in this paper. Two computationally intensive techniques are combined: Monte Carlo tolerancing and non-sequential ray tracing. Implementation of the tool on a computer cluster allows for relatively rapid tolerancing. This paper describes the tool structure and how the tolerancing task is split between Zemax and Matlab. An equation is derived that determines the number of simulated rays needed to accurately resolve illumination uniformity. Two examples of tolerancing illuminators are given. The first is a projection system consisting of a pico-DLP, a light pipe, a TIR prism and the critical illumination relay optics. The second is a wide-band, high-performance Köhler illuminator, which includes a modified molded LED as the light source. As high-performance illumination systems evolve, the practice of applying standard workshop tolerances to these systems may need to be re-examined.
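The ray-count equation itself is not reproduced in the abstract, but the same quantity, the number of rays needed before per-bin Monte Carlo noise stops masking uniformity errors, can be sketched with a standard counting-statistics estimate; this is an illustration of the idea, not Reimer's exact formula.

```python
import math

def rays_needed(n_bins, rel_tol, z=3.0):
    """Rays required so that each detector bin's Monte Carlo noise,
    which scales as 1/sqrt(n), stays below rel_tol at z-sigma confidence.
    Assumes rays spread roughly evenly over n_bins detector bins."""
    per_bin = math.ceil((z / rel_tol) ** 2)   # rel. error ~ z/sqrt(per_bin)
    return n_bins * per_bin

# resolving 1% uniformity on a 100-bin detector plane
N = rays_needed(n_bins=100, rel_tol=0.01)
print(N)
```

The quadratic dependence on the tolerance is the practical point: halving the uniformity tolerance quadruples the ray count, which is why cluster-level parallelism matters for Monte Carlo tolerancing.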
PCXMC, a Monte Carlo program for calculating patient doses in medical x-ray examinations
Tapiovaara, M.; Siiskonen, T.
2008-11-15
PCXMC is a Monte Carlo program for calculating patients' organ doses and effective doses in medical x-ray examinations. The organs and tissues considered in the program are: active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), extrathoracic airways, gall bladder, heart, kidneys, liver, lungs, lymph nodes, muscle, oesophagus, oral mucosa, ovaries, pancreas, prostate, salivary glands, skeleton, skin, small intestine, spleen, stomach, testicles, thymus, thyroid, urinary bladder and uterus. The program calculates the effective dose with both the present tissue weighting factors of ICRP Publication 103 (2007) and the old tissue weighting factors of ICRP Publication 60 (1991). The anatomical data are based on the mathematical hermaphrodite phantom models of Cristy and Eckerman (1987), which describe patients of six different ages: new-born, 1, 5, 10, 15-year-old and adult patients. Some changes have been made to these phantoms in order to make them more realistic for external irradiation conditions and to enable the calculation of the effective dose according to the new ICRP Publication 103 tissue weighting factors. The phantom sizes are adjustable to mimic patients of arbitrary weight and height. PCXMC allows free adjustment of the x-ray beam projection and other examination conditions of projection radiography and fluoroscopy.
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const
Cecilia Maya
2004-12-01
Full Text Available The Monte Carlo method is applied to several cases of financial option pricing. The method yields a good approximation when its accuracy is compared with that of other numerical methods. The estimate produced by crude Monte Carlo can be made even more accurate by resorting to variance reduction techniques, among which the antithetic variate and control variate methods are suggested. However, these techniques require greater computational effort, so they must be evaluated not only in terms of their accuracy but also of their efficiency.
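The antithetic-variate idea can be sketched for a European call under Black-Scholes dynamics (a generic textbook setup; the function name and parameter values are illustrative, not taken from the paper):

```python
import math
import random

def mc_euro_call(S0, K, r, sigma, T, n, antithetic=True, seed=1):
    """Crude vs antithetic-variate Monte Carlo pricing of a European
    call under Black-Scholes dynamics (illustrative sketch)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        payoff = max(S0 * math.exp(drift + vol * z) - K, 0.0)
        if antithetic:
            # pair each draw with its mirror image -z; the two payoffs
            # are negatively correlated, so their average has lower noise
            mirror = max(S0 * math.exp(drift - vol * z) - K, 0.0)
            payoff = 0.5 * (payoff + mirror)
        total += payoff
    return math.exp(-r * T) * total / n

# at-the-money call: the Black-Scholes closed form gives about 10.45
price = mc_euro_call(100, 100, 0.05, 0.2, 1.0, 50_000)
```

The antithetic pair costs one extra payoff evaluation per draw, which is the efficiency trade-off the abstract refers to.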
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict the behaviour of systems. However, its powerful incremental design assumes a strong premise which has severely limited its application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities onto a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant production, we prove the real-world usability of this advance on four test cases that were so far regarded as impracticable for Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
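The warm-up topics in the outline, estimating π and inverse-transform sampling, fit in a few lines of Python (illustrative code, not taken from the lecture slides):

```python
import math
import random

def estimate_pi(n, seed=0):
    """Estimate pi: the fraction of uniform points in the unit square
    that land inside the quarter circle tends to pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def sample_exponential(lam, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U) / lam has an Exponential(lam) distribution."""
    return -math.log(1.0 - rng.random()) / lam

pi_hat = estimate_pi(200_000)
rng = random.Random(1)
mean_exp = sum(sample_exponential(2.0, rng) for _ in range(100_000)) / 100_000
```

By the Law of Large Numbers both estimates converge, with the Central Limit Theorem giving the familiar 1/√n error shrinkage.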
Monte Carlo validation of optimal material discrimination using spectral x-ray imaging
Nik, Syen J; Watts, Richard; Dale, Tony; Currie, Bryn; Meyer, Juergen
2014-01-01
The validation of a previous work on the optimization of material discrimination in spectral x-ray imaging is reported. Using Monte Carlo simulations based on the BEAMnrc package, material decomposition was performed on the projection images of phantoms containing up to three materials. The simulated projection data was first decomposed into material basis images by minimizing the z-score between expected and simulated counts. Statistical analysis was performed for the pixels within the region-of-interest consisting of contrast material(s) in the BEAMnrc simulations. With the consideration of scattered radiation and a realistic scanning geometry, the theoretical optima of energy bin borders provided by the algorithm were shown to have an accuracy of ±2 keV for the decomposition of 2 and 3 materials. Finally, the signal-to-noise ratio predicted by the theoretical model was also validated. The counts per pixel needed for achieving a specific imaging aim can therefore be estimated using the validated model.
Monte Carlo dosimetry for forthcoming clinical trials in x-ray microbeam radiation therapy
MartInez-Rovira, I; Bravin, A; Prezado, Y [ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), B.P. 220, 6 Jules Horowitz, F-38043 Grenoble Cedex (France); Sempau, J [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, E-08028 Barcelona (Spain); Fernandez-Varea, J M, E-mail: yolanda.prezado@esrf.f [Facultat de Fisica (ECM and ICC), Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)
2010-08-07
The purpose of this work is to define safe irradiation protocols in microbeam radiation therapy. The intense synchrotron-generated x-ray beam used for the treatment is collimated and delivered in an array of 50 μm-sized rectangular fields with a centre-to-centre distance between microplanes of 400 μm. The absorbed doses received by the tumour and the healthy tissues in a human head phantom have been assessed by means of Monte Carlo simulations. The identification of safe dose limits is carried out by evaluating the maximum peak and valley doses achievable in the tumour while keeping the valley doses in the healthy tissues under tolerances. As the skull receives a significant fraction of the dose, the dose limits are referred to this tissue. Dose distributions with high spatial resolution are presented for various tumour positions, skull thicknesses and interbeam separations. Considering a unidirectional irradiation (field size of 2×2 cm²) and a centrally located tumour, the largest peak and valley doses achievable in the tumour are 55 Gy and 2.6 Gy, respectively. The corresponding maximum valley doses received by the skin, bone and healthy brain are 4 Gy, 14 Gy and 7 Gy (doses in one fraction), respectively, i.e. within tolerances (5% probability of complication within 5 years).
Woei Leow, Shin; Corrado, Carley; Osborn, Melissa; Isaacson, Michael; Alers, Glenn; Carter, Sue A.
2013-06-01
Luminescent solar concentrators (LSC) collect ambient light from a broad range of angles and concentrate the captured light onto photovoltaic (PV) cells. LSCs with front-facing cells collect direct and indirect sunlight, ensuring a gain factor greater than one. The flexible placement and percentage coverage of PV cells on the LSC panel allow for layout adjustments to be made in order to balance re-absorption losses and the level of light concentration desired. A weighted Monte Carlo ray tracing program was developed to study the transport of photons and loss mechanisms in the LSC to aid in design optimization. The program imports measured absorption/emission spectra of an organic luminescent dye (LR305), the transmission coefficient, and refractive index of acrylic as parameters that describe the system. Simulations suggest that for LR305, 8-10 cm of luminescent material surrounding the PV cell yields the highest increase in power gain per unit area of LSC added, thereby determining the ideal spacing between PV cells in the panel. For rectangular PV cells, results indicate that for each centimeter of PV cell width, an additional increase of 0.15 mm to the waveguide thickness is required to efficiently transport photons collected by the LSC to the PV cell with minimal loss.
MGEANT: a generic multi-purpose Monte-Carlo simulation package for gamma-ray experiments
Seifert, H; Sturner, S J; Teegarden, B J
1997-01-01
The authors present a generic multi-purpose Monte-Carlo simulation package, based on GEANT and the CERN Program Library, which is appropriate for gamma-ray astronomy, and allows the rapid prototyping of a wide variety of detector systems. Instrument specific geometry and materials data can simply be supplied via input files, and are independent of the main code. The user can select from a standard set of event generators and beam options (implemented as "plug-ins") which would be needed in an astrophysical or instrument calibration context. The philosophy of this approach is to facilitate the implementation of new software add-ons and changes in the instrumental setup. This is especially useful for projects which involve a large group of developers and collaborators. MGEANT has been successfully used to perform background calculations and to generate the instrument response for the WIND/TGRS instrument, and is currently being used to study the background, sensitivity, response, and imaging capabilities of the...
Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.
2013-09-01
Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings, in addition to rooftop installations of current solar panels. Using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto a smaller area of photovoltaic (PV) cells, there is potential for this technology to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process, to balance reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte-Carlo ray tracing program was developed to study the transport of photons and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
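The probabilistic photon transport described above can be caricatured in one dimension (a toy model with made-up parameter values, far simpler than the weighted ray tracer in the paper):

```python
import math
import random

def lsc_transport(n_photons, length_cm, abs_len_cm=5.0, qe=0.95,
                  trap_eff=0.75, seed=0):
    """Toy 1-D luminescent-concentrator photon transport sketch.

    A trapped photon travels toward the PV edge at x = 0; re-absorption
    events occur at exponentially distributed distances with mean
    `abs_len_cm`.  A re-absorbed photon is re-emitted with quantum
    efficiency `qe` and stays trapped in the waveguide with probability
    `trap_eff`, otherwise it is lost.  All parameter values are
    illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_photons):
        x = rng.uniform(0.0, length_cm)   # where sunlight is absorbed
        alive = True
        while alive:
            # distance to the next re-absorption event
            step = -abs_len_cm * math.log(1.0 - rng.random())
            if step >= x:                 # reaches the PV edge
                delivered += 1
                break
            x -= step
            # re-emission survives with probability qe * trap_eff
            alive = rng.random() < qe * trap_eff
    return delivered / n_photons

frac = lsc_transport(20_000, length_cm=10.0)
```

Even this crude model reproduces the basic design trade-off: longer panels concentrate more light but deliver a smaller fraction of it.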
A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors
Wu, Dufan; Xu, Xiaofei; Zhang, Li; Wang, Sen
2016-09-01
In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility, and Monte Carlo simulations require too much knowledge of the detector. In this paper, we propose a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Numerical models were then used to describe the processes of charge sharing, anti-charge sharing and spectral broadening, which are too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models, which can be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements, and the model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.
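The spectral-broadening step of such a hybrid model can be sketched as a Gaussian smearing of the deposited-energy histogram (the fixed FWHM here stands in for the calibrated free parameters; all values are illustrative):

```python
import math

def broaden(spectrum, energies_keV, fwhm_keV):
    """Apply Gaussian energy broadening to a binned spectrum.

    Models detector resolution as a fixed-FWHM Gaussian smearing of the
    deposited-energy histogram.  In a real calibration the FWHM (or an
    energy-dependent version of it) would be a free parameter fitted to
    fluorescence peaks; the value used below is illustrative.
    """
    sigma = fwhm_keV / 2.355          # FWHM -> standard deviation
    out = [0.0] * len(spectrum)
    for i, counts in enumerate(spectrum):
        if counts == 0.0:
            continue
        for j, e in enumerate(energies_keV):
            de = e - energies_keV[i]
            out[j] += counts * math.exp(-0.5 * (de / sigma) ** 2)
    # renormalize so the total number of counts is preserved
    scale = sum(spectrum) / sum(out)
    return [c * scale for c in out]

energies = [float(e) for e in range(100)]
line = [0.0] * 100
line[60] = 1000.0                     # monoenergetic 60 keV deposition
smeared = broaden(line, energies, fwhm_keV=8.0)
```

Applied after the GEANT4 energy-deposition stage, this turns sharp deposition lines into the finite-resolution peaks actually recorded by the detector.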
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
Kalyvas, N.; Liaparinos, P.
2014-03-01
Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors; among the most important is the attenuation of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the assumptions of analytical modelling. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modelling decrease with increasing particle size (in the range 1–12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.
LMC: Logarithmantic Monte Carlo
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the essence of the XPT. The other is that coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed, based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, the method takes both the energy and reflectivity information of the X-rays into consideration, formulating the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method; thermal-structural coupling analyses under different working conditions are then carried out. Second, the mirror deformations are obtained using a construction geometry function, a polynomial function is fitted to the deformed mirror, and the fitting error is evaluated. Third, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the law governing the effect of deformation on the focusing performance has been obtained: the focusing performances under thermal-structural, thermal and structural deformations are degraded by 30.01%, 14.35% and 7.85%, respectively, with RMS dispersion spots of 2.9143 mm, 2.2038 mm and 2.1311 mm. These results verify the validity of the proposed method.
Rehman Shakeel U.
2009-01-01
Full Text Available A primary-interaction based Monte Carlo algorithm has been developed for determining the total efficiency of cylindrical scintillation γ-ray detectors. The methodology has been implemented in a Matlab-based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3″×3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code; about two orders of magnitude change in detector efficiency has been observed as the axial distance varies from zero to 50 cm. For small axial separations, a similarly large variation in total efficiency has also been observed for 137Cs as well as 60Co sources on increasing the axial offset from zero to 50 cm.
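A primary-interaction estimate of the total efficiency for an on-axis point source can be sketched as follows (a minimal version of the idea, not the BPIMC code; the geometry handling and the attenuation coefficient are illustrative, and off-axis sources are not covered):

```python
import math
import random

def total_efficiency(d_cm, h_cm, z_cm, mu_cm, n=200_000, seed=0):
    """Monte Carlo total efficiency of a cylindrical scintillator for an
    on-axis isotropic point source at z_cm from the front face.

    Samples isotropic directions, computes the chord each ray travels
    inside the crystal, and scores the primary-interaction probability
    1 - exp(-mu * chord).  mu (total attenuation coefficient, 1/cm) is
    an illustrative input, not a value from the paper.
    """
    rng = random.Random(seed)
    r = d_cm / 2.0
    acc = 0.0
    for _ in range(n):
        c = rng.uniform(-1.0, 1.0)       # isotropic: cos(theta) ~ U(-1,1)
        if c <= 0.0:
            continue                     # emitted away from the detector
        s = math.sqrt(1.0 - c * c)
        rho = z_cm * s / c               # radius where ray meets the face
        if rho >= r:
            continue                     # misses the crystal
        # chord ends at the back face or the side wall, whichever first
        chord = min(h_cm / c, (r - rho) / s if s > 0.0 else math.inf)
        acc += 1.0 - math.exp(-mu_cm * chord)
    return acc / n

# 3"x3" NaI(Tl) crystal (7.62 cm x 7.62 cm), source 10 cm away
eff = total_efficiency(7.62, 7.62, 10.0, mu_cm=0.3)
```

The steep drop of efficiency with source distance reported in the abstract falls out of the shrinking solid angle in this same geometry.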
Lakshmanan, Manu N. [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Kapadia, Anuj J., E-mail: anuj.kapadia@duke.edu [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Sahbaee, Pooyan [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Dept. of Physics, NC State University, Raleigh, NC (United States); Wolter, Scott D. [Dept. of Physics, Elon University, Elon, NC (United States); Harrawood, Brian P. [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Brady, David [Dept. of Electrical and Computer Engineering, Duke University, Durham, NC (United States); Samei, Ehsan [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Dept. of Electrical and Computer Engineering, Duke University, Durham, NC (United States)
2014-09-15
The analysis of X-ray scatter patterns has been demonstrated as an effective method of identifying specific materials in mixed object environments, for both biological and non-biological applications. Here we describe an X-ray scatter imaging system for material identification in cluttered objects and investigate its performance using a large-scale Monte Carlo simulation study of one-thousand objects containing a broad array of materials. The GEANT4 Monte Carlo source code for Rayleigh scatter physics was modified to model coherent scatter diffraction in bulk materials based on experimentally measured form factors for 33 materials. The simulation was then used to model coherent scatter signals from a variety of targets and clutter (background) materials in one thousand randomized objects. The resulting scatter images were used to characterize four parameters of the imaging system that affected its ability to identify target materials: (a) the arrangement of materials in the object, (b) clutter attenuation, (c) type of target material, and (d) the X-ray tube current. We found that the positioning of target materials within the object did not significantly affect their detectability; however, a strong negative correlation was observed between the target detectability and the clutter attenuation of the object. The imaging signal was also found to be relatively invariant to increases in X-ray tube current above 1 mAs for most materials considered in the study. This work is the first Monte Carlo study to our knowledge of a large population of cluttered object of an X-ray scatter imaging system for material identification and lays the foundation for large-scale studies of the effectiveness of X-ray scatter imaging systems for material identification in complex samples.
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Monte Carlo methods for electromagnetics
Sadiku, Matthew NO
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
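The fixed random walk method for Laplace's equation, one of the techniques the book covers, can be sketched as follows (a minimal illustration, not code from the book):

```python
import random

def laplace_walk(boundary, nx, ny, x0, y0, walks=4000, seed=0):
    """Fixed random walk Monte Carlo solution of Laplace's equation on a
    rectangular grid: the potential at an interior node equals the mean
    boundary value reached by symmetric random walks started there."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, y = x0, y0
        # step to a random neighbour until a boundary node is hit
        while 0 < x < nx - 1 and 0 < y < ny - 1:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary(x, y)
    return total / walks

# square plate with the top edge at 100 V and the other edges grounded;
# by symmetry the exact potential at the centre is 25 V
def bc(x, y, nx=21, ny=21):
    return 100.0 if y == ny - 1 else 0.0

v_center = laplace_walk(bc, 21, 21, 10, 10)
```

Each walk is an independent sample, so the method parallelizes trivially, at the cost of the usual 1/√N statistical convergence.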
A new Monte Carlo code for simulation of the effect of irregular surfaces on X-ray spectra
Brunetti, Antonio, E-mail: brunetti@uniss.it; Golosio, Bruno
2014-04-01
Generally, quantitative X-ray fluorescence (XRF) analysis estimates the content of chemical elements in a sample from the areas of the fluorescence peaks in the energy spectrum. Besides the concentration of the elements, the peak areas also depend on the geometrical conditions. The estimate of the peak areas is simple if the sample surface is smooth and the spectrum has good statistics (large-area peaks); for this reason the sample is often prepared as a pellet. However, this approach is not always feasible, for instance when cultural heritage or otherwise valuable samples must be analyzed: in this case the sample surface cannot be smoothed. To address this problem, several works have been reported in the literature, based either on experimental measurements of a few sets of specific samples or on Monte Carlo simulations. The results obtained with the first approach are limited by the specific class of samples analyzed, while the second approach cannot be applied to arbitrarily irregular surfaces. The present work describes a more general analysis tool based on a new fast Monte Carlo algorithm, which is able to simulate virtually any kind of surface. To the best of our knowledge, it is the first Monte Carlo code with this option. A study of the influence of surface irregularities on the measured spectrum is performed and some results are reported. - Highlights: • We present a fast Monte Carlo code with the possibility to simulate arbitrarily rough surfaces. • We show applications to multilayer measurements. • Real time simulations are available.
Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis
Brunetti, Antonio, E-mail: brunetti@uniss.it [Polcoming Department, University of Sassari (Italy); Golosio, Bruno [Polcoming Department, University of Sassari (Italy); Schoonjans, Tom; Oliva, Piernicola [Chemical and Pharmaceutical Department, University of Sassari (Italy)
2015-06-01
The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayer, such as bronze with patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples.
X-ray imaging plate performance investigation based on a Monte Carlo simulation tool
Yao, M., E-mail: philippe.duvauchelle@insa-lyon.fr [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Duvauchelle, Ph.; Kaftandjian, V. [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Peterzol-Parmentier, A. [AREVA NDE-Solutions, 4 Rue Thomas Dumorey, 71100 Chalon-sur-Saône (France); Schumm, A. [EDF R& D SINETICS, 1 Avenue du Général de Gaulle, 92141 Clamart Cedex (France)
2015-01-01
Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement technique for traditional film-based industrial radiography. For investigating the IP performance especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks separately direct and secondary radiations, and monitors the behavior of different particles. The simulation output provides 3D distribution of deposited energy in IP and evaluation of radiation spectrum propagation allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis, on the spectral and spatial responses of IP at different energies up to MeV, has been performed. - Highlights: • A Monte Carlo tool for imaging plate (IP) performance investigation is presented. • The tool outputs 3D maps of energy deposition in IP due to different signals. • The tool also provides the transmitted spectra along the radiation propagation. • An industrial imaging case is simulated with the presented tool. • A detailed analysis, on the spectral and spatial responses of IP, is presented.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper considers some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
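The accept/reject step that all of these generalizations build on can be sketched in a few lines; the target density (a standard normal) and the uniform proposal width below are illustrative choices, not taken from the paper:

```python
import math
import random

def metropolis(logp, x0, step, n):
    """Minimal Metropolis sampler: symmetric uniform proposal,
    accept with probability min(1, p(x')/p(x))."""
    x, samples = x0, []
    for _ in range(n):
        xp = x + random.uniform(-step, step)
        if math.log(random.random()) < logp(xp) - logp(x):
            x = xp  # accept the move; otherwise keep the old state
        samples.append(x)
    return samples

# Illustrative target: a standard normal, log p(x) = -x^2/2 up to a constant.
random.seed(1)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 50000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

Because only the ratio p(x')/p(x) appears, the normalizing constant of the target never needs to be known, which is what makes the method usable for the many-body distributions discussed above.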
Kohei Arai
2013-01-01
Full Text Available A Monte Carlo Ray Tracing (MCRT) based sensitivity analysis of the geophysical parameters (the atmosphere and the ocean) on Top of the Atmosphere (TOA) radiance in the visible to near-infrared wavelength regions is conducted. As a result, it is confirmed that the influence due to the atmosphere is greater than that of the ocean. Scattering and absorption due to aerosol particles and molecules in the atmosphere are the major contributions, followed by water vapor and ozone, while scattering due to suspended solids is the dominant contribution among the ocean parameters.
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector allowed us to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathematical ...
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-21
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulation of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, such as on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only on a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
Grau Carles, A.; Garcia Gomez-Tejedor, G.
2001-07-01
The final objective of any ionization chamber is the measurement of the amount of energy, or radiation dose, absorbed by the gas in the chamber. The final value depends on the composition of the gas, its density and temperature, the ionization chamber geometry, and the type and intensity of the radiation. We describe a Monte Carlo simulation method which allows one to compute the dose absorbed by the gas for an X-ray beam. Verification of the model has been carried out by simulating the attenuation of standard X-ray radiation through the half-value layers established in the ISO 4037 report, while assuming a Weibull-type energy distribution for the incident photons. (Author) 6 refs.
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computation of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we test integrated cross sections and execution times for programs in FORTRAN and C on the CPU and those on the GPU. The integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...
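The crude (non-adaptive) estimator that VEGAS and BASES refine can be sketched as follows; the integrand, dimensionality, and sample count are illustrative, not from the paper:

```python
import math
import random

def mc_integrate(f, dim, n, rng):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube [0,1]^dim, returning the mean and a crude standard error."""
    total = total_sq = 0.0
    for _ in range(n):
        fx = f([rng.random() for _ in range(dim)])
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = max(total_sq / n - mean * mean, 0.0)  # sample variance of f
    return mean, math.sqrt(var / n)

# Example: the integral of x1*x2*...*x8 over the 8-cube is (1/2)^8.
est, err = mc_integrate(math.prod, 8, 200000, random.Random(42))
```

The standard error shrinks as n^(-1/2) regardless of dimension, which is why the same estimator parallelizes so naturally: independent sample batches can be distributed across GPU threads and summed afterwards.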
3D polymer gel dosimetry and Geant4 Monte Carlo characterization of novel needle based X-ray source
Liu, Y.; Sozontov, E.; Safronov, V.; Gutman, G.; Strumban, E.; Jiang, Q.; Li, S.
2010-11-01
In recent years, there have been a few attempts to develop low-energy x-ray radiation sources as an alternative to the conventional radioisotopes used in brachytherapy. So far, all efforts have centered around the intent to design an interstitial miniaturized x-ray tube. Though direct irradiation of tumors looks very promising, the known insertable miniature x-ray tubes have many limitations: (a) difficulties with focusing and steering the electron beam to the target; (b) the necessity to cool the target to increase x-ray production efficiency; (c) the impracticability of reducing the diameter of the miniaturized x-ray tube below 4 mm (the requirement to decrease the diameter of the x-ray tube and the need to have a cooling system for the target are mutually exclusive); (d) significant limitations in changing the shape and energy of the emitted radiation. The specific aim of this study is to demonstrate the feasibility of a new concept for an insertable low-energy needle x-ray device, based on simulation with the Geant4 Monte Carlo code, and to measure the dose rate distribution for low-energy (17.5 keV) x-ray radiation with 3D polymer gel dosimetry.
Jovari, P.; Saksl, K.; Pryds, Nini;
2007-01-01
Short range order of amorphous Mg60Cu30Y10 was investigated by x-ray and neutron diffraction, Cu and Y K-edge x-ray absorption fine structure measurements, and the reverse Monte Carlo simulation technique. We found that Mg-Mg and Mg-Cu nearest neighbor distances are very similar to values found i...
MOHAMED M OULD; DIB A S A; BELBACHIR A H
2016-07-01
Cosmic rays cause significant damage to the electronic equipment of aircraft. In this paper, we have investigated the accumulation of the deposited energy of cosmic rays in the Earth's atmosphere, especially in the aircraft region. In fact, if a high-energy neutron or proton interacts with a nanodevice having only a few atoms, this particle can change the nature of the device and destroy it. Our Monte Carlo simulation using the Geant4 code shows that the deposited energy of neutrons with energies between 200 MeV and 5 GeV is strongly concentrated in the region between 10 and 15 km above sea level, which is exactly the avionic region. The Bragg peak energy of protons, however, is localized slightly above the avionic region.
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
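Variance reduction of the kind described above can be illustrated with a toy importance-sampling example for a rare event; the distributions and numbers below are illustrative and not taken from the code being described:

```python
import math
import random

rng = random.Random(0)
N = 200000
true_p = math.exp(-4.0)  # P(X > 4) for X ~ Exp(1), about 0.0183

# Naive Monte Carlo: only ~2% of the samples ever hit the tail of interest.
naive = sum(rng.expovariate(1.0) > 4.0 for _ in range(N)) / N

# Importance sampling: draw from a heavier-tailed Exp(0.25) proposal and
# reweight each tail hit by the likelihood ratio p(x)/q(x).
total = 0.0
for _ in range(N):
    x = rng.expovariate(0.25)
    if x > 4.0:
        total += math.exp(-x) / (0.25 * math.exp(-0.25 * x))
imp = total / N
```

Both estimators are unbiased, but the importance-sampled one concentrates its samples where the rare channel lives, reducing the variance per sample; this is the same principle that makes rare fluorescence lines tractable in the Monte Carlo codes discussed above.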
Short-Term Variability of X-rays from Accreting Neutron Star Vela X-1: II. Monte-Carlo Modeling
Odaka, Hirokazu; Tanaka, Yasuyuki T; Watanabe, Shin; Takahashi, Tadayuki; Makishima, Kazuo
2013-01-01
We develop a Monte Carlo Comptonization model for the X-ray spectrum of accretion-powered pulsars. Simple, spherical, thermal Comptonization models give harder spectra for higher optical depth, while the observational data from Vela X-1 show that the spectra are harder at higher luminosity. This suggests a physical interpretation where the optical depth of the accreting plasma increases with mass accretion rate. We develop a detailed Monte Carlo model of the accretion flow, including the effects of the strong magnetic field ($\sim 10^{12}$ G) both in geometrically constraining the flow into an accretion column and in reducing the cross section. We treat bulk-motion Comptonization of the infalling material as well as thermal Comptonization. These model spectra can match the observed broad-band {\it Suzaku} data from Vela X-1 over a wide range of mass accretion rates. The model can also explain the so-called "low state", in which the luminosity decreases by an order of magnitude. Here, thermal Comptonization sh...
Choi, Yu-Na; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Park, Hye-Suk; Kim, Dae-Hong; Lee, Seung-Wan; Ryu, Hyun-Ju
2011-03-01
A photon counting detector based on semiconductor materials is a very promising approach for x-ray imaging. Cadmium zinc telluride (CZT) semiconductor has a high atomic number, which results in higher absorption coefficients for x-rays. However, CZT detectors exhibit several problems with hole trapping and charge sharing. Charge sharing occurs due to charge diffusion, characteristic x-ray escape, and x-ray scattering in the detector. In this study, we evaluated the effect of these interactions in a CZT detector using Monte Carlo simulations. To demonstrate the effectiveness of the CZT detector in clinical applications, we report confirmation of CNR improvement in K-edge images and material decomposition using energy-selective windows. The x-ray energy spectrum was acquired at a 120 kVp tube voltage with 2 mm Al filtration and a 10 cm water phantom added in the x-ray beam. Geant4 Application for Tomographic Emission (GATE) version 6.0 was used for a CZT crystal with a size of 10x10 mm2 and a thickness of 4 mm. Detector pixels with sizes of 0.09x0.09, 0.45x0.45, and 0.90x0.90 mm2 were simulated. For all pixel sizes, the x-ray spectra of the simulations were distorted towards the lower energy region, because the characteristic x-rays add counts in the range of 20-40 keV. The magnitude of this deterioration is substantial for small pixel sizes. However, we demonstrated that the distortion of the spectrum does not greatly affect the x-ray imaging. The GATE simulation model and these results may be used as a basis for the development of energy-resolved photon counting x-ray detectors. We believe that the CZT detector may enhance the detectability of multi-energy x-ray imaging.
Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Ródenas, José; Verdú, Gumersindo
2014-06-01
An accurate knowledge of the photon spectra emitted by X-ray tubes in radiodiagnostics is essential to better estimate the dose imparted to patients and to improve the image quality obtained with these devices. In this work, we propose the use of a flat panel detector together with a PMMA wedge to estimate the actual X-ray spectrum using the Monte Carlo method and unfolding techniques. The MCNP5 code has been used to model different flat panels (based on indirect and direct methods of producing charge carriers from absorbed X-rays) and to obtain the dose curves and system response functions. Most actual flat panel devices use scintillator materials that present K-edge discontinuities in the mass energy-absorption coefficient, which strongly affect the response matrix. In this paper, the applicability of different flat panels for reconstructing X-ray spectra is studied. The effect of the mass energy-absorption coefficient of the scintillator material on the response matrix, and consequently on the reconstructed spectra, has been studied. Different unfolding methods are tested to reconstruct the actual X-ray spectrum knowing the dose curve and the response function. It is concluded that the MTSVD regularization method is appropriate for unfolding X-ray spectra for all the scintillators studied.
Peng He
2014-01-01
Full Text Available Currently, industrial X-CT systems are designed according to the characteristics of the test objects, which determine the X-CT system structure, the X-ray detector/sensor properties, the scanning mode, and so forth. There are therefore no uniform standards for the geometry of the scintillation crystals of the detector. Moreover, scintillation crystals are usually doped with some highly toxic elements, such as Tl and Cd. Thus, simulating the X-ray detection performance of different scintillation crystals is indispensable for establishing engineering-practice guidelines. This paper focuses on how to achieve highly efficient X-ray detection in industrial X-CT systems, using the Monte Carlo (MC) method to study the X-ray energy straggling characteristics, full-energy peak efficiency, and conversion efficiency of several scintillation crystals (e.g., CsI(Tl), NaI(Tl), and CdWO4) after X-rays interact with them. Our experimental results demonstrate that the CsI(Tl) scintillation crystal has advantages in conversion efficiency, spectral matching, manufacturing process, and full-energy peak efficiency; it is an ideal choice for highly efficient X-ray detection in industrial X-CT systems.
Lee, Seung Kyu; Seo, Hee; Won, Byung Hee; Lee, Hyun Su; Park, Se-Hwan; Kim, Ho-Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
The XRF technique compares the measured pulse heights of the U and Pu peaks, which are self-induced characteristic x rays emitted from U and Pu, to quantify elemental U and Pu. The measurement of the U and Pu x-ray peak ratio provides information on the relative concentrations of the U and Pu elements. Photon measurements of spent nuclear fuel using high-resolution spectrometers show a large background continuum in the low-energy x-ray region, in large part from Compton scattering of energetic gamma rays. The high Compton continuum can make measurements of plutonium x rays difficult because of the relatively small signal-to-background ratio produced. In pressurized water reactor (PWR) spent fuels with low plutonium content (~1%), the signal-to-background ratio may be too low to obtain an accurate plutonium x-ray measurement. A Compton suppression system has been proposed to reduce the Compton continuum background. In the present study, the feasibility of a Compton suppression system for XRF was evaluated by MCNP Monte Carlo simulations and measurements of a radiation source. Experiments using a standard gamma-ray source showed that the peak-to-total ratios were improved by a factor of three when the Compton suppression system was used.
Alcaraz, Olga; Trullàs, Joaquim; Tahara, Shuta; Kawakita, Yukinobu; Takeda, Shin'ichi
2016-09-01
The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å-1 related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.
Paixao, L.; Oliveira, B. B.; Nogueira, M. do S. [Centro de Desenvolvimento da Tecnologia Nuclear, Post-graduation in Science and Technology of Radiations, Minerals and Materials, Pte. Antonio Carlos 6.627, Pampulha, 31270-901 Belo Horizonte (Brazil); Viloria, C. [UFMG, Departamento de Engenharia Nuclear, Post-graduation in Nuclear Sciences and Techniques, Pte. Antonio Carlos 6.627, Pampulha, 31270-901 Belo Horizonte (Brazil); Alves de O, M. [UFMG, Department of Anatomy and Imaging, Prof. Alfredo Balena 190, 30130-100 Belo Horizonte (Brazil); Araujo T, M. H., E-mail: lpr@cdtn.br [Dr Maria Helena Araujo Teixeira Clinic, Guajajaras 40, 30180-100 Belo Horizonte (Brazil)
2014-08-15
It is widely accepted that the mean glandular dose (D{sub G}) to the glandular tissue is the most useful quantity for characterizing breast cancer risk. Because D{sub G} is difficult to measure directly in the breast, the usual procedure to estimate it is to use conversion factors that relate the incident air kerma (K{sub i}) to this dose. Generally, the conversion factors vary with the x-ray spectrum half-value layer and the breast composition and thickness. Several authors have calculated such factors through computer simulations with the Monte Carlo (MC) method. Many spectral models suitable for D{sub G} computer simulations are available in the diagnostic range; one of the available models generates unfiltered spectra. In this work, the Monte Carlo EGSnrc code package with the C++ class library (egspp) was employed to derive the filtered tungsten x-ray spectra used in digital mammography systems. Filtered spectra for rhodium and aluminium filters were obtained for tube potentials between 26 and 32 kV. The half-value layers of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F and Mam Detector Platinum and 8201023-C Xi Base Unit Platinum Plus) in a Hologic Selenia Dimensions system using a Direct Radiography mode. The calculated half-value layer values showed good agreement with those obtained experimentally. These results show that the filtered tungsten anode x-ray spectra and the EGSnrc MC code can be used for D{sub G} determination in mammography. (Author)
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence, and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that, under appropriate assumptions, the attractive property of a reduction of the computational effort to estimate expectations for a given level of error can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
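The telescoping identity at the heart of MLMC (without the SMC layer this paper adds) can be sketched on a toy stochastic differential equation; the model, parameters, and per-level sample counts below are illustrative assumptions:

```python
import math
import random

# Toy problem: estimate E[S_T] for the GBM dS = r*S*dt + sig*S*dW with an
# Euler scheme of step h_l = T / 2^l, coupling fine and coarse paths by
# sharing the Brownian increments.
r, sig, S0, T = 0.05, 0.2, 1.0, 1.0
rng = random.Random(7)

def level_estimator(l, n):
    """Mean of P_l - P_{l-1} over n coupled paths (with P_{-1} := 0)."""
    nf = 2 ** l          # number of fine steps at this level
    hf = T / nf
    total = 0.0
    for _ in range(n):
        sf = sc = S0
        if l == 0:
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += r * sf * hf + sig * sf * dw
        else:
            for _ in range(nf // 2):
                dw1 = rng.gauss(0.0, math.sqrt(hf))
                dw2 = rng.gauss(0.0, math.sqrt(hf))
                sf += r * sf * hf + sig * sf * dw1       # two fine steps
                sf += r * sf * hf + sig * sf * dw2
                sc += r * sc * (2 * hf) + sig * sc * (dw1 + dw2)  # one coarse step
        total += sf - (sc if l > 0 else 0.0)
    return total / n

# Telescoping sum: E[P_L] = sum over levels of E[P_l - P_{l-1}],
# with fewer samples spent on the (more expensive) finer levels.
estimate = sum(level_estimator(l, 20000 >> l) for l in range(4))
```

Because the coupled fine/coarse differences have small variance, most samples can be placed on the cheap coarse levels, which is the source of the computational saving the abstract refers to (the exact value here is E[S_T] = S0·e^{rT}).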
Virgilli, E; Rosati, P; Bonnini, E; Buffagni, E; Ferrari, C; Stephen, J B; Caroli, E; Auricchio, N; Basili, A; Silvestri, S
2015-01-01
We report results of the observation of the focusing effect from the (220) planes of Gallium Arsenide (GaAs) crystals. We have compared the experimental results with simulations of the focusing capability of the GaAs tiles performed with a Monte Carlo code developed for this purpose. The GaAs tiles were bent using a lapping process developed at CNR/IMEM - Parma (Italy) in the framework of the LAUE project, funded by ASI, dedicated to building a broad-band Laue lens prototype for astrophysical applications in the hard X-/soft gamma-ray energy range (80-600 keV). We present and discuss the results obtained from their characterization, mainly in terms of focusing capability. Bent crystals will significantly increase the signal-to-noise ratio of a telescope based on a Laue lens, consequently leading to an unprecedented enhancement of sensitivity with respect to present non-focusing instrumentation.
Itoh, Keiji
2017-02-01
Pulsed neutron diffraction and synchrotron X-ray diffraction measurements were performed on Se100-xTex bulk glasses with x=10, 20, 30 and 40. The coordination numbers obtained from the diffraction results demonstrate that Se and Te atoms are twofold coordinated and the glass structure is formed by the chain network. The three-dimensional structure model for Se60Te40 glass obtained by using reverse Monte Carlo modelling shows that the alternating arrangements of Se and Te atoms compose the major part of the chain clusters but several other fragments such as Sen chains and Te-Te dimers are also present in large numbers. The chain clusters have geometrically disordered forms and the interchain atomic order is different from those in the crystal structures of trigonal Se and trigonal Te.
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are the high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
Monte Carlo Hamiltonian: Linear Potentials
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and the nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Kalkanis, G.; Sarris, M. M.
1999-01-01
Describes an educational software program for the study of, and detection methods for, cosmic-ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates the paths and interactions of muons and Cherenkov photons and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…
Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell' Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell' Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)
2014-11-08
X-ray fluorescence (XRF) is a well known nondestructive technique. It is also applied to multilayer characterization, due to its possibility of estimating both composition and thickness of the layers. Several kinds of cultural heritage samples can be considered as a complex multilayer, such as paintings or decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces and this makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take into account this aspect. In this paper, we propose a novel approach based on a combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach allows obtaining a very good estimation of the sample contents both in terms of chemical elements and material thickness, and in this sense, represents an improvement of the possibility of XRF measurements. Some examples will be examined and discussed. (orig.)
Tomal, A. [Universidade Federale de Goias, Instituto de Fisica, Campus Samambaia, 74001-970, Goiania, (Brazil); Lopez G, A. H.; Santos, J. C.; Costa, P. R., E-mail: alessandra_tomal@yahoo.com.br [Universidade de Sao Paulo, Instituto de Fisica, Rua du Matao Travessa R. 187, Cidade Universitaria, 05508-090 Sao Paulo (Brazil)
2014-08-15
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo simulation in the energy range from 5 to 150 keV, using the Penelope code. The simulated response functions included the finite detector resolution and carrier transport. The simulated energy response matrix was validated through comparison with experimental results obtained for radioactive sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 70 keV, the contribution of the escape of Cd and Te K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are for the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by different models from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental to achieving more accurate spectra, from which several quality parameters (i.e., half-value layer, effective energy and mean energy) can be determined. (Author)
Pruet, J; Prussin, S; Descalle, M; Hall, J
2004-02-03
A Monte Carlo method for the estimation of β-delayed γ-ray spectra following fission is described that can accommodate an arbitrary time-dependent fission rate and photon collection history. The method invokes direct sampling of the independent fission yield distributions of the fissioning system, the branching ratios for decay of individual fission products, and the spectral distributions for photon emission for each decay mode. Though computationally intensive, the method can provide a detailed estimate of the spectrum that would be recorded by an arbitrary spectrometer, and can prove useful in assessing the quality of evaluated data libraries, for identifying gaps in these libraries, etc. The method is illustrated by a first comparison of calculated and experimental spectra from decay of short-lived fission products following the reactions 235U(n_th, f) and 239Pu(n_th, f). For general-purpose transport calculations, where detailed consideration of the large number of individual γ-ray transitions in a spectrum may be unnecessary, it is shown that an accurate and simple parameterization of a γ-ray source function can be obtained. These parametrizations should provide high-quality average spectral distributions that should prove useful in calculations describing photons escaping from thick attenuating media.
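The sampling chain described above (fission yield → decay branch → photon energies) can be sketched with a toy nuclide table; the nuclide labels, yields, branching ratios, and energies below are entirely hypothetical placeholders, not evaluated-library values:

```python
import random

# Hypothetical yield and decay-branch tables (illustrative numbers only).
yields = [("A", 0.058), ("B", 0.048), ("C", 0.044)]
branches = {  # nuclide -> list of (branch probability, gamma energies in keV)
    "A": [(0.8, (602.0,)), (0.2, (602.0, 908.0))],
    "B": [(1.0, (814.0,))],
    "C": [(1.0, (1428.0,))],
}

def sample_discrete(pairs, rng):
    """Draw an item from (item, weight) pairs by inverse-CDF sampling."""
    u = rng.random() * sum(w for _, w in pairs)
    acc = 0.0
    for item, w in pairs:
        acc += w
        if u < acc:
            return item
    return pairs[-1][0]  # guard against floating-point round-off

rng = random.Random(3)
spectrum = {}
for _ in range(10000):
    nuclide = sample_discrete(yields, rng)                       # which product
    gammas = sample_discrete([(g, p) for p, g in branches[nuclide]], rng)  # which branch
    for e in gammas:                                             # tally each photon
        spectrum[e] = spectrum.get(e, 0) + 1
```

A time-dependent fission rate would enter by weighting each sampled decay with the fission and collection history, as the abstract describes; the tallied histogram plays the role of the spectrometer record.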
[Anonymous]
2006-01-01
Pipe holdup measurement is very important for decommissioning nuclear facilities and for nuclear-material control and accounting. The absolute detection efficiencies (εsp) of the full-energy γ-ray peaks under different source density distribution functions were simulated using Monte Carlo (MC) software, the counting rates (n0) of the characteristic γ rays were measured using a γ spectrometer, and the holdup was then calculated. The holdup is affected by the energy of the γ rays, the detection distance, and the pipe material, thickness and source distribution, especially the source distribution at short distances. A comparative test with 235U reference materials on the inner wall of Fe and Al pipes (total 235U masses of 44.6 mg and 222.8 mg, respectively) was carried out using this method. The determined 235U masses are 43.2 mg (U0.95rel = 5.4%) and 216.2 mg (U0.95rel = 3.2%), respectively, in accordance with the reference values.
Pazirandeh, Ali; Azizi, Maryam; Farhad Masoudi, S
2006-01-01
Among many conventional techniques, nuclear techniques have been shown to be faster, more reliable, and more effective in detecting explosives. In the present work, neutrons from a 5 Ci Am-Be neutron source placed in a water tank are captured by elements of the soil and the landmine (TNT), namely (14)N, H, C, and O. The prompt capture gamma-ray spectrum taken by a NaI(Tl) scintillation detector shows the characteristic photopeaks of the elements in soil and landmine. In the high-energy region of the gamma-ray spectrum, besides the 10.829 MeV line of (15)N, the single escape (SE) and double escape (DE) peaks are unmistakable photopeaks, which make the detection of a concealed explosive possible. The soil has the property of moderating neutrons as well as diffusing the thermal neutron flux. Among the many elements in soil, silicon is particularly abundant, and (29)Si emits a 10.607 MeV prompt capture gamma ray, which makes detection of the 10.829 MeV line difficult. Monte Carlo simulation was used to adjust the source-target-detector distances and the soil moisture content to yield the best result. We therefore applied MCNP4C to a configuration very close to the reality of a landmine hidden in soil.
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, the CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that converts in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised the stoichiometric calibration procedure for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm⁻³ and 1.1 g cm⁻³ occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
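The two-step polyline (piecewise-linear) conversion described above can be sketched as follows. All node values below are hypothetical placeholders for illustration, not the calibrated data of the paper:

```python
# Sketch of a two-step CT conversion: CT number -> mass density, then
# mass density -> stopping-power ratio, each via a polyline relation.
# Node values are invented placeholders, not real calibration data.

def interp_polyline(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x, clamped at the ends."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# Step 1: CT number (HU) -> mass density (g/cm^3), hypothetical calibration.
hu_nodes      = [-1000.0, 0.0, 100.0, 1500.0]
density_nodes = [0.001, 1.0, 1.07, 1.85]

# Step 2: mass density -> stopping-power ratio, hypothetical polyline.
rho_nodes = [0.001, 1.0, 1.1, 1.85]
spr_nodes = [0.001, 1.0, 1.08, 1.70]

def ct_to_spr(hu):
    rho = interp_polyline(hu, hu_nodes, density_nodes)
    return interp_polyline(rho, rho_nodes, spr_nodes)

print(ct_to_spr(0.0))  # water-like voxel
```

Decoupling the two steps in this way means only step 1 needs recalibration when the CT system changes, which is the practical benefit the abstract points to.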
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
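A particle-list file of this kind can be illustrated with a fixed-size binary record. The layout below (PDG code, position, direction, kinetic energy, weight) is an invented stand-in for the sketch, not the actual MCPL specification:

```python
import struct

# Illustrative fixed-size binary particle record in the spirit of a
# particle-list interchange format: int32 PDG code followed by eight
# little-endian float64 fields (x, y, z, ux, uy, uz, ekin, weight).
# This layout is invented for the sketch; it is NOT the MCPL format.
RECORD = struct.Struct("<i8d")

def write_particles(path, particles):
    with open(path, "wb") as fh:
        for p in particles:
            fh.write(RECORD.pack(*p))

def read_particles(path):
    out = []
    with open(path, "rb") as fh:
        while chunk := fh.read(RECORD.size):
            out.append(RECORD.unpack(chunk))
    return out

particles = [
    (2112, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0e-8, 1.0),  # thermal-ish neutron
    (22, 1.0, 2.0, 3.0, 0.0, 1.0, 0.0, 6.62e-7, 0.5),   # photon, weight 0.5
]
write_particles("particles.bin", particles)
print(read_particles("particles.bin"))
```

A flat record stream like this is trivially seekable and appendable, which is why such formats suit hand-off between independent simulation stages.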
Applications of Monte Carlo Methods in Calculus.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
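The Riemann-sum application can be sketched in a few lines: average f at uniformly sampled points and scale by the interval length (a generic illustration, not code from the article):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly sampled points: the Monte Carlo analogue of a Riemann sum."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
print(estimate)
```

The standard error shrinks as n^(-1/2), so the same code also serves the article's "testing conjectures" theme: run it with growing n and watch the estimate settle.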
Alxneit, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
The program RAY was developed to perform Monte-Carlo simulations of the flux distribution in solar reactors in connection with an arbitrary heliostat field. The code accounts for the shading of the incoming rays from the sun due to the reactor supporting tower as well as for full blocking and shading of the heliostats among themselves. A simplified falling particle reactor (FPR) was evaluated. A central receiver field was used with a total area of 311 m{sup 2} composed of 176 round, focusing heliostats. No attempt was undertaken to optimise either the geometry of the heliostat field nor the aiming strategy of the heliostats. The FPR was evaluated at two different geographic latitudes (-8.23W/47.542N; PSI and -8.23W/20.0N) and during the course of a day (May 30{sup th}). The incident power passing through the reactor aperture and the flux density distribution within the FPR was calculated. (author) 3 figs., 1 tab., 3 refs.
Amaro, Pedro; Santos, José Paulo; Samouco, Ana; Adão, Ricardo; Martins, Luís Souto; Weber, Sebastian; Tashenov, Stanislav; Carvalho, Maria Luisa; Pessanha, Sofia
2017-04-01
In this study, we investigated the potential of the Geant4 Monte Carlo simulation package for retrieving accurate elemental concentrations from energy dispersive X-ray fluorescence spectra. For this purpose, we implemented a Geant4 code that simulates an energy dispersive X-ray fluorescence spectrometer in a triaxial geometry. In parallel, we also performed measurements in a spectrometer with the same geometry, for validation of the present code. This spectrometer allows low limits of detection and permits an effective comparison of elemental concentrations down to tens of parts per million. Several standard reference materials of light, medium and heavy matrices were employed in order to attest the validity of the simulations over a range of averaged atomic numbers. We observed agreement to better than 25% for most fluorescence lines of interest, and for all materials. Discrepancies were observed at the multiple Compton scattering tail. We thus conclude from this experimental and theoretical study that the present Geant4 code can be incorporated in a quantitative method for the determination of trace elements in a triaxial-type spectrometer.
Mahdavi, Naser; Shamsaei, Mojtaba; Shafaei, Mostafa; Rabiei, Ali
2013-10-01
The objective of this study was to design a system to analyze gold and other heavy elements in internal organs using in vivo x-ray fluorescence (XRF) analysis. The Monte Carlo N-Particle code MCNP was used to simulate phantoms and sources. A 99mTc source was simulated in the kidney to excite the gold x-rays. Changes in the K XRF response due to variations in the tissue thickness overlying the kidney at the measurement site were investigated. Simulations with tissue thicknesses of 20, 30, 40, 50 and 60 mm were performed, and the Kα1 and Kα2 lines were measured at all depths. The linearity of the XRF system was also studied by increasing the gold concentration in the kidney phantom from 0 to 500 µg g-1 kidney tissue. The results show that gold concentrations between 3 and 10 µg g-1 kidney tissue can be detected for skin-to-kidney distances of 20-60 mm. The study also compared the skin doses for the source outside and inside the phantom.
Saghamanesh, S.; Aghamiri, S. M.; Kamali-Asl, A.; Yashiro, W.
2017-09-01
An important challenge in real-world biomedical applications of x-ray phase contrast imaging (XPCI) techniques is the efficient use of the photon flux generated by an incoherent and polychromatic x-ray source. This efficiency directly influences dose and exposure time and ideally should not affect the superior contrast and sensitivity of XPCI. In this paper, we present a quantitative evaluation of the photon detection efficiency of two laboratory-based XPCI methods, grating interferometry (GI) and coded-aperture (CA). We adopt a Monte Carlo approach to simulate existing prototypes of those systems, tailored for mammography applications. Our simulations were validated by means of a simple experiment performed on a CA XPCI system. Our results show that the fraction of detected photons in the standard energy range of mammography is about 1.4% and 10% for the GI and CA techniques, respectively. The simulations indicate that the design of the optical components plays an important role in the higher efficiency of CA compared to the GI method. It is shown that the use of lower-absorbing materials as the substrates for GI gratings can improve its flux efficiency by up to four times. Along similar lines, we also show that an optimized and compact GI configuration could lead to a 3.5 times higher fraction of detected counts compared to a standard, non-optimised GI implementation.
(U) Introduction to Monte Carlo Methods
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
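The mechanics such a cook-book treatment starts from can be illustrated with the simplest transport problem: photons crossing a purely absorbing slab, where sampled exponential free paths reproduce the Beer-Lambert law. This is a generic illustration, not code from the report:

```python
import math
import random

def slab_transmission(mu, thickness, n=200_000, seed=1):
    """Fraction of photons crossing a purely absorbing slab: each history
    samples one exponential free path s = -ln(xi)/mu and survives if s
    exceeds the slab thickness.  (1 - random() keeps xi in (0, 1].)"""
    rng = random.Random(seed)
    survived = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / mu > thickness
    )
    return survived / n

mu, L = 0.5, 2.0           # attenuation coefficient (1/cm), thickness (cm)
mc = slab_transmission(mu, L)
exact = math.exp(-mu * L)  # Beer-Lambert transmission
print(mc, exact)
```

Each "term" of a fuller transport equation (scattering, absorption, sources) adds one more sampling step to this same history loop, which is the sense in which the approach is a cook book.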
Maleka, PP; Maucec, M
2005-01-01
Monte Carlo method was used to simulate the pulse-height response function of high-precision germanium (HPGe) detector for photon energies below 1 MeV. The calculations address the uncertainty estimation due to inadequate specifications of source positioning and to variations in the detector's physi
Density matrix quantum Monte Carlo
Blunt, N S; Spencer, J S; Foulkes, W M C
2013-01-01
This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...
Efficient kinetic Monte Carlo simulation
Schulze, Tim P.
2008-02-01
This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented: one that combines the use of inverted-list data structures with rejection Monte Carlo, and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
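The rejection idea can be sketched in its simplest null-event form (omitting the paper's inverted-list bookkeeping): pick a site uniformly, accept the event with probability rate/r_max, and advance time by an exponential increment scaled by n·r_max, giving O(1) work per attempt:

```python
import random

def rejection_kmc(rates, t_end, seed=2):
    """Minimal null-event (rejection) KMC sketch.  'rates' maps site -> rate
    and is held fixed here; in a real simulation an accepted event would
    update the configuration and the affected rates."""
    rng = random.Random(seed)
    sites = list(rates)
    n = len(sites)
    r_max = max(rates.values())
    t, events = 0.0, 0
    while True:
        t += rng.expovariate(n * r_max)   # time to next attempted event
        if t > t_end:
            return events
        site = sites[rng.randrange(n)]    # uniform site choice: O(1)
        if rng.random() < rates[site] / r_max:
            events += 1                   # accepted: execute the event here

# Two-rate toy model: expected event count ~ (1.0 + 0.25) * t_end = 1250.
counts = rejection_kmc({"a": 1.0, "b": 0.25}, t_end=1000.0)
print(counts)
```

The cost per attempt never touches the full rate table, which is the property the paper's size-independent algorithms preserve even when rates are heterogeneous.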
Zane, S; Turolla, R; Nobili, L
2009-01-01
Within the magnetar scenario, the "twisted magnetosphere" model appears very promising in explaining the persistent X-ray emission from the Soft Gamma Repeaters and the Anomalous X-ray Pulsars (SGRs and AXPs). In the first two papers of the series, we presented a 3D Monte Carlo code for solving radiation transport as soft, thermal photons emitted by the star surface are resonantly upscattered by magnetospheric particles. A spectral model archive has been generated and implemented in XSPEC. Here we report on the systematic application of our spectral model to different XMM-Newton and INTEGRAL observations of SGRs and AXPs. We find that the synthetic spectra provide a very good fit to the data for nearly all the sources (and source states) we have analyzed.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Michael Giles, Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
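The uniform-grid multilevel estimator that this work generalizes can be sketched for geometric Brownian motion, where E[X_T] = x0·exp(μT) is known. This is a standard textbook illustration with fixed time grids, not the paper's adaptive algorithm:

```python
import math
import random

def mlmc_gbm(mu, sigma, x0, T, levels=5, n0=20_000, seed=3):
    """Multilevel forward Euler Monte Carlo estimate of E[X_T] for the SDE
    dX = mu*X dt + sigma*X dW.  Level l uses 2**l uniform steps; the fine
    and coarse paths of each correction share the same Brownian increments."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(levels + 1):
        n = n0 // 2 ** level or 1           # fewer samples on finer levels
        nf = 2 ** level                      # fine-grid step count
        dt_f = T / nf
        acc = 0.0
        for _ in range(n):
            dw = [rng.gauss(0.0, math.sqrt(dt_f)) for _ in range(nf)]
            xf = x0
            for w in dw:                     # fine Euler path
                xf += mu * xf * dt_f + sigma * xf * w
            if level == 0:
                acc += xf
            else:                            # coarse path: summed increments
                xc, dt_c = x0, 2 * dt_f
                for i in range(0, nf, 2):
                    xc += mu * xc * dt_c + sigma * xc * (dw[i] + dw[i + 1])
                acc += xf - xc               # level-correction sample
        total += acc / n                     # telescoping sum over levels
    return total

# E[X_T] = x0 * exp(mu * T) = exp(0.05) for these parameters.
print(mlmc_gbm(mu=0.05, sigma=0.2, x0=1.0, T=1.0))
```

The variance of the level corrections shrinks with dt, so most samples can be spent on the cheap coarse level; the adaptive method of the paper replaces the uniform grids here with stochastic, path-dependent time steps.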
Maucec, M.; Hendriks, Peter; Limburg, J.; de Meijer, R. J.
2009-01-01
The analysis of natural gamma-ray spectra measured in boreholes has to take into account borehole parameters such as the presence of casings and borehole diameter. For large, high-efficiency gamma-ray detectors, such as BGO-based systems, which employ full-spectrum data analysis, corresponding corre
Tomal, A; Santos, J C; Costa, P R; Lopez Gonzales, A H; Poletti, M E
2015-06-01
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. The response calculations included the carrier transport features and the detector resolution. The computed energy response function was validated through comparison with experimental results obtained with (241)Am and (152)Eu sources. In order to investigate the influence of the correction by the detector response at the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the contribution of the escape of Cd- and Te-K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for achieving more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
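The stripping procedure mentioned in both CdTe abstracts can be illustrated on a toy response matrix: work from the highest channel down, divide by the photopeak efficiency, and subtract the inferred tail from the lower channels. The 3x3 response below is made up for the sketch, not a real CdTe response:

```python
# Stripping correction sketch.  Column j of R is the measured spectrum of a
# monoenergetic line at channel j; events only spill DOWN in energy, so R is
# upper-triangular and the incident spectrum can be recovered top-down.

def strip(measured, R):
    n = len(measured)
    m = list(measured)
    true = [0.0] * n
    for j in reversed(range(n)):          # highest energy channel first
        true[j] = m[j] / R[j][j]          # divide by photopeak efficiency
        for i in range(j):                # remove its tail/escape counts
            m[i] -= true[j] * R[i][j]
    return true

# Toy response: diagonal = photopeak fraction, above-diagonal rows = spill-down.
R = [
    [0.90, 0.10, 0.05],
    [0.00, 0.80, 0.15],
    [0.00, 0.00, 0.70],
]
incident = [100.0, 50.0, 200.0]
measured = [sum(R[i][j] * incident[j] for j in range(3)) for i in range(3)]
print(strip(measured, R))  # recovers [100, 50, 200] up to float rounding
```

In practice the response matrix comes from the Monte Carlo simulation the abstracts describe, with one simulated monoenergetic line per column.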
Karimi Jashni, Hojatollah; Safigholi, Habib; Meigooni, Ali S
2014-10-23
Dose calculations in current brachytherapy treatment planning systems (TPS) are commonly based on the TG-43U1 formalism. These TPS superpose single-source dosimetric parameters obtained in liquid water, neglecting the effects of tissue heterogeneity. In this work, the sensitivity of the TG-43U1 based radial dose function (g(r)) of Miniature Electronic Brachytherapy X-ray Sources (MEBXS) to bone heterogeneity was examined. To quantify the heterogeneity effects on g(r), a series of Monte Carlo (MC) based radiation transport simulations at the center of homogeneous and heterogeneous spherical phantoms were performed using the MCNP5 code. The ratio of g(r) in the heterogeneous phantom to that in the uniform soft-tissue phantom was determined as a function of bone thickness. These results indicated that for a 40 keV beam, the maximum ratios for thicknesses of 1 cm and 2 cm were 3.36 and 3.27, respectively. These values changed to 4.28 and 4.06, respectively, for a 60 keV beam. Introduction of 0.5 cm or 1 cm of red marrow into the interior of the cortical bone changed the maximum variations to 3.54 and 3.57 for 40 keV, and 4.28 and 4.25 for 60 keV, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fabian, M.; Svab, E.; Krezhov, K.
2016-09-01
Rare-earth molybdate glasses were prepared by a rapid quench technique, and their network structure was investigated by neutron and high-energy X-ray diffraction. For data evaluation, the reverse Monte Carlo simulation technique was applied to obtain a possible three-dimensional network configuration consistent with the experimental data. From the modelling, the partial atomic correlation functions gij(r) and the coordination number distributions CNij were revealed. Formation of MoO4 (55%) and MoO6 (25%) units was established for the binary 90MoO3-10Nd2O3 glass. The B-O first-neighbour distribution shows a relatively broad first-neighbour distance at 1.40 Å, and the average coordination numbers show the presence of trigonal BO3 and tetrahedral BO4 groups. For the 50MoO3-25Nd2O3-25B2O3 sample, mixed MoO4-BO4 and MoO4-BO3 linkages form pronounced intermediate-range order.
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S4 and LC11 quadratures, and a moment model using the M1 closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC11 is shown to be more accurate than the commonly used S4 quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study where the M1 method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M1 results agree well with other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M1 results are comparable to DOM S4.
Comptonization and radiation spectra of X-ray sources: calculation by the Monte Carlo method
Pozdnyakov, L. A.; Sobol, I. M.; Sonyayev, R. A.
1980-01-01
The results of computations of the Comptonization of low-frequency radiation in weakly relativistic plasma are presented. The influence of photoabsorption by iron ions on the hard X-ray spectrum is considered.
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining energy dispersive X-ray fluorescence spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique, and made it possible to determine the bronze alloy composition together with the thickness of the surface layers without previously removing the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.
范钦敏; 刘亚雯; et al.
1995-01-01
The simulation approach includes such processes as photon emission from the X-ray tube with a spectral distribution, total reflection on the sample support, the photoelectric effect in a thin-layer sample, as well as characteristic line absorption and detection. The calculated results are in agreement with experimental ones.
Monte Carlo Study on Focus Properties of Portable Ultrabright Microfocus X-Ray Sources
WANG Kai-Ge; WANG Lei; LIU Wen-Qing; NIU Han-Ben
2006-01-01
The construction and electrode potential of the emitting system are very important for portable ultrahigh-brightness microfocus x-ray sources. The ratio Dw/H (where Dw is the diameter of the Wehnelt grid aperture and H is the setting height of the cathode) and the grid bias are the determining parameters for the emission current and focusing properties of the electron beam.
Graefe, J.L., E-mail: grafejl@mcmaster.ca [Department of Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, L8S 4K1 (Canada); McNeill, F.E.; Chettle, D.R.; Byun, S.H. [Department of Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, L8S 4K1 (Canada)
2012-06-15
We have extended our previous experimental and Monte Carlo work on the detection of Gd by in vivo prompt gamma neutron activation analysis to include X-ray emission. In this paper we incorporate the characteristic K X-ray emission that occurs due to internal conversion from the de-excitation of the ¹⁵⁵Gd(n,γ)¹⁵⁶Gd* and ¹⁵⁷Gd(n,γ)¹⁵⁸Gd* reactions. The experimental Gd K X-ray intensities are compared with the Monte Carlo model and demonstrate excellent agreement. The experiment was consistently higher than simulation by 5%. For the detection system used, the Gd Kα X rays are about 1.5 times as intense as the most dominant prompt gamma ray from the ¹⁵⁷Gd(n,γ) reaction. The partial elemental cross section for Kα X-ray emission is ≈1.35 times larger than that of the most dominant prompt gamma ray from neutron capture of ¹⁵⁷Gd alone. The use of the K X rays was found to improve the sensitivity of the proposed system to measure Gd retention after exposure to a Gd-based MRI contrast agent. The detection limit in phantoms was ≈30% better when the X-ray signal was incorporated into the analysis method, reducing the detection limit from 0.89 to 0.64 ppm Gd.
Geometrical and Monte Carlo projectors in 3D PET reconstruction
Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec
2010-01-01
Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of the image voxel to the line of response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...
吴永鹏; 汤彬
2012-01-01
Usually there are several methods to calculate the response functions of LaBr3(Ce) detectors, e.g. experiment, interpolation based on experiment, analytic functions, and Monte Carlo simulation. In logging applications, the experiment-based methods cannot be adopted because of their limitations. Analytic functions have the advantage of fast calculation, but it is very difficult for them to take into account the many effects that occur in practical applications. By contrast, Monte Carlo simulation can handle physical and geometric configurations very flexibly, and has a distinct advantage for calculating response functions in complex borehole configurations. A new application of the LaBr3(Ce) detector is a natural gamma-ray borehole spectrometer for uranium well logging. Calculation of the response functions must consider a series of physical and geometric factors under complex logging conditions, including the earth formations and their relevant parameters, different energies, the material and thickness of the casings, the fluid between the two tubes, and the position of the LaBr3(Ce) crystal relative to the steel ingot at the front of the logging tube. The present work establishes Monte Carlo simulation models for the above situations and then performs calculations for the main gamma rays of the natural radioelement series. The response functions can offer experimental guidance for the design of the borehole detection system, and provide a technical basis and basic data for spectral analysis of natural gamma rays and for sourceless calibration in uranium quantitative interpretation.
Vergnaud, T.; Nimal, J.C. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))
1990-01-01
The three-dimensional polykinetic Monte Carlo particle transport code TRIPOLI has been under development in the French Shielding Laboratory at Saclay since 1965. TRIPOLI-1 began to run in 1970 and became TRIPOLI-2 in 1978; since then its capabilities have been improved and many studies have been performed. TRIPOLI can treat stationary or time-dependent problems in shielding and in neutronics. Some examples of solved problems are presented to demonstrate the many possibilities of the system. (author).
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
Monte Carlo approach to turbulence
Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik
2009-11-15
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)
Simulation of ultrasoft X-rays induced DNA damage using the Geant4 Monte Carlo toolkit
Tajik, Marjan; Rozatian, Amir S. H.; Semsarha, Farid
2015-01-01
In this study, the total yields of SSBs and DSBs induced by monoenergetic electrons with energies of 0.28-4.55 keV, corresponding to ultrasoft X-ray energies, have been calculated in the Charlton and Humm volume model using the Geant4-DNA toolkit and compared with theoretical and experimental data. Reasonable agreement between the results obtained in the present study and the experimental and theoretical data of previous studies showed the efficiency of this model in estimating the total yield of strand breaks in spite of its simplicity. It has also been found that in the low energy region the total SSB yield remains nearly constant while the DSB yield increases with decreasing energy. Moreover, a direct dependency between DSB induction, the RBE value and the mean lineal energy as a microdosimetric quantity has been observed. In addition, it has become clear that using a threshold energy of 10.79 eV to calculate the total strand-break yields gives better agreement with the experiments, while the threshold of 17.5 eV shows a large difference.
Composite analysis with Monte Carlo methods: an example with cosmic rays and clouds
Laken B.A.
2013-09-01
Full Text Available The composite (superposed epoch) analysis technique has been frequently employed to examine a hypothesized link between solar activity and the Earth's atmosphere, often through an investigation of Forbush decrease (Fd) events (sudden high-magnitude decreases in the flux of cosmic rays impinging on the upper atmosphere, lasting up to several days). This technique is useful for isolating low-amplitude signals within data where background variability would otherwise obscure detection. The application of composite analyses to investigate the possible impacts of Fd events involves a statistical examination of time-dependent atmospheric responses to Fds, often from aerosol and/or cloud datasets. Despite the publication of numerous results within this field, clear conclusions have yet to be drawn, and much ambiguity and disagreement remain. In this paper, we argue that the conflicting findings of composite studies within this field relate to methodological differences in the manner in which the composites have been constructed and analyzed. Working from an example, we show how a composite may be objectively constructed to maximize signal detection, robustly identify statistical significance, and quantify the lower-limit uncertainty related to hypothesis testing. We also demonstrate how a seemingly significant false positive may be obtained from non-significant data by minor alterations to methodological approaches.
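The Monte Carlo significance test argued for above can be sketched as follows. The time series, event dates, and sample sizes are all synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily cloud-cover-like series of pure noise (no real signal);
# all numbers here are illustrative assumptions.
n_days = 3650
series = rng.normal(loc=60.0, scale=5.0, size=n_days)

# Hypothetical Fd event dates, kept away from the series edges.
valid = np.arange(20, n_days - 20)
events = rng.choice(valid, size=30, replace=False)

def composite_mean(data, keys, lag):
    """Mean of the series at a fixed lag relative to each key date."""
    return data[keys + lag].mean()

observed = composite_mean(series, events, 0)

# Monte Carlo null distribution: composites built from randomly drawn
# dates quantify how large a composite anomaly chance alone produces.
n_mc = 2000
null = np.array([composite_mean(series, rng.choice(valid, size=30, replace=False), 0)
                 for _ in range(n_mc)])

# Two-sided Monte Carlo p-value for the observed composite.
p_value = np.mean(np.abs(null - null.mean()) >= np.abs(observed - null.mean()))
```

Random-date composites supply the null distribution against which the observed composite is judged, which is exactly the kind of hypothesis-testing safeguard the paper advocates.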
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
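As a minimal illustration of the interacting-particle idea behind sequential Monte Carlo, the following sketch weights a particle ensemble by a hypothetical observation likelihood and resamples it; all distributions here are assumptions for illustration, not a method from the book:

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior ensemble: particles drawn from N(0, 2^2).
particles = rng.normal(0.0, 2.0, 1000)

def likelihood(x):
    # Hypothetical observation likelihood favoring x near 1.
    return np.exp(-0.5 * (x - 1.0) ** 2)

# Importance weighting followed by multinomial resampling: high-weight
# particles are duplicated, low-weight particles die out.
weights = likelihood(particles)
weights /= weights.sum()
idx = rng.choice(len(particles), size=len(particles), p=weights)
resampled = particles[idx]
```

For this Gaussian prior/likelihood pair, the resampled ensemble concentrates around the analytic posterior mean of 0.8, illustrating how interaction (resampling) steers the ensemble toward the regions the data favor.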
Monte Carlo Treatment Planning for Advanced Radiotherapy
Cronholm, Rickard
and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
ZHAO Hong-bin; KONG Xiao-xiao; LI Quan-feng; LIN Xiao-qi; BAO Shang-lian
2009-01-01
Objective: In this study, we establish an initial electron beam model by combining the Monte Carlo simulation method with particle dynamics calculation (TRSV) for the single 6 MV X-ray accelerating waveguide of the BJ-6 medical linac. Methods and Materials: 1. We adopted the treatment head configuration of the BJ-6 medical linac made by the Beijing Medical Equipment Institute (BMEI) as the radiation system for this study. 2. We used the particle dynamics calculation code TRSV to derive the initial electron beam parameters: the energy spectrum, the spatial intensity distribution, and the beam incidence angle. 3. We analyzed the 6 MV X-ray beam characteristics PDDc and OARc in a water phantom by Monte Carlo simulation (BEAMnrc, DOSXYZnrc) for the initial electron beam parameters determined by TRSV, compared them with the measured results PDDm and OARm in a real water phantom, and then used the deviations between calculated and measured results to adjust the initial electron beam model iteratively until the deviations were less than 2%. Results: The deviations between the Monte Carlo simulation results of percentage depth doses (PDDc) and off-axis ratios (OARc) and the measured results (PDDm, OARm) in a water phantom were within 2%. Conclusion: When performing Monte Carlo simulations to determine the parameters of an initial electron beam for a particular medical linac such as the BJ-6, adjusting some parameters based on a particle dynamics calculation gives more reasonable and acceptable results.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R. H. P.; Lazopoulos, A.
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
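The standard estimator the abstract refers to can be sketched as follows for a simple one-dimensional integral; the integrand and sample size are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Plain Monte Carlo estimate of the integral of x^2 over [0, 1)
# (exact value 1/3), with the standard error estimator.
n = 100_000
x = rng.random(n)        # independent uniform samples
f = x**2

estimate = f.mean()
# The standard estimator: sample standard deviation over sqrt(n).
# It is valid precisely because the samples are independent, which is
# the assumption that fails for quasi-Monte Carlo point sets.
std_error = f.std(ddof=1) / np.sqrt(n)
```

For a quasi-Monte Carlo point set this same formula would report an error of order 1/sqrt(n), missing the faster convergence the low-discrepancy points actually deliver, which is the gap the paper addresses.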
Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing
2016-01-01
Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational time. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase function. The Ray Carlo method brings traditional ray-tracking into MCRT simulation, which makes computational time independent of LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisition. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configuration for airborne or satellite LiDAR sensors.
Pothoczki, Szilvia, E-mail: pothoczki.szilvia@wigner.mta.hu; Temleitner, László; Pusztai, László [Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, Konkoly-Thege M. út 29-33, 1121 Budapest (Hungary)
2014-02-07
Synchrotron X-ray diffraction measurements have been conducted on liquid phosphorus trichloride, tribromide, and triiodide. Molecular Dynamics simulations for these molecular liquids were performed with a dual purpose: (1) to establish whether existing intermolecular potential functions can provide a picture that is consistent with diffraction data and (2) to generate reliable starting configurations for subsequent Reverse Monte Carlo modelling. Structural models (i.e., sets of coordinates of thousands of atoms) that were fully consistent with experimental diffraction information, within errors, have been prepared by means of the Reverse Monte Carlo method. Comparison with reference systems, generated by hard sphere-like Monte Carlo simulations, was also carried out to demonstrate the extent to which simple space filling effects determine the structure of the liquids (and thus also to estimate the information content of the measured data). Total scattering structure factors, partial radial distribution functions and orientational correlations as a function of distance between the molecular centres have been calculated from the models. In general, more or less antiparallel arrangements of the primary molecular axes are found to be the most favourable orientation of two neighbouring molecules. In liquid PBr{sub 3}, electrostatic interactions seem to play a more important role in determining intermolecular correlations than in the other two liquids; molecular arrangements in both PCl{sub 3} and PI{sub 3} are largely driven by steric effects.
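A toy one-dimensional sketch of the Reverse Monte Carlo acceptance rule, using a pair-distance histogram as a stand-in for the measured diffraction data; the system size, move scale, and error parameter sigma are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "experimental" target: a pair-distance histogram, standing in for
# the measured structure factor (all numbers here are illustrative).
bins = np.linspace(0.0, 3.0, 16)
target, _ = np.histogram(np.abs(rng.normal(1.0, 0.1, 500)), bins=bins, density=True)

def chi2(coords):
    """Misfit between the model's pair-distance histogram and the target."""
    d = np.abs(coords[:, None] - coords[None, :])
    d = d[np.triu_indices(len(coords), k=1)]
    h, _ = np.histogram(d, bins=bins, density=True)
    return np.sum((h - target) ** 2)

coords = rng.random(30) * 3.0        # initial random 1-D configuration
cost = chi2(coords)
initial_cost = cost
sigma = 0.05                         # assumed experimental error scale

for _ in range(2000):
    trial = coords.copy()
    i = rng.integers(len(coords))
    trial[i] = np.clip(trial[i] + rng.normal(0.0, 0.1), 0.0, 3.0)
    new_cost = chi2(trial)
    # RMC acceptance: improvements always accepted; worse moves accepted
    # with a Metropolis-like probability on the chi-squared change.
    if new_cost < cost or rng.random() < np.exp(-(new_cost - cost) / (2 * sigma**2)):
        coords, cost = trial, new_cost
```

The essential point is that moves are judged directly against the experimental data rather than against an interatomic potential, which is why RMC models are "fully consistent with experimental diffraction information, within errors".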
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts: phase space file data produced by the first part of the simulation became the input to the second part. This research was performed with phantom diameters varying from 5 to 35 cm and phantom lengths varying from 10 to 25 cm. Dose distribution data were used to calculate SSDE values using the trapezoidal rule (trapz) function in a Matlab program. The SSDE obtained from this calculation was compared with that in the AAPM report and with experimental data. The normalized SSDE values were between 1.00 and 3.19 across phantom diameters and between 0.96 and 1.07 across phantom lengths. The statistical error in this simulation was 4.98% for varying phantom diameters and 5.20% for varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. In the future, the influence of cylindrical phantom material on SSDE will be studied.
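The final integration step, averaging a dose profile over the phantom length with the trapezoidal rule, can be sketched as follows; the depth positions and the exponential falloff are an illustrative stand-in, not the paper's simulated dose data:

```python
import numpy as np

# Hypothetical positions (cm) and dose values along the phantom axis;
# the exponential falloff is only an illustrative shape.
z = np.linspace(0.0, 15.0, 31)
dose = np.exp(-z / 10.0)

# Trapezoidal rule, mirroring Matlab's trapz as used in the paper.
integral = np.sum((dose[1:] + dose[:-1]) * np.diff(z)) / 2.0
mean_dose = integral / (z[-1] - z[0])
```

For this smooth profile the trapezoidal estimate agrees with the analytic integral 10(1 - e^{-1.5}) to better than 0.1%, which is why a simple trapz call suffices for the length-averaging step.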
Langevin Monte Carlo filtering for target tracking
Iglesias Garcia, Fernando; Bocquel, Melanie; Driessen, Hans
2015-01-01
This paper introduces the Langevin Monte Carlo Filter (LMCF), a particle filter with a Markov chain Monte Carlo algorithm which draws proposals by simulating Hamiltonian dynamics. This approach is well suited to non-linear filtering problems in high dimensional state spaces where the bootstrap filte
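A minimal sketch of a Langevin-type Monte Carlo move of the kind such a filter can use for proposals. This is the Metropolis-adjusted Langevin algorithm (MALA) targeting a standard Gaussian, a simplified stand-in for the paper's Hamiltonian-dynamics proposals and for a real filtering posterior:

```python
import numpy as np

rng = np.random.default_rng(7)

# Standard Gaussian target as a stand-in for the filtering posterior.
def log_target(x):
    return -0.5 * np.sum(x**2)

def grad_log_target(x):
    return -x

def mala_step(x, eps):
    """One Langevin proposal with Metropolis-Hastings correction."""
    prop = x + 0.5 * eps**2 * grad_log_target(x) + eps * rng.normal(size=x.shape)

    def log_q(a, b):  # log density of proposing a from b
        diff = a - (b + 0.5 * eps**2 * grad_log_target(b))
        return -np.sum(diff**2) / (2 * eps**2)

    log_alpha = log_target(prop) + log_q(x, prop) - log_target(x) - log_q(prop, x)
    return prop if np.log(rng.random()) < log_alpha else x

x = np.zeros(3)
chain = []
for _ in range(5000):
    x = mala_step(x, eps=0.8)
    chain.append(x.copy())
samples = np.array(chain[1000:])
```

The gradient-guided drift is what makes such proposals effective in high-dimensional state spaces, where the bootstrap filter's blind proposals degenerate.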
An introduction to Monte Carlo methods
Walter, J. -C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim
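The main principles, proposing random configuration changes and accepting them with the Metropolis rule so that the ensemble samples the Boltzmann distribution, can be sketched on a 1-D Ising chain, whose mean energy per spin is known exactly (-J tanh(beta J) for a long chain) for comparison:

```python
import numpy as np

rng = np.random.default_rng(3)

# Metropolis sampling of a 1-D Ising chain with periodic boundaries.
N, J, beta = 50, 1.0, 0.5
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -J * np.sum(s * np.roll(s, 1))

def sweep(s):
    for _ in range(N):
        i = rng.integers(N)
        # Energy change from flipping spin i (nearest neighbours only).
        dE = 2 * J * s[i] * (s[i - 1] + s[(i + 1) % N])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

for _ in range(1000):            # equilibration
    sweep(spins)

energies = []
for _ in range(5000):            # measurement
    sweep(spins)
    energies.append(energy(spins))

mean_energy_per_spin = np.mean(energies) / N
```

The ensemble average of the energy converges to the exact value without any analytic solution of the partition function, which is precisely the point the abstract makes.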
Challenges of Monte Carlo Transport
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Horowitz, Y.S. [Ben Gurion Univ. of the Negev, Beersheva (Israel); Hirning, C.R. [Ontario Hydro, Whitby (Canada); Yuen, P.; Wong, P. [Chalk River Labs., Ontario (Canada)
1994-10-01
Monte Carlo calculations have been carried out for monoenergetic electrons from 0.1 to 4 MeV irradiating LiF chips in both perpendicular and isotropic geometry. This enabled the calculation of skin dose correction factors (beta factors) for typical beta energy spectra as measured with a beta-ray spectrometer at CANDU nuclear generating stations. The correction factors were estimated by averaging the depth dose distributions for the monoenergetic electrons over the experimentally measured beta-ray spectra. The calculations illustrate the large uncertainty in beta factors arising from the unknown angular distribution of the beta-ray radiation field and uncertainties in the shape of the beta-ray spectra below 500 keV. 28 refs., 8 figs., 2 tabs.
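The averaging step, weighting the monoenergetic depth-dose results by a measured beta-ray spectrum, can be sketched as follows; all tabulated numbers are illustrative assumptions, not the paper's Monte Carlo results:

```python
import numpy as np

# Hypothetical monoenergetic skin-dose values per unit fluence, tabulated
# against electron energy, and a hypothetical measured beta spectrum.
energies = np.array([0.1, 0.5, 1.0, 2.0, 4.0])          # MeV
dose_per_fluence = np.array([0.2, 1.0, 1.4, 1.5, 1.5])  # arbitrary units

spectrum_weight = np.array([0.05, 0.30, 0.35, 0.20, 0.10])  # relative fluence

# Spectrum-averaged correction factor: weighted mean of the monoenergetic
# depth-dose results over the measured beta-ray spectrum.
beta_factor = np.sum(dose_per_fluence * spectrum_weight) / np.sum(spectrum_weight)
```

Because the result is a weighted mean, uncertainties in the low-energy part of the spectrum and in the angular distribution propagate directly into the beta factor, which is the sensitivity the abstract highlights.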
Lee, Taewoong; Lee, Hyounggun; Lee, Wonho, E-mail: wonhol@korea.ac.kr
2015-10-21
This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by {sup 10}B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. The maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
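The MLEM reconstruction step can be sketched on a toy linear system; the system matrix, the sparse source (standing in for boron uptake regions), and the iteration count are assumptions for illustration, not the paper's detector model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy system matrix (detector response) and a sparse source.
A = rng.random((40, 10)) + 0.1
true_source = np.zeros(10)
true_source[3], true_source[7] = 5.0, 2.0
counts = A @ true_source            # noiseless projections for simplicity

# MLEM: a multiplicative update that keeps the estimate non-negative and
# increases the Poisson likelihood at every iteration.
est = np.ones(10)
sensitivity = A.sum(axis=0)
for _ in range(500):
    forward = A @ est
    est *= (A.T @ (counts / forward)) / sensitivity
```

The multiplicative form is what makes MLEM attractive for emission imaging: non-negativity is automatic and no step-size tuning is needed.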
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the problems of inefficient computing time and unreliable results in MCNP5 calculations, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energies below 7 MeV is calculated with the MCNP5 code; in the second, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated with the EGSnrc Monte Carlo code, taking the gamma-ray spectrum from the first step as input. The calculated results agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum, validating the reliability of the two-step method.
Kohei Arai
2012-06-01
Full Text Available A simulation method for sea water containing spherical and non-spherical particles of suspended solids and phytoplankton, based on Monte Carlo Ray Tracing (MCRT), is proposed for identifying non-spherical species of phytoplankton. The simulation results validate the proposed MCRT model and show some possibility of identifying spherical and non-spherical shapes of particles contained in sea water. Furthermore, simulations with different particle shapes, prolate and oblate, show that the Degree of Polarization (DP) depends on shape. Therefore, non-spherical phytoplankton can be identified through polarization measurements of the ocean.
Kohei Arai
2013-04-01
Full Text Available A comparative study of linear and nonlinear mixed pixel models, in which pixels of remote sensing satellite images are composed of several ground cover materials mixed together, is conducted for remote sensing satellite image analysis. The mixed pixel models are based on Cierniewski's ground surface reflectance model. The comparison is carried out using Monte Carlo Ray Tracing (MCRT) simulations. The simulation study clarifies the difference between the linear and nonlinear mixed pixel models and validates the simulation model.
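A linear mixed pixel model of the kind compared in the study can be sketched as follows; the endmember spectra, band count, and abundances are hypothetical numbers for illustration:

```python
import numpy as np

# Hypothetical endmember spectra: rows are 4 spectral bands, columns are
# 3 ground cover materials (illustrative reflectance values).
endmembers = np.array([
    [0.05, 0.30, 0.60],
    [0.08, 0.45, 0.55],
    [0.40, 0.35, 0.50],
    [0.45, 0.25, 0.45],
])

# Linear mixed pixel: band reflectances combine in proportion to the
# area fraction of each material within the pixel.
abundances = np.array([0.5, 0.3, 0.2])
pixel = endmembers @ abundances

# Recover the abundances by least squares (unconstrained, for
# illustration; practical unmixing adds sum-to-one and non-negativity).
recovered, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
```

A nonlinear model adds cross terms from multiple scattering between materials, which is exactly the effect the MCRT simulations make visible.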
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto;
2013-01-01
Abstract Purpose. Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability of predicting the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from...... and pelvis scan were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging was obtained within two hours per...
Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods
NeuroData; Paninski, L
2015-01-01
Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008
Jovari, P.; Saksl, K.; Pryds, Nini
2007-01-01
Short range order of amorphous Mg60Cu30Y10 was investigated by x-ray and neutron diffraction, Cu and Y K-edge x-ray absorption fine structure measurements, and the reverse Monte Carlo simulation technique. We found that Mg-Mg and Mg-Cu nearest neighbor distances are very similar to values found...... studied by differential scanning calorimetry and in situ x-ray powder diffraction. The alloy shows a glass transition and three crystallization events, the first and dominant one at 456 K corresponding to eutectic crystallization of at least three phases: Mg2Cu and most likely cubic MgY and CuMgY....
Kim, J.; Park, J.; Kim, J.; Kim, D. W.; Yun, S.; Lim, C. H.; Kim, H. K.
2016-11-01
For the purpose of designing an x-ray detector system for cargo container inspection, we have investigated the energy-absorption signal and noise in CdWO4 detectors for megavoltage x-ray photons. We describe the signal and noise measures, such as quantum efficiency, average energy absorption, Swank noise factor, and detective quantum efficiency (DQE), in terms of energy moments of absorbed energy distributions (AEDs) in a detector. The AED is determined by using a Monte Carlo simulation. The results show that the signal-related measures increase with detector thickness. However, the improvement of Swank noise factor with increasing thickness is weak, and this energy-absorption noise characteristic dominates the DQE performance. The energy-absorption noise mainly limits the signal-to-noise performance of CdWO4 detectors operated at megavoltage x-ray beam.
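The moment-based Swank factor can be sketched on a toy absorbed-energy distribution; the peak positions and event fractions are illustrative assumptions, not the paper's simulated AED:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy absorbed-energy distribution (AED): a mixture standing in for
# full-absorption and partial-absorption (escape) events.
full_events = rng.normal(6.0, 0.2, 8000)    # MeV deposited
escape_events = rng.normal(3.5, 0.4, 2000)
aed = np.concatenate([full_events, escape_events])

# Energy moments of the AED and the Swank information factor
# I = M1^2 / (M0 * M2); the zero-frequency DQE is the quantum
# efficiency multiplied by this factor.
M0 = float(len(aed))
M1 = aed.sum()
M2 = (aed**2).sum()
swank = M1**2 / (M0 * M2)
```

By the Cauchy-Schwarz inequality the factor is at most 1, and any spread in deposited energy, such as the escape events here, pulls it below 1 and degrades the DQE, which is the energy-absorption noise the abstract describes.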
Monte Carlo approaches to light nuclei
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Monte Carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Lattice gauge theories and Monte Carlo simulations
Rebbi, Claudio
1983-01-01
This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on the lattice gauge theories in general, Monte Carlo techniques and on the results to date. Part two consists of important original papers in this field. These selected reprints involve the following: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations. Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, Simulation of Systems with Fermions.
Quantum Monte Carlo for minimum energy structures
Wagner, Lucas K
2010-01-01
We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
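The idea of recovering a precise minimum by averaging a stochastic process driven by noisy energy estimates can be sketched on a one-dimensional quadratic surface; the surface, step size, and noise level are assumptions standing in for the quantum Monte Carlo estimator, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_gradient(x):
    # True gradient of E(x) = (x - 1)^2 plus statistical noise, as a
    # stand-in for a force estimate from a short Monte Carlo run.
    return 2.0 * (x - 1.0) + rng.normal(0.0, 0.5)

x = 0.0
trajectory = []
for _ in range(5000):
    x -= 0.05 * noisy_gradient(x)
    trajectory.append(x)

# Averaging the noisy trajectory yields a precise estimate of the
# minimum despite the moderate per-step statistical noise.
x_min = np.mean(trajectory[1000:])
```

Each individual iterate fluctuates around the minimum, but the average of the stationary trajectory localizes it far more precisely than any single cheap, noisy evaluation, which mirrors the trade-off the abstract describes.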
Fast quantum Monte Carlo on a GPU
Lutsyshyn, Y
2013-01-01
We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100 times faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler architecture K20 GPU. Kepler-specific optimization is discussed.
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Monte Carlo simulations for plasma physics
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-07-01
Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. It is further applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which are beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
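A toy version of this Monte Carlo collision treatment, deterministic slowing-down plus random pitch-angle scattering of fast test particles, can be sketched as follows; the rates and the Langevin-like update are illustrative assumptions, not a quantitative plasma model:

```python
import numpy as np

rng = np.random.default_rng(11)

n_particles, n_steps, dt = 2000, 400, 0.01
nu_s, nu_d = 1.0, 0.5               # slowing-down and deflection rates

v = np.full(n_particles, 10.0)                 # particle speeds
mu = rng.uniform(-1.0, 1.0, n_particles)       # pitch-angle cosines

for _ in range(n_steps):
    v *= 1.0 - nu_s * dt                       # drag on the speed
    # Pitch-angle scattering: drift toward isotropy plus a random kick.
    kick = np.sqrt(np.clip(nu_d * dt * (1.0 - mu**2), 0.0, None))
    mu += -nu_d * mu * dt + kick * rng.normal(size=n_particles)
    mu = np.clip(mu, -1.0, 1.0)

mean_speed = v.mean()
```

The ensemble of test particles plays the role of the fast ions born in NBI or alpha heating: the drag term thermalizes them while the stochastic term isotropizes the pitch-angle distribution.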
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes; the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this data base further within the CERN LCG framework.
Monte Carlo Algorithms for Linear Problems
DIMOV, Ivan
2000-01-01
MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics and engineering. It is known that these methods give statistical estimates for a functional of the solution by performing random sampling of a certain random variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
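The core idea described above, estimating a functional as the mathematical expectation of a random variable, can be sketched in a few lines. The integrand and sample size below are purely illustrative, not taken from the paper:

```python
import random

def mc_functional(f, n=100_000, rng=None):
    """Estimate the functional J = integral of f over (0, 1) as the
    expectation E[f(U)] of f evaluated at a uniform random variable U."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        total += f(rng.random())
    return total / n

# Example: J = integral of 3x^2 over (0, 1) = 1 exactly.
estimate = mc_functional(lambda x: 3 * x * x)
```

The statistical error of such an estimate shrinks like 1/sqrt(n), independent of the dimension of the integral, which is what makes the approach attractive for high-dimensional linear problems.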
The Feynman Path Goes Monte Carlo
Sauer, Tilman
2001-01-01
Path integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The feasibility of the method was well established in principle by the early eighties; a number of algorithmic improvements have been introduced in the last two decades.
Monte Carlo Hamiltonian: Inverse Potential
LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KRÖGER
2004-01-01
The Monte Carlo Hamiltonian method developed recently allows one to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.
Self-consistent kinetic lattice Monte Carlo
Horsfield, A.; Dunham, S.; Fujitani, Hideaki
1999-07-01
The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.
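A minimal rejection-free kinetic Monte Carlo step of the kind used for lattice diffusion can be sketched as follows. The one-dimensional lattice and the symmetric hop rates are simplifying assumptions for illustration, not the authors' model:

```python
import math
import random

def kmc_step(position, rates, rng):
    """One rejection-free kinetic Monte Carlo step.
    rates: hop rates for each candidate move (here: left, right).
    Returns the new position and the elapsed time increment."""
    total = sum(rates)
    # Choose a move with probability proportional to its rate.
    r = rng.random() * total
    move = 0
    while r >= rates[move]:
        r -= rates[move]
        move += 1
    position += (-1, +1)[move]
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - rng.random()) / total
    return position, dt

rng = random.Random(1)
pos, t = 0, 0.0
for _ in range(1000):
    pos, dt = kmc_step(pos, [1.0, 1.0], rng)  # symmetric hop rates (illustrative)
    t += dt
```

In a real simulation the rates would come from barrier heights via an Arrhenius expression, e.g. from the tight-binding calculations mentioned above, and would depend on the local charge state of the defect.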
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of point sets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
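The "standard Monte Carlo error estimator" referred to above is simply the sample standard error, valid only under the independence assumption that quasi-random point sets violate. A minimal sketch (the sample values are invented):

```python
import math

def mc_mean_and_error(samples):
    """Standard Monte Carlo estimator: the sample mean, together with the
    usual error estimate s / sqrt(N).  Valid only for independent samples,
    which is exactly the assumption that fails for quasi-random points."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

mean, err = mc_mean_and_error([0.9, 1.1, 1.0, 0.8, 1.2])
```

For quasi-random sequences this estimator is typically pessimistic, since the actual error decreases faster than 1/sqrt(N); hence the paper's proposal of a discrepancy-based stochastic estimator.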
Kim, W. R.; Lee, C. S.; Lee, J. K. [Hanyang Univ., Seoul (Korea, Republic of)
2001-10-01
Mathematical phantoms representing the adult female at 0, 3, 6, and 9 months of gestation were constructed, and organ doses and effective doses were calculated in a standard irradiation environment and in an abdominal X-ray examination. The phantoms were based on the ORNL data set, and MCNP4B, a general-purpose Monte Carlo code, was used for the dose calculations. First, organ doses and effective doses of the pregnant female and fetus were calculated for 0.4 and 0.8 MeV broad parallel beams incident from the AP and PA directions. The same calculations were then performed for an abdominal AP X-ray examination. As gestation progressed, the effective dose to the pregnant woman decreased because major organs were shielded by the expanded uterus. The fetal dose at 9 months is lower than that at 6 months because of the shielding effect of the placenta for AP irradiation.
Skidmore, M.S., E-mail: mss16@star.le.ac.u [Space Research Centre, Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH (United Kingdom); Ambrosi, R.M. [Space Research Centre, Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH (United Kingdom)
2010-01-01
Characterising a planetary radiation environment is important to: (1) assess the habitability of a planetary body for indigenous life; (2) assess the risks associated with manned exploration missions to a planetary body and (3) predict/interpret the results that remote sensing instrumentation may obtain from a planetary body (e.g. interpret the gamma-ray emissions from a planetary surface produced by radioactive decay or via the interaction of galactic cosmic rays to obtain meaningful estimates of the concentration of certain elements on the surface of a planet). The University of Leicester is developing instrumentation for geophysical applications that include gamma-ray spectroscopy, gamma-ray densitometry and radiometric dating. This paper describes the verification of a Monte-Carlo planetary radiation environment model developed using the MCNPX code. The model is designed to model the radiation environments of Mars and the Moon, but is applicable to other planetary bodies, and will be used to predict the performance of the instrumentation being developed at Leicester. This study demonstrates that the modelled gamma-ray data is in good agreement with gamma-ray data obtained by the gamma-ray spectrometers on 2001 Mars Odyssey and Lunar Prospector, and can be used to accurately model geophysical instrumentation for planetary science applications.
Mein, S [Duke University Medical Physics Graduate Program (United States); Gunasingha, R [Department of Radiation Safety, Duke University Medical Center (United States); Nolan, M [Department of Clinical Sciences, College of Veterinary Medicine, North Carolina State University (United States); Oldham, M; Adamson, J [Department of Radiation Oncology, Duke University Medical Center (United States)
2016-06-15
Purpose: X-PACT is an experimental cancer therapy where kV x-rays are used to photo-activate anti-cancer therapeutics through phosphor intermediaries (phosphors that absorb x-rays and re-radiate as UV light). Clinical trials in pet dogs are currently underway (NC State College of Veterinary Medicine) and an essential component is the ability to model the kV dose in these dogs. Here we report the commissioning and characterization of a Monte Carlo (MC) treatment planning simulation tool to calculate X-PACT radiation doses in canine trials. Methods: FLUKA multi-particle MC simulation package was used to simulate a standard X-PACT radiation treatment beam of 80kVp with the Varian OBI x-ray source geometry. The beam quality was verified by comparing measured and simulated attenuation of the beam by various thicknesses of aluminum (2–4.6 mm) under narrow beam conditions (HVL). The beam parameters at commissioning were then corroborated using MC, characterized and verified with empirically collected commissioning data, including: percent depth dose curves (PDD), back-scatter factors (BSF), collimator scatter factor(s), and heel effect, etc. All simulations were conducted for N=30M histories at M=100 iterations. Results: HVL and PDD simulation data agreed with an average percent error of 2.42%±0.33 and 6.03%±1.58, respectively. The mean square error (MSE) values for HVL and PDD (0.07% and 0.50%) were low, as expected; however, longer simulations are required to validate convergence to the expected values. Qualitatively, pre- and post-filtration source spectra matched well with 80kVp references generated via SPEKTR software. Further validation of commissioning data simulation is underway in preparation for first-time 3D dose calculations with canine CBCT data. Conclusion: We have prepared a Monte Carlo simulation capable of accurate dose calculation for use with ongoing X-PACT canine clinical trials. Preliminary results show good agreement with measured data and hold
Morse Monte Carlo Radiation Transport Code System
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Lunar Regolith Albedos Using Monte Carlos
Wilson, T. L.; Andersen, V.; Pinsky, L. S.
2003-01-01
The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.
Modeling neutron guides using Monte Carlo simulations
Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R
2002-01-01
Four neutron guide geometries (straight, converging, diverging and curved) were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guide geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increase while the TOF peak width decreases compared with those of a straight guide. By contrast, a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
Pazianotto, Mauricio Tizziani; Carlson, Brett Vern [Instituto Tecnologico de Aeronautica (ITA), Sao Jose dos Campos, SP (Brazil); Federico, Claudio Antonio; Goncalez, Odair Lelis [Centro Tecnico Aeroespacial (CTA), Sao Jose dos Campos, SP (Brazil). Instituto de Estudos Avancados
2011-07-01
Full text: Great effort is required to better understand the cosmic radiation (CR) dose received by sensitive equipment, on-board computers and aircraft crew members in Brazilian airspace, because a large area of South America and Brazil is subject to the South Atlantic Anomaly (SAA). High-energy neutrons are produced by interactions between primary cosmic rays and atmospheric atoms, and also undergo moderation, resulting in a wide spectrum of energies ranging from thermal (0.025 eV) to several hundreds of MeV. Measurements of the cosmic radiation dose on board aircraft need to be accompanied by an integral flux monitor at ground level in order to register CR intensity variations during the measurements. The Long Counter (LC) neutron detector was designed as a directional neutron flux meter standard because it presents a fairly constant response for energies below 10 MeV. However, we would like to use it as a ground-based neutron monitor for the cosmic-ray-induced neutron spectrum (CRINS), which presents an isotropic fluence and a wider energy spectrum. The LC was modeled and tested using a Monte Carlo transport simulation of irradiations with known neutron sources ({sup 241}Am-Be and {sup 252}Cf) as a benchmark. Using this geometric model, its efficiency for the CRINS isotropic flux was calculated, introducing high-energy neutron interaction models. The objective of this work is to present the model for simulation of the isotropic neutron source employing the MCNPX code (Monte Carlo N-Particle eXtended) and then assess the LC efficiency to compare it with experimental results for cosmic-ray neutron measurements at ground level. (author)
Bottaini, C. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Mirão, J. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Figuereido, M. [Archaeologist — Monte da Capelinha, Apartado 54, 7005, São Miguel de Machede, Évora (Portugal); Candeias, A. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Chemistry Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Brunetti, A. [Department of Political Science and Communication, University of Sassari, Via Piandanna 2, 07100 Sassari (Italy); Schiavon, N., E-mail: schiavon@uevora.pt [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal)
2015-01-01
Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina was characterized by micro X-ray diffractometry (μXRD) and back-scattered scanning electron microscopy + energy dispersive spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample. - Highlights: • EDXRF/Monte Carlo simulation is used to characterize an archaeological alloy. • EDXRF analysis was performed on cleaned and patina-coated areas of the artifact. • The EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered. • When the patina is too thick, X-rays from the substrate are unable to exit the sample.
Wang, Jiayue; Shi, Jiaru; Huang, Wenhui; Tang, Chuanxiang
2017-02-01
Among all microfocus X-ray tubes, 1 MeV has remained a "gray zone" despite its universal application in radiation therapy and non-destructive testing. One challenge in fabricating 1 MeV microfocus X-ray tubes is beam broadening inside metal anodes, which limits the minimum focal spot size a system can obtain. In particular, a complete understanding of the intrinsic broadening process, i.e., the point-spread function (PSF) of X-ray targets, is needed. In this paper, relationships between the PSF and beam energy, target thickness and electron incidence angle were investigated via Monte Carlo simulation. Focal spot limits for both transmission- and reflection-type tungsten targets at 0.5, 1 and 1.5 MeV were calculated, with target thicknesses ranging from 1 μm to 2 cm. Transmission-type targets with thickness less than 5 μm could achieve micrometer-scale spots, while reflection-type targets exhibited superiority for spots larger than 100 μm. In addition, by demonstrating the spot variation at off-normal incidence, the role of a unidirectional beam was explored in microfocus X-ray systems. We expect that these results can enable alternative designs to improve the focal spot limit of X-ray tubes and benefit accurate photon source modeling.
Determining MTF of digital detector system with Monte Carlo simulation
Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee
2005-04-01
We have designed a detector based on a-Se (amorphous selenium) and simulated the detector with the Monte Carlo method. We apply cascaded linear-systems theory to determine the MTF of the whole detector system. For direct comparison with experiment, we simulated a 139 μm pixel pitch and used a simulated X-ray tube spectrum.
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, F R; Umrigar, C J
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.
Choi, Yu-Na; Kim, Hee-Joung; Park, Hye-Suk; Lee, Chang-Lae; Cho, Hyo-Min; Lee, Seung-Wan; Ryu, Hyun-Ju [Yonsei University, Wonju (Korea, Republic of)
2010-09-15
There have been many efforts to advance the technology of X-ray digital mammography in order to enhance the early detection of breast pathology. The purpose of this study was to evaluate image quality and the radiation dose after magnifying X-ray digital mammography using the Geant4 Application for Tomographic Emission (GATE). In this study, we simulated a Monte Carlo model of an X-ray digital mammographic system, and we present a technique for magnification and discuss how it affects the image quality. The simulated X-ray digital mammographic system with GATE consists of an X-ray source, a compression paddle, a supporting plate, and an imaging plate (IP) of computed radiography (CR). The degree of magnification ranged from 1.0 to 2.0. We designed a semi-cylindrical phantom with a thickness of 45 mm and a radius of 50 mm in order to evaluate the image quality after magnification. The phantom was made of poly methyl methacrylate (PMMA) and contained four spherical specks with diameters of 750, 500, 250, and 100 μm to simulate microcalcifications. The simulation studies were performed with an X-ray energy spectrum calculated using the spectrum processor SRS-78. A combination of a molybdenum anode and a molybdenum filter (Mo/Mo) was used for the mammographic X-ray tubes. The effects of the degree of magnification were investigated in terms of both the contrast-to-noise ratio (CNR) and the average glandular dose (AGD). The results show that the CNR increased as the degree of magnification increased and decreased as breast glandularity increased. The AGD showed only a minor increase with magnification. Based on the results, magnification of mammographic images can be used to obtain high image quality with an increased CNR. Our X-ray digital mammographic system model with GATE may be used as a basis for future studies on X-ray imaging characteristics.
Albuquerque, M.A.G.; David, M.G.; Almeida, C.E. de; Magalhaes, L.A.G., E-mail: malbuqueque@hotmail.com [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Lab. de Ciencias Radiologicas; Bernal, M. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Braz, D. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)
2015-07-01
Breast cancer is the most common type of cancer among women. The main strategy to increase the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to breast tissue. To evaluate this damage, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method (PENELOPE code). The average energies of the spectra at depth and the half-value layer of the mammography output spectrum were determined. (author)
Acceleration of Monte Carlo EM Algorithm
罗季
2008-01-01
The EM algorithm is a data augmentation algorithm commonly used in recent years to compute posterior mode estimates. However, a closed-form expression for the integral in its E-step is sometimes difficult or even impossible to obtain, which limits the breadth of its application. The Monte Carlo EM algorithm solves this problem well: the E-step integral is evaluated by Monte Carlo simulation, which greatly extends the algorithm's applicability. However, the convergence of both the EM algorithm and the Monte Carlo EM algorithm is linear, governed by the fraction of missing information, so when the proportion of missing data is high, convergence is very slow. The Newton-Raphson algorithm, by contrast, has a quadratic convergence rate near the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm: the E-step is still implemented by Monte Carlo simulation, and the resulting algorithm is shown to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Numerical examples comparing the accelerated Monte Carlo EM algorithm with the EM and Monte Carlo EM algorithms further illustrate its superiority.
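To illustrate the Monte Carlo E-step idea (not the paper's accelerated algorithm), here is a toy MCEM estimating the rate of an exponential distribution from right-censored data; the data, function name, and settings are all invented for the example:

```python
import random

def mcem_exponential(observed, censored_at, iters=50, m=2000, seed=0):
    """Toy Monte Carlo EM for the rate of an exponential distribution with
    right-censored observations.  The E-step expectation E[X | X > c] is
    approximated by simulation; by memorylessness it equals c + 1/rate in
    closed form, which makes the toy easy to check."""
    rng = random.Random(seed)
    rate = 1.0  # initial guess
    n = len(observed) + len(censored_at)
    for _ in range(iters):
        # E-step: approximate E[X | X > c] by averaging simulated draws
        # from the truncated tail of the current fitted distribution.
        filled = []
        for c in censored_at:
            draws = [c + rng.expovariate(rate) for _ in range(m)]
            filled.append(sum(draws) / m)
        # M-step: maximum-likelihood rate given the completed data.
        rate = n / (sum(observed) + sum(filled))
    return rate

rate = mcem_exponential(observed=[0.5, 1.2, 0.3, 0.9], censored_at=[2.0, 2.0])
```

Because the Monte Carlo E-step injects simulation noise, the iterates fluctuate around the fixed point rather than converging exactly; this is the linear-convergence bottleneck that the paper's Newton-Raphson combination is designed to address.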
Lazos, Dimitrios; Pokhrel, Damodar; Su, Zhong; Lu, Jun; Williamson, Jeffrey F.
2008-03-01
Fast and accurate modeling of cone-beam CT (CBCT) x-ray projection data can improve CBCT image quality either by linearizing projection data for each patient prior to image reconstruction (thereby mitigating detector blur/lag, spectral hardening, and scatter artifacts) or indirectly by supporting rigorous comparative simulation studies of competing image reconstruction and processing algorithms. In this study, we compare Monte Carlo-computed x-ray projections with projections experimentally acquired from our Varian Trilogy CBCT imaging system for phantoms of known design. Our recently developed Monte Carlo photon-transport code, PTRAN, was used to compute primary and scatter projections for a cylindrical phantom of known diameter (NA model 76-410) with and without bow-tie filter and antiscatter grid for both full- and half-fan geometries. These simulations were based upon measured 120 kVp spectra, beam profiles, and the flat-panel detector (4030CB) point-spread function. Compound Poisson-process noise was simulated based upon measured beam output. Computed projections were compared to flat- and dark-field corrected 4030CB images, where scatter profiles were estimated by subtracting narrow-axial from full-axial-width 4030CB profiles. In agreement with the literature, the difference between simulated and measured projection data is of the order of 6-8%. The measurement of the scatter profiles is affected by the long tails of the detector PSF. Higher accuracy can be achieved mainly by improving the beam modeling and correcting the nonlinearities induced by the detector PSF.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
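The Metropolis algorithm mentioned above can be sketched in a few lines; the target density (a standard normal, written as a log-density) and the step size are illustrative choices, not from the book:

```python
import math
import random

def metropolis(log_p, x0, steps, step_size=1.0, seed=0):
    """Minimal Metropolis sampler: propose a symmetric random step and
    accept it with probability min(1, p(x_new) / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        delta = log_p(x_new) - log_p(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = x_new  # accept; otherwise keep the old state
        samples.append(x)
    return samples

# Sample a standard normal (log-density known only up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
```

Only ratios of the target density appear, so the normalization constant is never needed, which is exactly why the method works for Boltzmann weights of many-particle systems.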
SMCTC: Sequential Monte Carlo in C++
Adam M. Johansen
2009-04-01
Full Text Available Sequential Monte Carlo methods are a very general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used very widely in the tracking and signal processing literature. Recent developments illustrate that these techniques have much more general applicability, and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as being computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general Sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare event estimation.
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
Quantum Monte Carlo with variable spins.
Melton, Cody A; Bennett, M Chandler; Mitas, Lubos
2016-06-28
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo, we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p row elements in close agreement with experiments.
A brief introduction to Monte Carlo simulation.
Bonate, P L
2001-01-01
Simulation affects our life every day through our interactions with the automobile, airline and entertainment industries, just to name a few. The use of simulation in drug development is relatively new, but its use is increasing in relation to the speed at which modern computers run. One well known example of simulation in drug development is molecular modelling. Another use of simulation that is being seen recently in drug development is Monte Carlo simulation of clinical trials. Monte Carlo simulation differs from traditional simulation in that the model parameters are treated as stochastic or random variables, rather than as fixed values. The purpose of this paper is to provide a brief introduction to Monte Carlo simulation methods.
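The distinction drawn here, model parameters treated as random variables rather than fixed values, can be illustrated with a toy trial simulation; the response-rate distribution and trial sizes are invented for the example:

```python
import random

def simulate_trials(n_trials=2000, n_patients=100, seed=0):
    """Toy Monte Carlo clinical-trial simulation.  Unlike a traditional
    simulation with a fixed response rate, each replicate first draws the
    rate itself from a distribution that reflects parameter uncertainty."""
    rng = random.Random(seed)
    responders_per_trial = []
    for _ in range(n_trials):
        p = rng.betavariate(30, 70)  # uncertain response rate, mean 0.3
        responders = sum(rng.random() < p for _ in range(n_patients))
        responders_per_trial.append(responders)
    return responders_per_trial

results = simulate_trials()
mean_responders = sum(results) / len(results)
```

The spread of `results` across replicates then reflects both sampling variability within a trial and uncertainty about the response rate itself, which a fixed-parameter simulation would understate.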
Quantum Monte Carlo with Variable Spins
Melton, Cody A; Mitas, Lubos
2016-01-01
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$ row elements in close agreement with experiments.
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
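The core PMC idea — importance sampling from a proposal that is repeatedly adapted to the weighted sample — can be illustrated with a minimal one-dimensional sketch. This is not CosmoPMC code; the Gaussian proposal family, toy target, and moment-matching update shown here are simplified assumptions.

```python
import math
import random

def pmc_estimate(log_target, n=2000, iters=5, seed=1):
    """Adaptive importance sampling in the spirit of Population Monte
    Carlo: draw from a Gaussian proposal, weight each draw by
    target/proposal, then refit the proposal to the weighted sample."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 3.0  # initial proposal N(mu, sigma^2), deliberately wide
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        # log importance weight = log target - log proposal (constants cancel
        # after normalization, so they are dropped here)
        logw = [log_target(x) + (x - mu) ** 2 / (2 * sigma ** 2) + math.log(sigma)
                for x in xs]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / s       # adapted mean
        var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / s
        sigma = max(math.sqrt(var), 1e-3)                    # adapted spread
    return mu, sigma

# Toy target: unnormalized N(2, 1); the proposal should migrate toward it.
mu, sigma = pmc_estimate(lambda x: -((x - 2.0) ** 2) / 2.0)
```

After a few adaptation rounds the proposal closely matches the target, which is what makes the subsequent importance-sampling estimates (and the evidence estimate mentioned in the abstract) efficient.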
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Lopez Maurino, Sebastian; Badano, Aldo; Cunningham, Ian A.; Karim, Karim S.
2016-03-01
We propose a new design of a stacked three-layer flat-panel x-ray detector for dual-energy (DE) imaging. Each layer consists of its own scintillator of individual thickness and an underlying thin-film-transistor-based flat-panel. Three images are obtained simultaneously in the detector during the same x-ray exposure, thereby eliminating any motion artifacts. The detector operation is two-fold: a conventional radiography image can be obtained by combining all three layers' images, while a DE subtraction image can be obtained from the front and back layers' images, where the middle layer acts as a mid-filter that helps achieve spectral separation. We proceed to optimize the detector parameters for two sample imaging tasks that could particularly benefit from this new detector by obtaining the best possible signal to noise ratio per root entrance exposure using well-established theoretical models adapted to fit our new design. These results are compared to a conventional DE temporal subtraction detector and a single-shot DE subtraction detector with a copper mid-filter, both of which underwent the same theoretical optimization. The findings are then validated using advanced Monte Carlo simulations for all optimized detector setups. Given the performance expected from initial results and the recent decrease in price for digital x-ray detectors, the simplicity of the three-layer stacked imager approach appears promising to usher in a new generation of multi-spectral digital x-ray diagnostics.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Monte Carlo Hamiltonian: Linear Potentials
LUO Xiang-Qian; Helmut KROEGER; et al.
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with analytical or Runge-Kutta calculations.
Monte Carlo dose distributions for radiosurgery
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)
2001-07-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetrical input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of both a planning system and Monte Carlo. Relative shifts have been measured and, furthermore, Dose-Volume Histograms have been calculated for the target and adjacent organs at risk. (orig.)
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
The Rational Hybrid Monte Carlo Algorithm
Clark, M A
2006-01-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.
Arabi, Hossein; Asl, Ali Reza Kamali; Ay, Mohammad Reza; Zaidi, Habib
2015-01-01
Objective: The purpose of this work is to evaluate the impact of optimization of magnification on performance parameters of the variable resolution X-ray (VRX) CT scanner. Methods: A realistic model based on an actual VRX CT scanner was implemented in the GATE Monte Carlo simulation platform.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Monte Carlo Methods in Ab Initio Quantum Chemistry: Quantum Monte Carlo for Molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential. Some distinguishing features of this book are: clear exposition of the basic theory at a level to facilitate independent study; discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and released-n...
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding barrier calculations, or obtaining dose distributions around applicators. (Author)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
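For contrast, the kind of conventional Monte Carlo integration the abstract compares SAMC against can be sketched with a random-walk Metropolis-Hastings estimator. This is a generic textbook version with an illustrative target, not the authors' implementation.

```python
import math
import random

def metropolis_mean(log_pi, f, n=50000, step=1.0, seed=2):
    """Random-walk Metropolis-Hastings estimate of E_pi[f(X)]:
    propose symmetrically, accept with probability min(1, pi(y)/pi(x)),
    and average f along the resulting Markov chain."""
    rng = random.Random(seed)
    x = 0.0
    lp = log_pi(x)
    total = 0.0
    for _ in range(n):
        y = x + rng.uniform(-step, step)      # symmetric proposal
        lq = log_pi(y)
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq                     # accept the move
        total += f(x)
    return total / n

# E[X^2] under a standard normal target is exactly 1.
est = metropolis_mean(lambda x: -0.5 * x * x, lambda x: x * x)
```

On a rugged energy landscape the chain above tends to get trapped in local modes, which is precisely the situation where the abstract reports that SAMC's dynamically weighted estimator yields significant savings.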
Pazirandeh, Ali [Physics Department, University of Tehran, Tehran (Iran, Islamic Republic of) and Institute for Theoretical and Applied Physics, Tabriz (Iran, Islamic Republic of)]. E-mail: paziran@ut.ac.ir; Azizi, Maryam [Physics Department, University of Tehran, Tehran (Iran, Islamic Republic of); Institute for Theoretical and Applied Physics, Tabriz (Iran, Islamic Republic of); Farhad Masoudi, S. [Physics Department, University of Tehran, Tehran (Iran, Islamic Republic of); Institute for Theoretical and Applied Physics, Tabriz (Iran, Islamic Republic of)
2006-01-01
Among many conventional techniques, nuclear techniques have been shown to be faster, more reliable, and more effective in detecting explosives. In the present work, neutrons from a 5 Ci Am-Be neutron source placed in a water tank are captured by elements of the soil and landmine (TNT), namely {sup 14}N, H, C, and O. The prompt capture gamma-ray spectrum taken by a NaI(Tl) scintillation detector indicates the characteristic photo peaks of the elements in the soil and landmine. In the high-energy region of the gamma-ray spectrum, besides the 10.829 MeV line of {sup 15}N, single escape (SE) and double escape (DE) peaks are unmistakable photo peaks, which make the detection of a concealed explosive possible. The soil has the property of moderating neutrons as well as diffusing the thermal neutron flux. Among the many elements in soil, silicon is particularly abundant, and {sup 29}Si emits a 10.607 MeV prompt capture gamma ray, which makes the 10.829 MeV detection difficult. Monte Carlo simulation was used to adjust the source-target-detector distances and soil moisture content to yield the best result. Therefore, we applied MCNP4C to a configuration very close to the reality of a landmine hidden in soil.
Saizu, Mirela Angela
2016-09-01
The developments of high-purity germanium detectors match very well the requirements of in-vivo human body measurements regarding the gamma energy ranges of the radionuclides to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) at IFIN-HH is based on an "over-square" high-purity germanium (HPGe) detector that performs accurate measurements of incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method, which uses reference calibration sources with gamma energy lines covering the entire considered energy range, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC with the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with experimental measurements using point-like sources. For their optimum matching, the impact of variations in the front dead-layer thickness and in the detector's photon-absorbing layer materials on the HPGe detector efficiency was studied, and the detector model was refined. In order to perform the WBC efficiency calibration for realistic monitoring of people, further numerical calculations were generated, simulating extended sources shaped according to the standard-man characteristics.
A comparison of Monte Carlo generators
Golan, Tomasz
2014-01-01
A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the $\pi^+$ two-dimensional energy vs. cosine distribution.
Monte Carlo Tools for Jet Quenching
Zapp, Korinna
2011-01-01
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
An Introduction to Monte Carlo Methods
Raeside, D. E.
1974-01-01
Reviews the principles of Monte Carlo calculation and random number generation in an attempt to introduce the direct and the rejection method of sampling techniques as well as the variance-reduction procedures. Indicates that the increasing availability of computers makes it possible for a wider audience to learn about these powerful methods. (CC)
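The rejection method of sampling surveyed in this introduction can be shown in a few lines. The triangular density used below is an illustrative choice, not an example from the article.

```python
import random

def rejection_sample(pdf, pdf_max, lo, hi, n, seed=3):
    """Rejection method: propose uniformly on [lo, hi] and accept a
    candidate x with probability pdf(x) / pdf_max, where pdf_max bounds
    the density on the interval."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.random() * pdf_max <= pdf(x):
            out.append(x)
    return out

# Sample from the triangular density p(x) = 2x on [0, 1]; its mean is 2/3.
xs = rejection_sample(lambda x: 2.0 * x, 2.0, 0.0, 1.0, 20000)
mean = sum(xs) / len(xs)
```

The acceptance rate equals the area under the density divided by the bounding box area, which is why a tight `pdf_max` matters; the variance-reduction procedures mentioned in the abstract address the related problem of wasted samples.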
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
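One classical variance reduction technique, antithetic variates, can be demonstrated on a toy integral. This sketch is illustrative only; the chapter itself covers a broader family of VRTs.

```python
import math
import random
import statistics

def crude_vs_antithetic(n=20000, seed=4):
    """Estimate I = integral_0^1 e^x dx (= e - 1, about 1.718) two ways:
    crude Monte Carlo, and antithetic variates, where each uniform U is
    paired with 1 - U so the pair's errors partially cancel."""
    rng = random.Random(seed)
    crude = [math.exp(rng.random()) for _ in range(n)]
    rng = random.Random(seed)
    anti = []
    for _ in range(n // 2):
        u = rng.random()
        anti.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))  # negatively correlated pair
    return (statistics.mean(crude), statistics.variance(crude),
            statistics.mean(anti), statistics.variance(anti))

m1, v1, m2, v2 = crude_vs_antithetic()
```

Both estimators are unbiased, but the per-sample variance of the antithetic estimator is dramatically smaller because e^U and e^(1-U) are negatively correlated, so the same accuracy is reached with far fewer random draws.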
Scalable Domain Decomposed Monte Carlo Particle Transport
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
An analysis of Monte Carlo tree search
James, S
2017-02-01
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner...
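The selection phase at the heart of most MCTS variants uses the UCB1 rule, which balances a node's average value against an exploration bonus. A minimal bandit-style sketch of that rule follows; the arm names and payoff probabilities are invented for illustration.

```python
import math
import random

def ucb1_select(visits, values, total, c=math.sqrt(2)):
    """UCB1 rule used in MCTS selection: pick the child maximizing
    average value plus an exploration bonus c * sqrt(ln(total)/n_child).
    Unvisited children are expanded first."""
    best, best_score = None, float("-inf")
    for child in visits:
        if visits[child] == 0:
            return child
        score = (values[child] / visits[child]
                 + c * math.sqrt(math.log(total) / visits[child]))
        if score > best_score:
            best, best_score = child, score
    return best

def bandit_demo(n=5000, seed=5):
    """Toy demo: three arms pay Bernoulli rewards; over many plays UCB1
    concentrates its visits on the best arm."""
    rng = random.Random(seed)
    probs = {"a": 0.3, "b": 0.5, "c": 0.7}   # illustrative payoff rates
    visits = {k: 0 for k in probs}
    values = {k: 0.0 for k in probs}
    for t in range(1, n + 1):
        arm = ucb1_select(visits, values, t)
        visits[arm] += 1
        values[arm] += 1.0 if rng.random() < probs[arm] else 0.0
    return visits

v = bandit_demo()
```

In full MCTS the same rule is applied recursively down the tree, with rollouts supplying the reward signal; modifications of the kind the abstract studies typically alter this selection or the rollout policy.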
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
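The derivation described above — subdividing the time interval so that each subinterval holds at most one count — can be sketched as follows. The rate and interval values are illustrative; the original program is not available.

```python
import random

def simulate_counts(rate, t, n_sub=10000, seed=6):
    """Simulate a counting experiment by subdividing the interval t into
    n_sub subintervals, each registering at most one count with
    probability p = rate * t / n_sub (a binomial that tends to a Poisson
    count as the subdivision is refined)."""
    rng = random.Random(seed)
    p = rate * t / n_sub
    return sum(1 for _ in range(n_sub) if rng.random() < p)

# Averaged over many runs, the count should approach rate * t (= 50 here).
counts = [simulate_counts(5.0, 10.0, seed=s) for s in range(200)]
mean = sum(counts) / len(counts)
```

Refining the subdivision (larger `n_sub`) makes the binomial count converge to the Poisson distribution that governs real counting experiments.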
Monte Carlo Methods in ICF (LIRPP Vol. 13)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Shrestha, Suman; Vedantham, Srinivasan; Karellas, Andrew
2017-03-01
In digital breast tomosynthesis and digital mammography, the x-ray beam filter material and thickness vary between systems. Replacing K-edge filters with Al was investigated with the intent to reduce exposure duration and to simplify system design. Tungsten target x-ray spectra were simulated with K-edge filters (50 µm Rh; 50 µm Ag) and Al filters of varying thickness. Monte Carlo simulations were conducted to quantify the x-ray scatter from various filters alone, scatter-to-primary ratio (SPR) with compressed breasts, and to determine the radiation dose to the breast. These data were used to analytically compute the signal-difference-to-noise ratio (SDNR) at unit (1 mGy) mean glandular dose (MGD) for W/Rh and W/Ag spectra. At SDNR matched between K-edge and Al filtered spectra, the reductions in exposure duration and MGD were quantified for three strategies: (i) fixed Al thickness and matched tube potential in kilovolts (kV); (ii) fixed Al thickness and varying the kV to match the half-value layer (HVL) between Al and K-edge filtered spectra; and, (iii) matched kV and varying the Al thickness to match the HVL between Al and K-edge filtered spectra. Monte Carlo simulations indicate that the SPR with and without the breast were not different between Al and K-edge filters. Modelling for fixed Al thickness (700 µm) and kV matched to K-edge filtered spectra, identical SDNR was achieved with 37-57% reduction in exposure duration and with 2-20% reduction in MGD, depending on breast thickness. Modelling for fixed Al thickness (700 µm) and HVL matched by increasing the kV over the (0, 4) range, identical SDNR was achieved with 62-65% decrease in exposure duration and with 2-24% reduction in MGD, depending on breast thickness. For kV and HVL matched to K-edge filtered spectra by varying the Al filter thickness over the (700, 880) µm range, identical SDNR was achieved with 23-56% reduction in exposure duration and 2-20% reduction in MGD, depending on breast thickness.
Depauw, Nicolas; Seco, Joao
2011-04-21
The imaging sensitivity of proton radiography has been studied and compared with kV and MV x-ray imaging using Monte Carlo simulations. A phantom was specifically modeled using 21 different material inserts with densities ranging from 0.001 to 1.92 g cm(-3). These simulations were run using the MGH double scattered proton beam, scanned pencil proton beams from 200 to 490 MeV, as well as pure 50 keV, 100 keV, 1 MeV and 2 MeV gamma x-ray beams. In order to compare the physics implied in both proton and photon radiography without being biased by the current state of the art in detector technology, the detectors were considered perfect. Along with spatial resolution, the contrast-to-noise ratio was evaluated and compared for each material. These analyses were performed using radiographic images that took into account the following: only primary protons, both primary and secondary protons, and both contributions while performing angular and energetic cuts. Additionally, tissue-to-tissue contrasts in an actual lung cancer patient case were studied for simulated proton radiographs and compared against the original kV x-ray image which corresponds to the current patient set-up image in the proton clinic. This study highlights the poorer spatial resolution of protons versus x-rays for radiographic imaging purposes, and the excellent density resolution of proton radiography. Contrasts around the tumor are higher using protons in a lung cancer patient case. The high-density resolution of proton radiography is of great importance for specific tumor diagnostics, such as in lung cancer, where x-ray radiography operates poorly. Furthermore, the use of daily proton radiography prior to proton therapy would ameliorate patient set-up while reducing the absorbed dose delivered through imaging.
Monte Carlo radiation transport in external beam radiotherapy
Çeçen, Yiğit
2013-01-01
The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiotherapy.
Anonymous
2008-01-01
A matrix stripping method for the conversion of in-situ gamma ray spectrum, obtained with portable Ge detector, to photon flux energy distribution is proposed. The detector response is fully described by its stripping matrix and full absorption efficiency curve. A charge collection efficiency function is introduced in the simulation to take into account the existence of a transition zone of increasing charge collection after the inactive Ge layer. Good agreement is obtained between simulated and experimental full absorption efficiencies. The characteristic stripping matrix is determined by Monte Carlo simulation for different incident photon energies using the Geant4 toolkit system. The photon flux energy distribution is deduced by stripping the measured spectrum of the partial absorption and cosmic ray events and then applying the full absorption efficiency curve. The stripping method is applied to a measured in-situ spectrum. The value of the absorbed dose rate in air deduced from the corresponding flux energy distribution agrees well with the value measured directly in-situ.
Son, Sang-Kil (DOI: 10.1103/PhysRevA.85.063415)
2013-01-01
When atoms and molecules are irradiated by an x-ray free-electron laser (XFEL), they are highly ionized via a sequence of one-photon ionization and relaxation processes. To describe the ionization dynamics during XFEL pulses, a rate equation model has been employed. Even though this model is straightforward for the case of light atoms, it generates a huge number of coupled rate equations for heavy atoms like xenon, which are not trivial to solve directly. Here, we employ the Monte Carlo method to address this problem and we investigate ionization dynamics of xenon atoms induced by XFEL pulses at a photon energy of 4500 eV. Charge state distributions, photo-/Auger electron spectra, and fluorescence spectra are presented for x-ray fluences of up to $10^{13}$ photons/$\mu$m$^2$. With the photon energy of 4500 eV, xenon atoms can be ionized up to +44 through multiphoton absorption characterized by sequential one-photon single-electron interactions.
Monte Carlo Simulation Of Emission Tomography And Other Medical Imaging Techniques.
Harrison, Robert L
2010-01-05
An introduction to Monte Carlo simulation of emission tomography. This paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.
Monte Carlo Simulation for Statistical Decay of Compound Nucleus
Chadwick M.B.
2012-02-01
Full Text Available We perform Monte Carlo simulations for neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between emitted particles and γ-rays. It will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code, CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and the angular momentum and parity are conserved. Some calculations for the fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.
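A drastically simplified sketch of the decay sequence (constant separation energy, schematic exponential evaporation spectrum, all remaining energy assigned to γ-rays; the actual CGM code samples Hauser-Feshbach probabilities with angular momentum and parity conservation):

```python
import math
import random

def decay_compound(e_star, s_n, temp=1.0, seed=0):
    """Toy sequential compound-nucleus decay: emit neutrons while the
    excitation energy e_star exceeds the (assumed constant) neutron
    separation energy s_n, sampling kinetic energies from an exponential
    evaporation spectrum with nuclear temperature `temp`; whatever energy
    is left below s_n is carried off by the gamma cascade."""
    rng = random.Random(seed)
    neutron_energies = []
    while e_star > s_n:
        # truncate the sampled energy so the residual energy stays >= 0
        e_kin = min(-temp * math.log(rng.random()), e_star - s_n)
        neutron_energies.append(e_kin)
        e_star -= s_n + e_kin
    return neutron_energies, e_star  # (neutron kinetic energies, gamma energy)
```

Because every trajectory conserves energy event by event, the correlation between neutron multiplicity and total γ-ray energy falls out of the sampled histories for free.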
Monte Carlo Simulations of Neutron Oil well Logging Tools
Azcurra, M
2002-01-01
Monte Carlo simulations of simple neutron oil well logging tools in typical geological formations are presented. The simulated tools consist of both 14 MeV pulsed and continuous Am-Be neutron sources, with time-gated and continuous gamma ray detectors, respectively. The geological formation consists of pure limestone with 15% absolute porosity over a wide range of oil saturation. The particle transport was performed with the Monte Carlo N-Particle Transport Code System, MCNP-4B. Several gamma ray spectra were obtained at the detector position that allow a composition analysis of the formation to be performed. In particular, the ratio C/O was analyzed as an indicator of oil saturation. Further calculations are proposed to simulate actual detector responses in order to help understand the relation between the detector response and the formation composition.
Conceptual design and Monte Carlo simulations of the AGATA array
Farnea, E., E-mail: Enrico.Farnea@pd.infn.i [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova (Italy); Recchia, F.; Bazzacco, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova (Italy); Kroell, Th. [Institut fuer Kernphysik, Technische Universitaet Darmstadt, Darmstadt (Germany); Podolyak, Zs. [Department of Physics, University of Surrey, Guildford (United Kingdom); Quintana, B. [Departamento de Fisica Fundamental, Universidad de Salamanca, Salamanca (Spain); Gadea, A. [Instituto de Fisica Corpuscular, CSIC-Universidad de Valencia, Valencia (Spain)
2010-09-21
The aim of the Advanced GAmma Tracking Array (AGATA) project is the construction of an array based on the novel concepts of pulse shape analysis and γ-ray tracking with highly segmented Ge semiconductor detectors. The conceptual design of AGATA and its performance evaluation under different experimental conditions have required the development of a suitable Monte Carlo code. In this article, the description of the code, as well as simulation results relevant for AGATA, is presented.
Wang, Y; Mazur, T; Green, O; Hu, Y; Wooten, H; Yang, D; Zhao, T; Mutic, S; Li, H [Washington University School of Medicine, St. Louis, MO (United States)
2015-06-15
Purpose: To build a fast, accurate and easily-deployable research platform for Monte Carlo dose calculations. We port the dose calculation engine PENELOPE to C++ and accelerate calculations on a GPU. Simulations of a Co-60 beam model provided by ViewRay demonstrate the capabilities of the platform. Methods: We built software that incorporates a beam model interface, a CT-phantom model, a GPU-accelerated PENELOPE engine, and a GUI front-end. We rewrote the PENELOPE kernel in C++ (from Fortran) and accelerated the code on a GPU. We seamlessly integrated a Co-60 beam model (obtained from ViewRay) into our platform. Simulations of various field sizes and SSDs using a homogeneous water phantom generated PDDs, dose profiles, and output factors that were compared to experimental data. Results: With GPU acceleration using a dated graphics card (Nvidia Tesla C2050), a highly accurate simulation (100×100×100 grid, 3×3×3 mm^3 voxels, <1% uncertainty, 4.2×4.2 cm^2 field size) runs 24 times faster (20 minutes versus 8 hours) than when parallelizing on 8 threads across a newer CPU (Intel i7-4770). Simulated PDDs, profiles and output ratios for the commercial system agree well with experimental data measured using radiographic film or an ionization chamber. Based on our analysis, this beam model is precise enough for general applications. Conclusions: Using a beam model for a Co-60 system provided by ViewRay, we evaluate a dose calculation platform that we developed. Comparison to measurements demonstrates the promise of our software as a research platform for dose calculations, with applications including quality assurance and treatment plan verification.
Haba, Tomonobu; Koyama, Shuji; Ida, Yoshihiro
2014-01-01
The longitudinal dose profile in a computed tomography dose index (CTDI) phantom has been studied by many researchers. The cross-sectional dose profile in the CTDI phantom, however, has not been studied. It is also important to understand the cross-sectional dose profile in the CTDI phantom for dose estimation in X-ray CT. In this study, the cross-sectional dose profile in the CTDI phantom was calculated by use of a Monte Carlo (MC) simulation method. A helical or a 320-detector-row cone-beam X-ray CT scanner was simulated. The cross-sectional dose profile in the CTDI phantom from surface to surface through the center point was calculated by MC simulation. The shape of the calculation region was a cylinder of 1-mm diameter. The length of the cylinder was 23, 100, or 300 mm to represent various CT ionization chamber lengths. Detailed analyses of the energy depositions demonstrated that the cross-sectional dose profile differed with the measurement method and phantom size. In this study, we also focused on the validation of the weighting factor used in the weighted CTDI (CTDIw). As it stands now, the weighting factor used in CTDIw is (1/3, 2/3) for the (central, peripheral) axes. Our results showed that an equal weighting factor, which is (1/2, 1/2) for the (central, peripheral) axes, is more suitable for estimating the average cross-sectional dose when X-ray CT dose estimation is performed.
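The two weightings being compared are simple convex combinations of the central-axis and average peripheral-axis values; a one-line sketch (variable names ours):

```python
def ctdi_w(d_center, d_periph, w_center=1.0 / 3.0):
    """Weighted CTDI: w_center * central-axis value plus (1 - w_center) times
    the average peripheral value.  w_center = 1/3 is the conventional choice;
    the study above argues that 1/2 better tracks the average
    cross-sectional dose."""
    return w_center * d_center + (1.0 - w_center) * d_periph
```

For example, with a central value of 10 and an average peripheral value of 16 (arbitrary units), the conventional weighting gives 14.0 while the equal weighting gives 13.0.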
Hybrid Monte Carlo with Chaotic Mixing
Kadakia, Nirag
2016-01-01
We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the method on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
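For reference, the baseline HMC algorithm that the chaotic-momentum variant modifies can be sketched for a multivariate normal target (all names and parameters are ours; the paper's momentum distribution differs from the Gaussian used here):

```python
import numpy as np

def hmc_gaussian(cov, n_samples, eps=0.1, n_leap=20, seed=0):
    """Plain HMC targeting N(0, cov) with Gaussian momenta and an identity
    mass matrix; leapfrog integration followed by a Metropolis accept/reject
    on the Hamiltonian keeps the chain exact despite integration error."""
    rng = np.random.default_rng(seed)
    prec = np.linalg.inv(cov)
    grad = lambda q: prec @ q                  # gradient of -log density
    d = cov.shape[0]
    q = np.zeros(d)
    out = []
    for _ in range(n_samples):
        p = rng.standard_normal(d)             # resample momentum
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * eps * grad(q_new)       # leapfrog: half kick
        for _ in range(n_leap - 1):
            q_new += eps * p_new               # drift
            p_new -= eps * grad(q_new)         # full kick
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad(q_new)       # final half kick
        h_old = 0.5 * q @ prec @ q + 0.5 * p @ p
        h_new = 0.5 * q_new @ prec @ q_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            q = q_new                          # accept
        out.append(q.copy())
    return np.array(out)
```

The paper's observation is that the Gaussian momentum draw above is only one admissible choice; replacing it changes the mixing behaviour without breaking correctness.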
Monte Carlo study of real time dynamics
Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C
2016-01-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems where approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method gains its benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.
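The telescoping-sum idea behind all multilevel estimators can be sketched on a toy problem (the level-l "payoff" below, which discretises its input on a grid of spacing 2^-l before squaring it, is ours; it just stands in for an SDE or PDE solve at resolution h):

```python
import numpy as np

def mlmc(payoff, n_levels, n_per_level, seed=0):
    """Telescoping multilevel estimator of E[P_L]:
    E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}], with each correction term
    estimated from coupled samples (the same random input fed to both
    resolutions), so successive differences have small variance and need
    far fewer samples than the crude level-0 term."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l, n in zip(range(n_levels + 1), n_per_level):
        z = rng.standard_normal(n)        # shared input couples the levels
        if l == 0:
            est += payoff(z, 0).mean()
        else:
            est += (payoff(z, l) - payoff(z, l - 1)).mean()
    return est

def payoff(z, l):
    """Toy level-l approximation: evaluate z**2 after discretising the
    input on a grid of spacing h = 2**-l (h -> 0 recovers the exact value)."""
    h = 2.0 ** -l
    return (np.round(z / h) * h) ** 2
```

Here the exact answer is E[z^2] = 1 for z ~ N(0, 1), and the sample counts shrink geometrically with the level, which is where the cost saving comes from.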
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that it outperformed standard Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
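The Monte Carlo cross-validation machinery underneath such methods can be sketched as follows (plain least-squares model, names ours; the paper's enhancement layers cross-prediction models and per-sample error-distribution analysis on top of this basic loop):

```python
import numpy as np

def mc_outlier_scores(X, y, n_rounds=200, train_frac=0.7, seed=0):
    """Monte Carlo cross-validation outlier scores: repeatedly split the data
    at random, fit a least-squares linear model on the training split, and
    accumulate each sample's absolute prediction error over the rounds in
    which it fell in the test split.  Samples with consistently large errors
    are outlier candidates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    err_sum = np.zeros(n)
    err_cnt = np.zeros(n)
    for _ in range(n_rounds):
        idx = rng.permutation(n)
        k = int(train_frac * n)
        tr, te = idx[:k], idx[k:]
        A = np.c_[X[tr], np.ones(len(tr))]          # design matrix + intercept
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = np.c_[X[te], np.ones(len(te))] @ coef
        err_sum[te] += np.abs(pred - y[te])
        err_cnt[te] += 1
    return err_sum / np.maximum(err_cnt, 1)         # mean test error per sample
```

Planting a single grossly corrupted sample in otherwise linear data makes that sample's score dominate, which is the signal the diagnosis step thresholds on.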
Composite biasing in Monte Carlo radiative transfer
Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf
2016-01-01
Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
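One common way to realise the weight suppression described above (our reading of the composite idea as a mixture density; the paper's actual schemes are more elaborate) is to sample from a combination (1-b)p + b q of the physical density p and a biased density q: the packet weight p/((1-b)p + b q) then stays bounded by 1/(1-b) by construction.

```python
import numpy as np

def composite_biased_estimate(f, p_pdf, p_sample, q_pdf, q_sample, b, n, seed=0):
    """Estimate E_p[f(X)] by sampling from the mixture (1-b)*p + b*q.
    The attached weight p / ((1-b)*p + b*q) keeps the estimator unbiased
    and is bounded above by 1/(1-b), suppressing large weight factors.
    Returns (estimate, largest weight encountered)."""
    rng = np.random.default_rng(seed)
    use_q = rng.random(n) < b                 # pick a mixture component per draw
    x = np.where(use_q, q_sample(rng, n), p_sample(rng, n))
    w = p_pdf(x) / ((1.0 - b) * p_pdf(x) + b * q_pdf(x))
    return float(np.mean(w * f(x))), float(w.max())
```

As a stand-in for "penetration of a high optical depth barrier", the test estimates the small tail probability P(X > 4) for X ~ Exp(1), biasing half the draws toward the tail with a wider exponential.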
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R. [Lawrence Berkeley Lab., CA (United States); Wille, L.T. [Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
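The lattice-partitioning idea can be sketched in a modern array framework (a NumPy sketch of the data-parallel pattern, not the MasPar implementation): colour the lattice like a checkerboard so that sites of one colour share no bonds and can all be updated in one vectorised, SIMD-style operation.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising lattice (periodic boundaries) in
    two half-sweeps over the 'black' and 'white' sublattices; sites of one
    colour have no mutual neighbours, so their accept/reject decisions are
    independent and can be made simultaneously (data-parallel)."""
    ii, jj = np.indices(spins.shape)
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                     # energy cost of flipping
        flip = (rng.random(spins.shape) < np.exp(-beta * dE)) & mask
        spins[flip] *= -1
    return spins
```

Deep in the ordered phase an all-up lattice should stay magnetised, which makes a quick sanity check.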
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). The skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis), as well as laterally asymmetric features (e.g. melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
Accelerated Monte Carlo by Embedded Cluster Dynamics
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model, and numerical simulations are presented for d=2, d=3 and mean field theory lattices.
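The cluster move being embedded is a Wolff-type update; for the plain 2D Ising model a single Wolff step looks like this (a textbook sketch, not the paper's O(N) embedding):

```python
import numpy as np

def wolff_step(spins, beta, rng):
    """One Wolff cluster flip for the 2D Ising model (periodic boundaries):
    grow a cluster of like-oriented spins from a random seed, adding each
    aligned neighbour with probability p_add = 1 - exp(-2*beta), then flip
    the whole cluster.  Returns the cluster size."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i0, j0 = rng.integers(L, size=2)
    s0 = spins[i0, j0]
    stack, cluster = [(i0, j0)], {(i0, j0)}
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L
            if (ni, nj) not in cluster and spins[ni, nj] == s0 \
                    and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:
        spins[i, j] = -s0                 # flip the entire cluster at once
    return len(cluster)
```

At beta = 0 the cluster is a single spin; at large beta it swallows the whole ordered lattice, which is exactly the non-local move that defeats critical slowing down.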
Welberry, T R; Heerdegen, A P
2003-12-01
A recently developed method for fitting a Monte Carlo computer-simulation model to observed single-crystal diffuse X-ray scattering has been used to study the diffuse scattering in 4,4'-dimethoxybenzil, C16H14O4. A model involving only nine parameters, consisting of seven intermolecular force constants and two intramolecular torsional force constants, was refined to give an agreement factor, omegaR = [sigma omega(deltaI)2/sigma omegaI2(obs)](1/2), of 18.1% for 118 918 data points in two sections of data. The model was purely thermal in nature. The analysis has shown that the most prominent features of the diffraction patterns, viz. diffuse streaks that occur normal to the [101] direction, are due to longitudinal displacement correlations along chains of molecules extending in this direction. These displacements are transmitted from molecule to molecule via contacts involving pairs of hydrogen bonds between adjacent methoxy groups. In contrast to an earlier study of benzil itself, it was not found to be possible to determine, with any degree of certainty, the torsional force constants for rotations about the single bonds in the molecule. It is supposed that this result may be due to the limited data available in the present study.
Sci—Fri PM: Topics — 01: A Monte Carlo model of a miniature low-energy x-ray tube using EGSnrc
Watson, P; Seuntjens, J [Medical Physics Unit, McGill University (Canada)
2014-08-15
The INTRABEAM system (Carl Zeiss, Oberkochen, Germany) is a miniature x-ray generator for use in intraoperative radiotherapy and brachytherapy. The device accelerates electrons to up to 50 keV, which are then steered down an evacuated needle probe to strike a thin gold target. For accurate dosimetry of the INTRABEAM system, it is important that the photon spectrum be well understood. Measurements based on air-kerma are heavily impacted by the photon spectra, particularly for low photon energies, due to the large photoelectric contribution in the air mass energy absorption coefficient. While low energy photons have little clinical significance at treatment depths, they may have a large effect on air-kerma measurements. In this work, we have developed an EGSnrc-based Monte Carlo (MC) model of the Zeiss INTRABEAM system to study the source photon spectra and half-value layers (HVLs) of the bare probe and with various spherical applicators. HVLs were calculated using the analytical attenuation of air-kerma spectra. The calculated bare probe spectrum was compared with simulated and measured results taken from the literature. Differences in the L-line energies of gold were found between the spectra predicted by EGSnrc and Geant4. This is due to M and N shell averaging during atomic transitions in EGSnrc. The calculated HVLs of the bare probe and spherical applicators are consistent with measured values reported in the literature.
Craig Kruschwitz, Ming Wu, Ken Moy, Greg Rochau
2008-10-31
We present here results of continued efforts to understand the performance of microchannel plate (MCP)-based, high-speed, gated, x-ray detectors. This work involves the continued improvement of a Monte Carlo simulation code to describe MCP performance coupled with experimental efforts to better characterize such detectors. Our goal is a quantitative description of MCP saturation behavior in both static and pulsed modes. We have developed a new model of charge buildup on the walls of the MCP channels and measured its effect on MCP gain. The results are compared to experimental data obtained with a short-pulse, high-intensity ultraviolet laser; these results clearly demonstrate MCP saturation behavior in both DC and pulsed modes. The simulations compare favorably to the experimental results. The dynamic range of the detectors in pulsed operation is of particular interest when fielding an MCP-based camera. By adjusting the laser flux we study the linear range of the camera. These results, too, are compared to our simulations.
An introduction to Monte Carlo methods
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called Worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
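The Metropolis and Glauber rules mentioned above differ only in their acceptance probability for a proposed spin flip of energy cost ΔE; both satisfy detailed balance with respect to the Boltzmann weight, A(ΔE)/A(−ΔE) = exp(−βΔE), which the check below spells out:

```python
import math

def metropolis_accept(delta_e, beta):
    """Metropolis rule: accept with probability min(1, exp(-beta * dE))."""
    return min(1.0, math.exp(-beta * delta_e))

def glauber_accept(delta_e, beta):
    """Glauber (heat-bath) rule: accept with probability
    1 / (1 + exp(beta * dE)); never exceeds 1/2 but obeys the same balance."""
    return 1.0 / (1.0 + math.exp(beta * delta_e))
```

Because both acceptance ratios reduce to the same Boltzmann factor, either rule converges to the same equilibrium distribution; they differ only in dynamics and efficiency.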
Sparrow, Victor W.; Pierce, Allan D.
1992-01-01
A theory which gives statistical predictions for how often sonic booms propagating through the earth's turbulent boundary layer will encounter caustics, given the spectral properties of the atmospheric turbulence, is outlined. The theory is simple but approximately accounts for the variation of ray tube areas along ray paths. This theory predicts that the variation of ray tube areas is determined by the product of two similar area factors, ψ(x) and φ(x), each satisfying a generic harmonic oscillator equation. If an area factor increases the peak acoustic pressure decreases, and if the factor decreases the peak acoustic pressure increases. Additionally, if an area factor decreases to zero and becomes negative, the ray has propagated through a caustic, which contributes a phase change of 90 degrees to the wave. Thus, it is clear that the number of times that a sonic boom wave passes through a caustic should be related to the distorted boom waveform received on the ground. Examples are given based on a characterization of atmospheric turbulence due to the structure function of Tatarski as modified by Crow.
Antonov, Lubomir Dimitrov; Andreetta, Christian; Hamelryck, Thomas Wim
2013-01-01
Inference of protein structure from experimental data is of crucial interest in science, medicine and biotechnology. Low-resolution methods, such as small angle X-ray scattering (SAXS), play a major role in investigating important biological questions regarding the structure of proteins in solution...
Amato, Ernesto; Italiano, Antonio; Leotta, Salvatore; Pergolizzi, Stefano; Torrisi, Lorenzo
2013-01-01
Gold nanoparticles (GNPs) are a promising radiosensitizer agent in radiotherapy. Through a simulation performed with the Geant4 Monte Carlo code, we evaluated the dose enhancement effect of GNPs during therapies with an x-ray tube operating at 150 kV (E = 55 keV and E(max) = 150 keV) and we studied the impact of GNP diffusion out of the tumour vessels, in terms of antiangiogenic and cytotoxic effects. Firstly, a single x-ray beam was assumed to irradiate a parallelepiped volume of soft tissue, in which a GNP-doped "target" volume was placed at different depths. Average dose enhancement factors (DEF) in the presence of GNPs were obtained as a function of the target depth and GNP concentration, uniformly distributed; values ranging between 1.6 for 10 mg Au/g at 0 cm and 7.2 for 200 mg Au/g at 5 cm were determined. Further, a second geometry was adopted, in which a blood capillary vessel (10 μm thick and 10 μm of inner radius) was placed at the centre of a cubic volume of soft tissue; doses and DEFs to the capillary endothelium as well as to the surrounding viable tumour were evaluated for different models of GNP diffusion. Our results indicate that the radial DEF profiles around the vessel are in close relationship with the radial profiles of GNP concentration assumed, except at sharp gradients of concentration. DEFs at the endothelium ranged from 1.6 to 6.5, for GNP concentrations in the blood of 10 and 200 mg/ml, respectively. These data can be helpful for the development of new and more specific GNP-based radiosensitizers of potential interest in radiotherapy, exploiting the combined benefit of anti-angiogenic and cytotoxic dose enhancement effects.
Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy
2016-03-01
Monte Carlo (MC) simulation is one of the most powerful techniques for x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained starting from 10^8 histories and photon energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.
Jones, Bernard L; Cho, Sang Hyun, E-mail: scho@gatech.edu [Nuclear/Radiological Engineering and Medical Physics Programs, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States)
2011-06-21
A recent study investigated the feasibility of developing a bench-top x-ray fluorescence computed tomography (XFCT) system capable of determining the spatial distribution and concentration of gold nanoparticles (GNPs) in vivo using a diagnostic energy range polychromatic (i.e. 110 kVp) pencil-beam source. In this follow-up study, we examined the feasibility of a polychromatic cone-beam implementation of XFCT by Monte Carlo (MC) simulations using the MCNP5 code. In the current MC model, cylindrical columns with various sizes (5-10 mm in diameter) containing water loaded with GNPs (0.1-2% gold by weight) were inserted into a 5 cm diameter cylindrical polymethyl methacrylate phantom. The phantom was then irradiated by a lead-filtered 110 kVp x-ray source, and the resulting gold fluorescence and Compton-scattered photons were collected by a series of energy-sensitive tallies after passing through lead parallel-hole collimators. A maximum-likelihood iterative reconstruction algorithm was implemented to reconstruct the image of GNP-loaded objects within the phantom. The effects of attenuation of both the primary beam through the phantom and the gold fluorescence photons en route to the detector were corrected during the image reconstruction. Accurate images of the GNP-containing phantom were successfully reconstructed for three different phantom configurations, with both spatial distribution and relative concentration of GNPs well identified. The pixel intensity of regions containing GNPs was linearly proportional to the gold concentration. The current MC study strongly suggests the possibility of developing a bench-top, polychromatic, cone-beam XFCT system for in vivo imaging.
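The maximum-likelihood iterative reconstruction step can be sketched with the standard MLEM multiplicative update (a generic sketch with a toy system matrix; the paper's detector model and attenuation corrections would be folded into A):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction for Poisson data y ~ Poisson(A x):
    multiplicative update x <- x * [A^T (y / (A x))] / [A^T 1].
    A maps the unknown image x to expected detector counts; starting from a
    uniform positive image, x stays non-negative at every iteration."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / forward-projected
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

For noiseless, consistent data the iteration recovers the true image, which makes a compact correctness check before adding realistic counting noise.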
Díez, A; Largo, J; Solana, J R
2006-08-21
Computer simulations have been performed for fluids with the van der Waals potential, that is, hard spheres with attractive inverse-power tails, to determine the equation of state and the excess energy. In addition, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have also been determined from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It was found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
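The "exact" perturbation idea, estimating the first-order term by averaging the attractive tail over reference hard-sphere configurations, can be sketched as follows (array layout, function names, and the toy two-particle check are ours):

```python
import numpy as np

def first_order_energy(configs, box, u_tail):
    """First-order perturbation contribution to the energy: the mean, over
    reference hard-sphere configurations, of the attractive tail potential
    summed over all particle pairs (minimum-image convention, cubic box)."""
    samples = []
    for x in configs:                          # x: (N, 3) particle positions
        d = x[:, None, :] - x[None, :, :]      # all pairwise displacement vectors
        d -= box * np.round(d / box)           # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=-1))
        iu = np.triu_indices(len(x), k=1)      # count each pair once
        samples.append(u_tail(r[iu]).sum())
    return float(np.mean(samples))
```

In practice the configurations would come from a hard-sphere Monte Carlo run; the averaging step itself is independent of how they were generated, which is what makes the perturbative decomposition convenient.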
Computed radiography simulation using the Monte Carlo code MCNPX
Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)
2010-09-15
Simulating X-ray images has been of great interest in recent years, as it makes it possible to analyze how X-ray images are affected by the relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it yields results comparable with experimental data.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple bremsstrahlung emissions off the initial and final state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which makes it possible to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Quantum Monte Carlo for vibrating molecules
Brown, W.R. [Univ. of California, Berkeley, CA (United States). Chemistry Dept.]|[Lawrence Berkeley National Lab., CA (United States). Chemical Sciences Div.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks in order to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C{sub 3} PESs suggested that the non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
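The initialisation of degenerate particles from the Fermi–Dirac distribution can be illustrated by rejection sampling of particle energies from f(E) ∝ √E / (exp((E−μ)/T) + 1). This is a sketch, not the paper's algorithm; the units (energies in units of the chemical potential μ), the temperature, and the tail cutoff are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fermi_dirac(n, mu=1.0, T=0.1, e_max=None):
    """Rejection-sample energies from f(E) ~ sqrt(E) / (exp((E - mu)/T) + 1)."""
    if e_max is None:
        e_max = mu + 20.0 * T                 # occupation is negligible beyond this
    def f(e):
        return np.sqrt(e) / (np.exp((e - mu) / T) + 1.0)
    f_max = f(np.linspace(1e-9, e_max, 10000)).max() * 1.05   # envelope height
    out = []
    while len(out) < n:
        e = rng.uniform(0.0, e_max, size=n)   # propose uniformly under the envelope
        u = rng.uniform(0.0, f_max, size=n)
        out.extend(e[u < f(e)])               # accept points under the curve
    return np.array(out[:n])

energies = sample_fermi_dirac(100000)         # strongly degenerate case: T << mu
```

In the strongly degenerate limit the mean energy approaches the T = 0 value (3/5)μ, which gives a quick sanity check on the sampler.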
A note on simultaneous Monte Carlo tests
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I, are considered that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles estimated from simulated realizations of X(t) under the null hypothesis.
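The pointwise-quantile construction of an acceptance region can be sketched as follows. This is a simplified illustration, not the paper's exact construction: the band is built from the k-th pointwise extremes of the simulated curves, with k chosen so that the fraction of simulations leaving the band stays at or below the target level:

```python
import numpy as np

rng = np.random.default_rng(1)

def acceptance_band(sims, alpha=0.05):
    """Build lower/upper curves from simulated null realizations so that the
    global rejection rate (curve leaves the band for some t) is <= alpha."""
    m = sims.shape[0]
    srt = np.sort(sims, axis=0)
    lo, hi = srt[0], srt[m - 1]                 # start from the min/max band
    for k in range(1, m // 2):
        cand_lo, cand_hi = srt[k], srt[m - 1 - k]
        out = np.mean(np.any((sims < cand_lo) | (sims > cand_hi), axis=1))
        if out > alpha:                          # band would reject too often
            break
        lo, hi = cand_lo, cand_hi
    return lo, hi

def rejects(x, lo, hi):
    """The simultaneous test: reject if X(t) leaves the band for some t."""
    return bool(np.any((x < lo) | (x > hi)))

sims = rng.normal(size=(999, 50))               # X(t) on a 50-point grid under H0
lo, hi = acceptance_band(sims)
x_alt = rng.normal(loc=3.0, size=50)            # a curve clearly violating H0
```

The point of the construction is that the level is controlled globally over all of I, rather than pointwise at each t, which would inflate the rejection rate.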
Archimedes, the Free Monte Carlo simulator
Sellier, Jean Michel D
2012-01-01
Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004, and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, a GUI, etc. This document represents the first attempt at a complete manual. Many of the physics models implemented are described, and a detailed description is presented to enable users to write their own input decks. Please feel free to contact the author if you want to contribute to the project.
Cluster hybrid Monte Carlo simulation algorithms
Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
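A minimal version of the hybrid update for the spin-1/2 Ising model combines one Wolff cluster flip with one sweep of Metropolis single-spin flips. The lattice size, sweep counts, and temperatures below are illustrative choices, not the paper's simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def wolff_step(s, beta):
    """Grow and flip one Wolff cluster; bond probability p = 1 - exp(-2*beta)."""
    L = s.shape[0]
    p = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(L, size=2)
    seed = s[i, j]
    s[i, j] = -seed                       # flip sites as they join the cluster
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                       (x, (y + 1) % L), (x, (y - 1) % L)):
            if s[nx, ny] == seed and rng.random() < p:
                s[nx, ny] = -seed
                stack.append((nx, ny))

def metropolis_sweep(s, beta):
    """One sweep of single-spin-flip Metropolis updates."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * nb           # energy change of flipping s[i, j]
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

def hybrid_sweep(s, beta):
    """The hybrid update: one Wolff cluster flip plus one Metropolis sweep."""
    wolff_step(s, beta)
    metropolis_sweep(s, beta)

L = 16
s = rng.choice([-1, 1], size=(L, L))
for _ in range(300):
    hybrid_sweep(s, beta=1.0)             # T = 1.0, well below Tc ~ 2.269
m_low_T = abs(s.mean())                   # magnetization should be near 1 here
```

At low temperature the cluster moves order the lattice in a handful of sweeps, whereas Metropolis alone would take far longer; the local flips in turn decorrelate short-wavelength modes that large clusters barely touch.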
Introduction to Cluster Monte Carlo Algorithms
Luijten, E.
This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
Diffusion quantum Monte Carlo for molecules
Lester, W.A. Jr.
1986-07-01
A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation written with a shift in zero energy (E_T - V(R)) can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note, not psi^2) as a density of "walks." The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs.
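The walk-with-branching picture can be illustrated for the 1D harmonic oscillator V(x) = x²/2, whose exact ground-state energy is 1/2 in natural units. This is a toy sketch: the time step, walker count, and population-control gain are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01):
    """Diffusion Monte Carlo for V(x) = x^2/2: walkers diffuse (Gaussian step
    of variance dt) and branch with weight exp(-(V - E_T) dt); the trial
    energy E_T is steered to keep the population near n_walkers and settles
    at the ground-state energy."""
    x = rng.normal(size=n_walkers)
    e_t = 0.0
    trace = []
    for _ in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)   # diffusion step
        w = np.exp(-(0.5 * x**2 - e_t) * dt)                 # birth/death weight
        copies = (w + rng.random(x.size)).astype(int)        # stochastic rounding
        x = np.repeat(x, copies)                             # branch/kill walkers
        e_t += 0.1 * np.log(n_walkers / x.size)              # population control
        trace.append(e_t)
    return float(np.mean(trace[n_steps // 2:]))              # discard the transient

e0 = dmc_harmonic()    # should settle near the exact value 0.5
```

The population-control feedback is what makes E_T an energy estimator: a trial energy above the ground state inflates the weights and the population grows, which pushes E_T back down, and vice versa.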
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-24
This presentation covers (1) exascale computing: different technologies and getting there; (2) the high-performance proof-of-concept MCMini: features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library): purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods: it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Ahmad, Syed Bilal, E-mail: ahmadsb@mcmaster.ca [TAB-104D, Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada L8S 4K1 (Canada); Thompson, Jeroen E., E-mail: Jeroen.thompson@gmail.com [Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada L8S 4K1 (Canada); McNeill, Fiona E., E-mail: fmcneill@mcmaster.ca [Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada L8S 4K1 (Canada); Byun, Soo Hyun, E-mail: soohyun@mcmaster.ca [Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada L8S 4K1 (Canada); Prestwich, William V., E-mail: prestwic@mcmaster.ca [Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada L8S 4K1 (Canada)
2013-01-15
The goal of a microbeam is to deliver a highly localized and small dose to the biological medium. This can be achieved by using a set of collimators that confine the charged particle beam to a very small spatial area of the order of microns in diameter. By using a system that combines an appropriate beam detection method with a beam shut-down mechanism, a predetermined and counted number of energetic particles can be delivered to targeted biological cells. Since the shutter and the collimators block a significant proportion of the beam, there is a probability of the production of low energy X-rays and secondary electrons through interactions with the beam. There is little information in the biological microbeam literature on potential X-ray production. We therefore used Monte Carlo simulations to investigate the potential production of particle-induced X-rays and secondary electrons in the collimation system (which is predominantly made of tungsten) and the subsequent possible effects on the total absorbed dose delivered to the biological medium. We found, through the simulation, no evidence of the escape of X-rays or secondary electrons from the collimation system for proton energies up to 3 MeV, as the thickness of the collimators is sufficient to reabsorb all of the generated low energy X-rays and secondary electrons. However, if the proton energy exceeds 3 MeV, our simulations suggest that 10 keV X-rays can escape the collimator and expose the overlying layer of cells and medium. If the proton energy is further increased to 4.5 MeV or beyond, the collimator can become a significant source of 10 keV and 59 keV X-rays. These additional radiation fields could have effects on cells, and these results should be verified through experimental measurement. We suggest that researchers using biological microbeams at higher energies need to be aware that cells may be exposed to a mixed LET radiation field and be careful in their interpretation of their results.
A Chaparian
2014-01-01
The objectives of this paper were the calculation and comparison of the effective doses, the risks of exposure-induced cancer, and the dose reduction in the gonads for male and female patients in different projections of some X-ray examinations. Radiographies of the lumbar spine [in the eight projections of anteroposterior (AP), posteroanterior (PA), right lateral (RLAT), left lateral (LLAT), right anterior-posterior oblique (RAO), left anterior-posterior oblique (LAO), right posterior-anterior oblique (RPO), and left posterior-anterior oblique (LPO)], abdomen (in the two projections of AP and PA), and pelvis (in the two projections of AP and PA) were investigated. A solid-state dosimeter was used for measuring the entrance skin exposure. A Monte Carlo program was used for calculation of the effective doses, the risks of radiation-induced cancer, and the doses to the gonads for the different projections. The results of this study showed that the PA projection in abdomen, lumbar spine, and pelvis radiographies caused 50%-57% lower effective doses than the AP projection and a 50%-60% reduction in radiation risks. Use of the LAO projection in lumbar spine X-ray examinations caused a 53% lower effective dose than the RPO projection and 56% and 63% reductions in radiation risk for males and females, respectively, and the RAO projection caused a 28% lower effective dose than the LPO projection and 52% and 39% reductions in radiation risk for males and females, respectively. Regarding dose reduction in the gonads, using the PA position rather than AP in radiographies of the abdomen, lumbar spine, and pelvis can reduce the ovary doses in women by 38%, 31%, and 25%, respectively, and the testicle doses in males by 76%, 86%, and 94%, respectively. For the oblique projections of the lumbar spine X-ray examination, employment of LAO rather than RPO and of RAO rather than LPO demonstrated 22% and 13% reductions in the ovary doses and 66% and 54% reductions in the testicle doses, respectively.
State-of-the-art Monte Carlo 1988
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
US Army Research Laboratory report ARL-TN-0684, July 2015, by William... Introduction: Monte Carlo (MC) methods are often used...
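The central accuracy question of such a report, how the statistical error of an MC estimate scales with the number of iterations, can be demonstrated directly: the standard error of a Monte Carlo mean shrinks like 1/√n. This is a generic illustration (hit-or-miss estimation of pi), unrelated to the report's specific simulations:

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_pi(n):
    """Hit-or-miss Monte Carlo: fraction of n uniform points inside the quarter disk."""
    xy = rng.random((n, 2))
    return 4.0 * np.mean((xy ** 2).sum(axis=1) < 1.0)

# Replicate each estimate 200 times and measure its spread: growing n by a
# factor of 100 should shrink the standard error by a factor of ~10.
errs = {n: np.std([estimate_pi(n) for _ in range(200)]) for n in (100, 10000)}
```

The practical consequence is the usual one: each extra decimal digit of accuracy costs roughly 100 times more iterations.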
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Alternative Monte Carlo Approach for General Global Illumination
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm in the framework of the new computing scheme was developed and implemented. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.
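The zero-variance importance-sampling idea underlying such a scheme can be shown on a one-dimensional toy integral: if samples are drawn from a density proportional to the integrand, the weighted estimator has zero variance. The integrand below is illustrative, not a rendering computation:

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    return 3.0 * x ** 2               # integrand on [0, 1]; exact integral is 1

# Plain Monte Carlo: uniform samples, estimator f(U).
u = rng.random(10000)
plain = f(u)

# Zero-variance importance sampling: draw x ~ p(x) = 3x^2 via the inverse
# CDF x = U^(1/3); the weighted estimator f(x)/p(x) is then identically 1,
# so its variance vanishes and a single sample already gives the answer.
x = rng.random(10000) ** (1.0 / 3.0)
weighted = f(x) / (3.0 * x ** 2)
```

In rendering the exact solution is of course unknown, so the sampling density can only approximate the integrand; the closer the approximation, the smaller the variance, which is the driving idea behind importance-driven global illumination.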
Validation of Compton Scattering Monte Carlo Simulation Models
Weidenspointner, Georg; Hauf, Steffen; Hoff, Gabriela; Kuster, Markus; Pia, Maria Grazia; Saracco, Paolo
2014-01-01
Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general purpose Monte Carlo systems; some have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present first and preliminary results concerning total and differential Compton scattering cross sections.
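A common ingredient of such simulation models, sampling the scattering-angle cosine from the Klein-Nishina differential cross section, can be sketched with simple rejection sampling. This is an illustration: the photon energy and grid resolution are arbitrary, and production codes typically use faster samplers such as Kahn's method:

```python
import numpy as np

rng = np.random.default_rng(6)

def klein_nishina(cos_t, k):
    """Unnormalized Klein-Nishina dsigma/dOmega; k = photon energy / (m_e c^2)."""
    r = 1.0 / (1.0 + k * (1.0 - cos_t))           # scattered/incident energy ratio
    return r ** 2 * (r + 1.0 / r - (1.0 - cos_t ** 2))

def sample_cos_theta(n, k):
    """Rejection sampling of the scattering-angle cosine."""
    grid = np.linspace(-1.0, 1.0, 2001)
    f_max = klein_nishina(grid, k).max() * 1.01   # envelope height
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0, size=n)
        u = rng.uniform(0.0, f_max, size=n)
        out.extend(c[u < klein_nishina(c, k)])
    return np.array(out[:n])

k = 0.2                                           # roughly a 100 keV photon
cos_t = sample_cos_theta(200000, k)
e_ratio = 1.0 / (1.0 + k * (1.0 - cos_t))         # Compton energy shift E'/E
```

The sampled energy ratios are bounded between 1 (forward scattering) and 1/(1+2k) (backscatter), which gives a quick physical sanity check.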
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and is accompanied by a p-value and a graphical interpretation which shows which subtest or which distances of the used test function...
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2015-01-07
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
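The telescoping idea that MIMC generalizes can be sketched in its simpler MLMC form for a geometric-Brownian-motion toy problem, where coarse and fine Euler paths share the same Brownian increments so that the level corrections have small variance. All model parameters below are illustrative; MIMC itself would use high-order mixed differences over several indices rather than the single-index differences shown here:

```python
import numpy as np

rng = np.random.default_rng(7)

S0, R, SIGMA, T = 1.0, 0.05, 0.2, 1.0             # illustrative GBM parameters

def euler_final(n_paths, n_steps):
    """Terminal values of Euler paths of dS = r S dt + sigma S dW."""
    dt = T / n_steps
    dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    s = np.full(n_paths, S0)
    for i in range(n_steps):
        s = s * (1.0 + R * dt + SIGMA * dw[:, i])
    return s

def euler_pair(n_paths, n_steps):
    """Coupled fine (n_steps) and coarse (n_steps/2) paths sharing increments."""
    dt = T / n_steps
    dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    s_f = np.full(n_paths, S0)
    for i in range(n_steps):
        s_f = s_f * (1.0 + R * dt + SIGMA * dw[:, i])
    s_c = np.full(n_paths, S0)
    dw_c = dw[:, 0::2] + dw[:, 1::2]              # pairwise-summed increments
    for i in range(n_steps // 2):
        s_c = s_c * (1.0 + R * 2.0 * dt + SIGMA * dw_c[:, i])
    return s_f, s_c

def mlmc_mean(L=5, n_paths=40000):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = np.mean(euler_final(n_paths, 1))        # level 0: one Euler step
    for l in range(1, L + 1):
        f, c = euler_pair(n_paths, 2 ** l)        # fine 2^l vs coarse 2^(l-1) steps
        est += np.mean(f - c)
    return float(est)

est = mlmc_mean()    # estimates E[S_T]; the exact value is exp(R*T) ~ 1.0513
```

Because the coupled differences P_l - P_{l-1} have variance decaying with level, most samples can be spent on the cheap coarse levels; MIMC pushes the same idea further by differencing along several discretization parameters at once.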
Chemical application of diffusion quantum Monte Carlo
Reynolds, P. J.; Lester, W. A., Jr.
1983-10-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2016-01-06
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
Discrete range clustering using Monte Carlo methods
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
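The basic simulated-annealing clustering loop compared in the paper can be sketched as follows. This is a minimal 1D illustration with hypothetical range data; the proposal scheme, cooling schedule, and cost function are illustrative choices, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(8)

def cluster_cost(points, labels, k):
    """Within-cluster sum of squared deviations from each cluster mean."""
    cost = 0.0
    for c in range(k):
        pts = points[labels == c]
        if pts.size:
            cost += float(np.sum((pts - pts.mean()) ** 2))
    return cost

def anneal_clusters(points, k=2, n_iter=20000, t0=1.0, cooling=0.9995):
    """Simulated annealing over label assignments: move one random point to a
    random cluster and accept with the Metropolis rule at temperature temp."""
    labels = rng.integers(k, size=points.size)
    cost = cluster_cost(points, labels, k)
    temp = t0
    for _ in range(n_iter):
        i = rng.integers(points.size)
        proposal = labels.copy()
        proposal[i] = rng.integers(k)           # reassign one point
        new_cost = cluster_cost(points, proposal, k)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            labels, cost = proposal, new_cost   # Metropolis acceptance
        temp *= cooling                         # geometric cooling
    return labels, cost

# Sparse range returns from two well-separated objects along the range axis.
points = np.concatenate([rng.normal(10.0, 0.5, 30), rng.normal(50.0, 0.5, 30)])
labels, cost = anneal_clusters(points, k=2)
```

With the temperature driven to zero the acceptance rule reduces to greedy descent, which is the "basic Monte Carlo" end of the comparison; the annealing schedule is what allows occasional uphill moves early on.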
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne v8' two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
Information Geometry and Sequential Monte Carlo
Sim, Aaron; Stumpf, Michael P H
2012-01-01
This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and FitzHugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects in application of these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been spent on the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD is able to improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on the above-mentioned technique, we present an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and CNR with a Monte Carlo simulation. We simulated a CdTe-based PCD X-ray imaging system and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom were acquired using energy windows above and below the iodine K-edge absorption energy (33.2 keV). According to the results, the contrast and standard deviation decreased when the subtraction energy width of the energy window was increased. Also, the CNR using the KELS imaging technique is higher than that of images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy widths of the energy window were 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized the subtraction energy width of the energy window, and based on
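The log-subtraction step itself can be illustrated on synthetic transmission images. All attenuation coefficients, thicknesses, and fluences below are invented for illustration and are not the paper's simulated values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative (not measured) linear attenuation coefficients in 1/cm just
# below and above the iodine K-edge at 33.2 keV.
MU_PMMA_BELOW, MU_PMMA_ABOVE = 0.35, 0.33        # background barely changes
MU_I_BELOW, MU_I_ABOVE = 6.0, 28.0               # iodine jumps at the K-edge

def transmission_image(mu_bg, mu_i, n=64, thickness=5.0, i0=20000):
    """Poisson-noisy transmission image of a PMMA slab with an iodine disk."""
    y, x = np.mgrid[:n, :n]
    disk = (x - n / 2) ** 2 + (y - n / 2) ** 2 < (n // 8) ** 2
    t_i = np.where(disk, 0.1, 0.0)               # 1 mm of contrast agent
    mean = i0 * np.exp(-mu_bg * thickness - mu_i * t_i)
    return rng.poisson(mean).astype(float), disk

below, disk = transmission_image(MU_PMMA_BELOW, MU_I_BELOW)
above, _ = transmission_image(MU_PMMA_ABOVE, MU_I_ABOVE)

# Log-subtraction: the nearly energy-independent background cancels while
# the iodine signal (large K-edge jump) remains.
kels = np.log(below + 1.0) - np.log(above + 1.0)

signal = kels[disk].mean() - kels[~disk].mean()  # iodine contrast
cnr = signal / kels[~disk].std()                 # contrast-to-noise ratio
```

The trade-off the abstract describes then appears naturally: widening the energy windows gathers more counts (lower noise) but lets the background attenuation differ more between the two windows, reducing the cancellation and hence the contrast.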
Lee, Youngjin, E-mail: radioyoungj@gmail.com [Department of Radiological Science, Eulji University, 553, Sanseong-daero, Sujeong-gu, Seongnam-si, Gyeonggi-do (Korea, Republic of); Lee, Amy Candy [Department of Mathematics and Statistics, McGill University (Canada); Kim, Hee-Joung [Department of Radiological Science and Radiation Convergence Engineering, Yonsei University (Korea, Republic of)
2016-09-11
Monte Carlo Numerical Models for Nuclear Logging Applications
Fusheng Li
2012-06-01
Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services, such as formation lithology/mineralogy determination, are also provided using more complex nuclear logging tools. Some parameters can be measured with neutron logging tools, and some can only be measured with a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, in most cases there are no analytical answers when complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and in the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems in forward simulation problems. In this case, the system parameters, including geometry, materials, nuclear sources, etc., are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons, and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and nuclear cross-section data (probabilities of interaction). The deposited energies of particles entering the detectors are then recorded and tallied, and the tool responses for that scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments in Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models
Monte Carlo simulations for design of the KFUPM PGNAA facility
Naqvi, A A; Maslehuddin, M; Kidwai, S
2003-01-01
Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...
TRIPOLI-3: a neutron/photon Monte Carlo transport code
Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees
2001-07-01
The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma ray problems in three-dimensional geometries using the Monte Carlo method. The code is devoted to both shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)
Discrete angle biasing in Monte Carlo radiation transport
Cramer, S.N.
1988-05-01
An angular biasing procedure is presented for use in Monte Carlo radiation transport with discretized scattering angle data. As in more general studies, the method is shown to reduce statistical weight fluctuations when it is combined with the exponential transformation. This discrete data application has a simple analytic form which is problem independent. The results from a sample problem illustrate the variance reduction and efficiency characteristics of the combined biasing procedures, and a large neutron and gamma ray integral experiment is also calculated. A proposal is given for the possible code generation of the biasing parameter p and the preferential direction Ω{sub 0} used in the combined biasing schemes.
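The exponential transformation mentioned above can be illustrated on a one-dimensional toy problem: estimating the uncollided transmission exp(-Σt) through a thick slab by sampling path lengths from a stretched (less attenuating) distribution and correcting with statistical weights. This is a hedged sketch, not the paper's combined angular scheme; the cross sections and slab thickness are invented for illustration:

```python
import math
import random

def transmission_analog(sigma, thickness, n, rng):
    # Analog estimator of exp(-sigma * thickness): for deep penetration
    # almost no sampled free paths exceed the slab, so it scores rarely.
    hits = sum(rng.expovariate(sigma) > thickness for _ in range(n))
    return hits / n

def transmission_biased(sigma, sigma_b, thickness, n, rng):
    # Exponential-transform estimator: sample path lengths from a stretched
    # exponential pdf (sigma_b < sigma) so many more particles cross, and
    # multiply each scoring particle by the ratio of true to biased pdf.
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(sigma_b)
        if s > thickness:
            w = (sigma * math.exp(-sigma * s)) / (sigma_b * math.exp(-sigma_b * s))
            total += w
    return total / n

rng = random.Random(1)
exact = math.exp(-10.0)  # sigma = 1 per cm, thickness = 10 cm: deep penetration
print(exact, transmission_biased(1.0, 0.2, 10.0, 20000, rng))
```

Both estimators are unbiased, but at this depth the analog score has relative error near 100% while the biased one is within a few percent for the same sample count, which is the point of the transformation.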
Monte Carlo Simulation Of Emission Tomography And Other Medical Imaging Techniques
Harrison, Robert L.
2010-01-01
An introduction to Monte Carlo simulation of emission tomography. This paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations. PMID:20733931
Forbang, R Teboh [Johns Hopkins University, Baltimore, MD (United States)
2014-06-01
Purpose: MultiPlan, the treatment planning system for the CyberKnife Robotic Radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic but accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with the calculation results; the advantage is that it also accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and compare, dosimetrically, treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) the C-spine, (c) the upper T-spine, (d) the lower T-spine, (e) the L-spine, and (f) the sacrum. RT was first used to compute clinically valid treatment plans. The same treatment parameters (monitor units, beam weights, etc.) were then used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate any differences. All MC calculations were performed at a 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3–C7), upper T-spine (T4–T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage for our clinical practice is >95%. Coverage can be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear to be similar for most sites except the T-spine area, where most of the beams traverse lung tissue. In this case, MC is highly recommended.
Mampuya, Wambaka Ange [Department of Radiation Oncology and Image–Applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto (Japan); Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp [Department of Radiation Oncology and Image–Applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto (Japan); Nakamura, Akira; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Miyabe, Yuki; Narabayashi, Masaru; Sakanaka, Katsuyuki; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image–Applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto (Japan)
2013-04-01
The objective of this study was to evaluate the differences in dose-volumetric data obtained using the analytical anisotropic algorithm (AAA) vs the x-ray voxel Monte Carlo (XVMC) algorithm for stereotactic body radiation therapy (SBRT) for lung cancer. Dose-volumetric data from 20 patients treated with SBRT for solitary lung cancer generated using the iPlan XVMC for the Novalis system consisting of a 6-MV linear accelerator and micro-multileaf collimators were recalculated with the AAA in Eclipse using the same monitor units and identical beam setup. The mean isocenter dose was 100.2% and 98.7% of the prescribed dose according to XVMC and AAA, respectively. Mean values of the maximal dose (D{sub max}), the minimal dose (D{sub min}), and dose received by 95% volume (D{sub 95}) for the planning target volume (PTV) with XVMC were 104.3%, 75.1%, and 86.2%, respectively. When recalculated with the AAA, those values were 100.8%, 77.1%, and 85.4%, respectively. Mean dose parameter values considered for the normal lung, namely the mean lung dose, V{sub 5}, and V{sub 20}, were 3.7 Gy, 19.4%, and 5.0% for XVMC and 3.6 Gy, 18.3%, and 4.7% for the AAA, respectively. All of these dose-volumetric differences between the 2 algorithms were within 5% of the prescribed dose. The effect of PTV size and tumor location, respectively, on the differences in dose parameters for the PTV between the AAA and XVMC was evaluated. A significant effect of the PTV on the difference in D{sub 95} between the AAA and XVMC was observed (p = 0.03). Differences in the marginal doses, namely D{sub min} and D{sub 95}, were statistically significant between peripherally and centrally located tumors (p = 0.04 and p = 0.02, respectively). Tumor location and volume might have an effect on the differences in dose-volumetric parameters. The differences between AAA and XVMC were considered to be within an acceptable range (<5 percentage points)
Variational Monte Carlo study of pentaquark states
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-12-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC{_}RUN, the web application MC{_}Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC{_}RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
Atomistic Monte Carlo simulation of lipid membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Experimental Monte Carlo Quantum Process Certification
Steffen, L; Fedorov, A; Baur, M; Wallraff, A
2012-01-01
Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two qubit gates, such as the cphase and the cnot gate, and three qubit gates, such as the Toffoli gate and two sequential cphase gates.
Gas discharges modeling by Monte Carlo technique
Savić Marija
2010-01-01
Full Text Available The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas-phase ionizations by fast neutrals. In this paper we built a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sourc. Sci. Technol. 8 (1999) R1].
On nonlinear Markov chain Monte Carlo
Andrieu, Christophe; Doucet, Arnaud; Del Moral, Pierre (DOI: 10.3150/10-BEJ307)
2011-01-01
Let $\\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\\pi\\in\\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman--Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K:\\mathscr{P}(E)\\times E\\rightarrow\\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster--Lyapunov conditions. We investigate the performance of our approximations with some simulations.
Monte Carlo exploration of warped Higgsless models
Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu
2004-10-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2){sub L} x SU(2){sub R} x U(1){sub B-L} gauge group in an AdS{sub 5} bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, {approx_equal} 10 TeV, in W{sub L}{sup +}W{sub L}{sup -} elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)
Monte Carlo Implementation of Polarized Hadronization
Matevosyan, Hrayr H; Thomas, Anthony W
2016-01-01
We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...
Commensurabilities between ETNOs: a Monte Carlo survey
Marcos, C de la Fuente
2016-01-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...
Variable length trajectory compressible hybrid Monte Carlo
Nishimura, Akihiko
2016-01-01
Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
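The basic HMC update that the abstract generalizes (simulate Hamiltonian dynamics with a leapfrog integrator for a fixed integration time, then apply a Metropolis accept/reject step) can be sketched as follows. This is an illustrative baseline, not the variable-trajectory CHMC extension itself; the target, step size, and trajectory length are assumptions:

```python
import math
import random

def leapfrog(q, p, grad, step, n_leap):
    # Leapfrog integration of Hamiltonian dynamics for
    # H(q, p) = -logp(q) + p^2 / 2 (unit mass).
    p += 0.5 * step * grad(q)
    for i in range(n_leap):
        q += step * p
        if i < n_leap - 1:
            p += step * grad(q)
    p += 0.5 * step * grad(q)
    return q, p

def hmc(logp, grad, q0, n_samples, step=0.3, n_leap=10, seed=2):
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                      # resample momentum
        q_new, p_new = leapfrog(q, p, grad, step, n_leap)
        h_old = -logp(q) + 0.5 * p * p
        h_new = -logp(q_new) + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                                # Metropolis accept
        samples.append(q)
    return samples

# Sample a standard normal: logp(q) = -q^2/2, grad(q) = -q.
s = hmc(lambda q: -0.5 * q * q, lambda q: -q, 0.0, 5000)
print(sum(s) / len(s))
```

In this standard scheme every trajectory uses the same integration time (`step * n_leap`); the framework in the paper is precisely about relaxing that constraint while keeping a valid acceptance rule.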
Nuclear reactions in Monte Carlo codes.
Ferrari, A; Sala, P R
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references.
Geometric Monte Carlo and Black Janus Geometries
Bak, Dongsu; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil
2016-01-01
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Accurate barrier heights using diffusion Monte Carlo
Krongchon, Kittithat; Wagner, Lucas K
2016-01-01
Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.
Recent Developments in Quantum Monte Carlo: Methods and Applications
Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.
2007-12-01
The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.
Quantum Monte Carlo Simulations: Algorithms, Limitations and Applications
DERAEDT, H
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown
QWalk: A Quantum Monte Carlo Program for Electronic Structure
Wagner, Lucas K; Mitas, Lubos
2007-01-01
We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org
Reporting Monte Carlo Studies in Structural Equation Modeling
Boomsma, Anne
2013-01-01
In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, Claudia
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the accuracy in relation to the efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
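The low-probability problem, and the importance-sampling remedy named in the title, can be illustrated with a standard toy example: estimating the tail probability P(X > 4) ≈ 3.17e-5 for a standard normal. Crude Monte Carlo almost never lands a sample in the failure region at modest sample sizes, while drawing from a shifted proposal and reweighting by the density ratio does. This is a generic sketch, not the report's scheme:

```python
import math
import random

def phi(x, mu=0.0):
    # Normal density with unit variance centered at mu.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def crude_mc(n, rng):
    # Crude Monte Carlo: with P ~ 3e-5 and n = 20000, the expected number
    # of "failures" is under one, so this usually returns 0.
    return sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def importance_mc(n, rng):
    # Importance sampling: draw from N(4, 1), centered on the failure
    # threshold, and reweight each hit by phi(x) / phi(x; mu=4).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)
        if x > 4.0:
            total += phi(x) / phi(x, mu=4.0)
    return total / n

rng = random.Random(3)
print(importance_mc(20000, rng))  # close to the exact tail 3.167e-5
```

With the same 20000 samples, the importance-sampled estimate is accurate to a few percent, which is exactly the accuracy-versus-efficiency trade-off the abstract describes.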
The Monte Carlo Method. Popular Lectures in Mathematics.
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a stochastic statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Sensitivity of Monte Carlo simulations to input distributions
RamoRao, B. S.; Srikanta Mishra, S.; McNeish, J.; Andrews, R. W.
2001-07-01
The sensitivity of the results of a Monte Carlo simulation to the shapes and moments of the probability distributions of the input variables is studied. An economical computational scheme is presented as an alternative to the replicate Monte Carlo simulations and is explained with an illustrative example. (Author) 4 refs.
Quantum Monte Carlo using a Stochastic Poisson Solver
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually, quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
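The classical "Walk on Spheres" idea (shown here in its textbook form for Laplace's equation, not the authors' modified Green's-function variant) can be sketched in a few lines: jump repeatedly to a uniform point on the largest sphere contained in the domain until the walker is within a small tolerance of the boundary, then average the boundary values at the exit points. The unit-disk domain and boundary data below are illustrative assumptions:

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, n_walks, eps=1e-4, seed=4):
    """Estimate the harmonic function u solving Laplace's equation in the
    unit disk at (x, y), given Dirichlet data on the circle, by averaging
    boundary values at the exit points of spherical jumps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)       # distance to the circle
            if r < eps:
                break
            theta = rng.uniform(0.0, 2 * math.pi)
            px += r * math.cos(theta)           # jump uniformly on the
            py += r * math.sin(theta)           # largest inscribed circle
        norm = math.hypot(px, py)
        total += boundary_value(px / norm, py / norm)
    return total / n_walks

# Boundary data g(x, y) = x has harmonic extension u(x, y) = x,
# so the estimate at (0.3, 0.2) should be close to 0.3.
print(walk_on_spheres(0.3, 0.2, lambda bx, by: bx, 4000))
```

Each walk needs only O(log(1/ε)) jumps, which is why the method scales well when the potential must be re-evaluated for every walker and every step.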
Further experience in Bayesian analysis using Monte Carlo Integration
H.K. van Dijk (Herman); T. Kloek (Teun)
1980-01-01
An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by
New Approaches and Applications for Monte Carlo Perturbation Theory
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculation, perturbation calculation based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a random statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was designed using the Monte Carlo method.
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, C.
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Accelerated GPU based SPECT Monte Carlo simulations
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.
Monte Carlo scatter correction for SPECT
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model that produces less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
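For comparison, the triple energy window (TEW) method mentioned above estimates the scatter inside the photopeak window with a trapezoidal interpolation between two narrow flanking windows. The counts and window widths below are hypothetical.

```python
def tew_scatter_estimate(c_lower, c_upper, c_peak, w_lower, w_upper, w_peak):
    """Triple energy window (TEW) scatter estimate: approximate the scatter
    counts in the photopeak window by a trapezoid whose side heights are the
    count densities (counts per keV) in two narrow flanking windows."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    primary = max(c_peak - scatter, 0.0)
    return scatter, primary

# Hypothetical 99mTc acquisition: a 28 keV photopeak window with
# 3 keV sub-windows on either side.
scatter, primary = tew_scatter_estimate(
    c_lower=900.0, c_upper=150.0, c_peak=10000.0,
    w_lower=3.0, w_upper=3.0, w_peak=28.0)
# scatter = (300 + 50) * 28 / 2 = 4900 counts, primary = 5100 counts
```

TEW needs no simulation at all, which is why it is the standard baseline against which Monte Carlo estimators such as SSE are judged.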
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
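The power-iteration analogy in the abstract can be made concrete with a small spatially discretised fission matrix; the 2x2 matrix below is a toy stand-in with a dominance ratio of 0.7, not data from MCNP.

```python
import numpy as np

def power_iteration(F, tol=1e-12, max_iter=10000):
    """Power iteration on a (spatially discretised) fission matrix F:
    repeatedly apply F to the source vector and renormalise. The
    normalisation constant converges to the fundamental eigenvalue (k_eff)
    and the vector to the fundamental-mode fission source."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k = s_new.sum()          # k_eff estimate when the source sums to one
        s_new /= k
        if np.linalg.norm(s_new - s) < tol:
            return k, s_new
        s = s_new
    return k, s

# Toy two-region fission matrix: eigenvalues 1.0 and 0.7, so the error
# in the source shrinks by a factor 0.7 (the dominance ratio) per sweep.
F = np.array([[0.9, 0.2],
              [0.1, 0.8]])
k_eff, source = power_iteration(F)   # k_eff -> 1.0, source -> [2/3, 1/3]
```

With a dominance ratio near one the per-sweep error reduction vanishes, which is exactly the slow-convergence regime the fission matrix capability is designed to accelerate.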
Vectorized Monte Carlo methods for reactor lattice analysis
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Application of Monte Carlo methods in tomotherapy and radiation biophysics
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focusses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular, the use and development of electron--gamma, neutron--gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory
2010-01-01
Monte Carlo simulations for particle and {gamma}-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and {gamma}-rays. We calculate neutron-induced reactions on {sup 51}V to demonstrate unique advantages of the Monte Carlo method, which are the correlated {gamma}-rays in the neutron radiative capture reaction, the neutron and {gamma}-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with Monte Carlo simulations.
Jaradat, Adnan K; Biggs, Peter J
2007-05-01
The calculation of shielding barrier thicknesses for radiation therapy facilities according to the NCRP formalism is based on the use of broad beams (that is, the maximum possible field sizes). However, in practice, treatment fields used in radiation therapy are, on average, less than half the maximum size. Indeed, many contemporary treatment techniques call for reduced field sizes to reduce co-morbidity and the risk of second cancers. Therefore, published tenth value layers (TVLs) for shielding materials do not apply to these very small fields. There is, hence, a need to determine the TVLs for various beam modalities as a function of field size. The attenuation of (60)Co gamma rays and photons of 4, 6, 10, 15, and 18 MV bremsstrahlung x-ray beams by concrete has been studied using the Monte Carlo technique (MCNP version 4C2) for beams of half-opening angles of 0, 3, 6, 9, 12, and 14 degrees. The distance between the x-ray source and the distal surface of the shielding wall was fixed at 600 cm, a distance that is typical for modern radiation therapy rooms. The maximum concrete thickness varied between 76.5 cm and 151.5 cm for (60)Co and 18 MV x rays, respectively. Detectors were placed at 630 cm, 700 cm, and 800 cm from the source. TVLs have been determined down to the third TVL. Energy spectra for 4, 6, 10, 15, and 18 MV x rays for 10 x 10 cm(2) and 40 x 40 cm(2) field sizes were used to generate depth dose curves in water that were compared with experimentally measured values.
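The NCRP shielding formalism referred to above converts a required transmission factor B into a number of tenth-value layers, n = log10(1/B), with the first TVL distinguished from the equilibrium TVL because of spectral hardening. A minimal sketch, using illustrative concrete TVL values for a 6 MV beam:

```python
import math

def barrier_thickness(transmission, tvl1, tvle):
    """NCRP-style barrier thickness from a required transmission factor B:
    n = log10(1/B) tenth-value layers, charging the first at TVL1 and the
    remaining n - 1 at the equilibrium value TVLe."""
    n = math.log10(1.0 / transmission)
    return tvl1 + (n - 1.0) * tvle

# Illustrative numbers: required transmission B = 1e-4 (four TVLs) for a
# 6 MV beam in concrete, with assumed TVL1 = 37 cm and TVLe = 33 cm.
t = barrier_thickness(1e-4, tvl1=37.0, tvle=33.0)   # 37 + 3 * 33 = 136 cm
```

The paper's point is that the tabulated TVL1/TVLe inputs to this formula were derived for broad beams; the Monte Carlo study supplies field-size-dependent replacements.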
Pozuelo, F.; Querol, A.; Juste, B.; Gallardo, S.; Rodenas, J.; Verdu, G.
2012-07-01
Obtaining the primary X-ray spectrum makes it possible to determine the quality of a photon beam produced by an X-ray tube, since the dosimetric characteristics of a radiation beam are directly related to the primary X-ray spectrum. In this work, depth dose curves obtained in the energy range of diagnostic radiology, between 40 and 130 keV, are studied.
Arabi, Hosein; Asl, Ali Reza Kamali; Ay, Mohammad Reza; Zaidi, Habib
2011-01-01
Purpose: The variable resolution x-ray (VRX) CT scanner provides substantial improvement in the spatial resolution by matching the scanner's field of view (FOV) to the size of the object being imaged. Intercell x-ray cross-talk is one of the most important factors limiting the spatial resolution of
Sharma, Diksha; Badano, Aldo
2013-03-01
hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
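The Swank factor used in comparisons like the one above is the ratio of moments of the pulse-height distribution, I = M1^2 / (M0 M2). A minimal sketch from a sample of pulse heights (the numbers are hypothetical):

```python
def swank_factor(pulse_heights):
    """Swank information factor I = M1^2 / (M0 * M2), computed from the raw
    moments of a sample of detected pulse heights (e.g. optical photons per
    absorbed x-ray). I = 1 for a noiseless gain stage; gain fluctuations
    push it below 1."""
    n = len(pulse_heights)
    m0 = 1.0                                      # zeroth moment of the normalised distribution
    m1 = sum(pulse_heights) / n                   # mean pulse height
    m2 = sum(h * h for h in pulse_heights) / n    # second raw moment
    return m1 * m1 / (m0 * m2)

# A constant gain gives exactly I = 1; any spread lowers it.
assert swank_factor([100.0] * 4) == 1.0
i = swank_factor([80.0, 120.0])   # m1 = 100, m2 = 10400, so I = 10000/10400
```

A lower Swank factor means more gain noise and hence a worse detective quantum efficiency, which is why it is a natural figure of merit for validating scintillator simulations.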
Evaluation of atomic electron binding energies for Monte Carlo particle transport
Pia, Maria Grazia; Batic, Matej; Begalli, Marcia; Kim, Chan Hyeong; Quintieri, Lina; Saracco, Paolo
2011-01-01
A survey of atomic binding energies used by general purpose Monte Carlo systems is reported. Various compilations of these parameters have been evaluated; their accuracy is estimated with respect to experimental data. Their effects on physics quantities relevant to Monte Carlo particle transport are highlighted: X-ray fluorescence emission, electron and proton ionization cross sections, and Doppler broadening in Compton scattering. The effects due to different binding energies are quantified with respect to experimental data. The results of the analysis provide quantitative ground for the selection of binding energies to optimize the accuracy of Monte Carlo simulation in experimental use cases. Recommendations on software design dealing with these parameters and on the improvement of data libraries for Monte Carlo simulation are discussed.
Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy
Paro AD
2016-09-01
Full Text Available Autumn D Paro,1 Mainul Hossain,2 Thomas J Webster,1,3,4 Ming Su1,4 1Department of Chemical Engineering, Northeastern University, Boston, MA, USA; 2NanoScience Technology Center and School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, Florida, USA; 3Excellence for Advanced Materials Research, King Abdulaziz University, Jeddah, Saudi Arabia; 4Wenzhou Institute of Biomaterials and Engineering, Chinese Academy of Science, Wenzhou Medical University, Zhejiang, People’s Republic of China Abstract: Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. Keywords: nanoparticle, dose enhancement, Monte Carlo simulation, analytical simulation, radiation therapy, tumor cell, X-ray
Bi, X J; Chen, D; Chen, W Y; Cui, S W; Danzengluobu; Ding, L K; Ding, X H; Feng, C F; Feng, Zhaoyang; Feng, Z Y; Gou, Q B; Guo, H W; Guo, Y Q; He, H H; He, Z T; Hibino, K; Hotta, N; Hu, Haibing; Hu, H B; Huang, J; Li, W J; Jia, H Y; Jiang, L; Kajino, F; Kasahara, K; Katayose, Y; Kato, C; Kawata, K; Labaciren; Le, G M; Li, A F; Liu, C; Liu, J S; Lu, H; Meng, X R; Mizutani, K; Munakata, K; Nanjo, H; Nishizawa, M; Ohnishi, M; Ohta, I; Ozawa, S; Qian, X L; Qu, X B; Saito, T; Saito, T Y; Sakata, M; Sako, T K; Shao, J; Shibata, M; Shiomi, A; Shirai, T; Sugimoto, H; Takita, M; Tan, Y H; Tateyama, N; Torii, S; Tsuchiya, H; Udo, S; Wang, H; Wu, H R; Xue, L; Yamamoto, Y; Yang, Z; Yasue, S; Yuan, A F; Yuda, T; Zhai, L M; Zhang, H M; Zhang, J L; Zhang, X Y; Zhang, Y; Zhang, Yi; Zhang, Ying; Zhaxisangzhu; Zhou, X X
2013-01-01
A new hybrid experiment has been operated by the ASγ experiment at Tibet, China, since August 2011; it consists of a low-threshold burst-detector grid (YAC-II, Yangbajing Air shower Core array), the Tibet air-shower array (Tibet-III) and a large underground water Cherenkov muon detector (MD). In this paper, the capability of measuring the chemical components (proton, helium and iron) with (Tibet-III+YAC-II) is investigated by means of an extensive Monte Carlo simulation in which the secondary particles are propagated through the (Tibet-III+YAC-II) array and an artificial neural network (ANN) method is applied for the primary mass separation. Our simulation shows that the new installation is a powerful means of studying the chemical composition and, in particular, of obtaining the primary energy spectrum of the major component at the knee.
Esteve Sanchez, S.; Gil Conde, M.; Contreras Gonzalez, J. L.; Rosado, J.; Pazyi, V.
2013-07-01
When a gamma-ray beam crosses the border between two media with very different atomic numbers, it produces effects on the dose distribution near the border that are difficult to predict with simple models. The case of gamma rays impinging on lead glass is particularly interesting for its application to commonly used shielding. We are interested in studying the importance of the residual dose behind the shield. (Author)
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
Acceptance testing for clinical use of the Monaco computerized planning system has been performed. The system is based on a virtual model of the energy yield of the linear electron accelerator head and calculates the dose with an x-ray algorithm (XVMC) based on the Monte Carlo method. (Author)
Fayos Ferrer, F.; Antolin Sanmartin, E.; Simon de Blas, R.; Palazon Cano, I.; Bertomeu Padin, T.; Gutierrez Sarraga, J.; Rey Portoles, G.
2011-07-01
In this paper, the Monte Carlo dosimetric code included in the latest versions of the MultiPlan (Accuray) planning system is subjected to various tests. Its results, and those of the Ray-Tracing (RT) algorithm present from the earliest versions, are compared with the experimental results obtained by photographic dosimetry and ionization chamber measurements.
Safigholi, Habib; Faghihi, Reza; Jashni, Somaye Karimi; Meigooni, Ali S. [Faculty of Engineering, Science and Research Branch, Islamic Azad University, Fars, 73481-13111, Persepolis (Iran, Islamic Republic of); Department of Nuclear Engineering and Radiation Research Center, Shiraz University, 71936-16548, Shiraz (Iran, Islamic Republic of); Shiraz University of Medical Sciences, 71348-14336, Shiraz (Iran, Islamic Republic of); Department of Radiation therapy, Comprehensive Cancer Center of Nevada, 3730 South Eastern Avenue, Las Vegas, Nevada 89169 (United States)
2012-04-15
Purpose: The goal of this study is to determine a method for Monte Carlo (MC) characterization of miniature electronic brachytherapy x-ray sources (MEBXS) and to set dosimetric parameters according to the TG-43U1 formalism. TG-43U1 parameters were used to obtain optimal designs of MEBXS. Parameters that affect the dose distribution, such as anode shapes, target thickness, target angles, and electron beam source characteristics, were evaluated. Optimized MEBXS designs were obtained and used to determine radial dose functions and 2D anisotropy functions in the electron energy range of 25-80 keV. Methods: Tungsten anode material was considered in two different geometries, hemispherical and conical-hemisphere. These configurations were analyzed by the 4C MC code with several different optimization techniques. The first optimization compared target thickness layers versus electron energy. These optimized thicknesses were compared with published results by Ihsan et al. [Nucl. Instrum. Methods Phys. Res. B 264, 371-377 (2007)]. The second optimization evaluated electron source characteristics by changing the cathode shapes and electron energies. Electron sources studied included: (1) point sources, (2) uniform cylinders, and (3) nonuniform cylindrical shell geometries. The third optimization was used to assess the apex angle of the conical-hemisphere target. The goal of these optimizations was to produce 2D dose anisotropy functions closer to unity. An overall optimized MEBXS was developed from this analysis. The results obtained from this model were compared to known characteristics of HDR {sup 125}I, LDR {sup 103}Pd, and the Xoft Axxent electronic brachytherapy source (XAEBS) [Med. Phys. 33, 4020-4032 (2006)]. Results: The optimized anode thicknesses as a function of electron energy are fitted by the linear equation Y ({mu}m) = 0.0459 X (keV) - 0.7342. The optimized electron source geometry is obtained for a disk-shaped parallel beam (uniform cylinder) with 0.9 mm radius. The TG-43
Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions
Samuel Livingstone
2014-06-01
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
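The Langevin-diffusion methods surveyed above can be illustrated with a minimal Metropolis-adjusted Langevin algorithm (MALA) in flat Euclidean space; this is a generic sketch, not code from the reviewed paper, and the step size and Gaussian target are illustrative choices.

```python
import numpy as np

def mala(grad_log_p, log_p, x0, step, n_iter, rng):
    """Metropolis-adjusted Langevin algorithm: the proposal follows one
    Euler step of the Langevin diffusion, and a Metropolis correction
    keeps the target density exactly invariant."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        prop = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * noise
        # log-densities of the (asymmetric) forward and reverse proposals
        fwd = -np.sum((prop - x - 0.5 * step * grad_log_p(x)) ** 2) / (2 * step)
        bwd = -np.sum((x - prop - 0.5 * step * grad_log_p(prop)) ** 2) / (2 * step)
        log_alpha = log_p(prop) - log_p(x) + bwd - fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Standard normal target: log p(x) = -x^2/2, grad log p(x) = -x
rng = np.random.default_rng(0)
chain = mala(lambda x: -x, lambda x: -0.5 * np.sum(x ** 2),
             np.array([3.0]), step=0.5, n_iter=5000, rng=rng)
print(chain.mean(), chain.var())
```

The Riemannian-manifold variants discussed in the paper replace the identity preconditioner implicit here with a position-dependent metric.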
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and the investigation of servicing processes. This volume comprises seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
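The computation of definite integrals mentioned above is the textbook illustration of the method of statistical trials: the integral is estimated as the interval length times the average of the integrand at uniformly drawn random points. A minimal sketch (not from the book itself):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the definite integral of f over [a, b] as (b - a) times
    the average of f at n uniformly drawn random points."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the error shrinks like O(1/sqrt(n))
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
print(est)
```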
Virtual detector characterisation with Monte-Carlo simulations
Sukowski, F.; Yaneu Yaneu, J. F.; Salamon, M.; Ebert, S.; Uhlmann, N.
2009-08-01
In the field of X-ray imaging, flat-panel detectors, which convert X-rays into electrical signals, are widely used. For different applications, detectors differ in several specific parameters that can be used for characterizing the detector. At the Development Center X-ray Technology EZRT we studied how well these characteristics can be determined from knowledge of a detector's layer composition alone. In order to determine the required parameters, the Monte Carlo (MC) simulation program ROSI [J. Giersch et al., Nucl. Instr. and Meth. A 509 (2003) 151] was used, taking into account all primary and secondary particle interactions as well as the focal spot size of the X-ray tube. For the study, the Hamamatsu C9311DK [Technical Datasheet Hamamatsu C9311DK flat panel sensor, Hamamatsu Photonics, ( www.hamamatsu.com)], a scintillator-based detector, and the Ajat DIC 100TL [Technical description of Ajat DIC 100TL, Ajat Oy Ltd., ( www.ajat.fi)], a direct-converting semiconductor detector, were used. The layer compositions of the two detectors were implemented into the MC simulation program. The following characteristics were measured [N. Uhlmann et al., Nucl. Instr. and Meth. A 591 (2008) 46] and compared to simulation results: the basic spatial resolution (BSR), the modulation transfer function (MTF), the contrast sensitivity (CS) and the specific material thickness range (SMTR). To take scattering of optical photons into account, DETECT2000 [C. Moisan et al., DETECT2000—A Program for Modeling Optical Properties of Scintillators, Department of Electrical and Computer Engineering, Laval University, Quebec City, 2000], another Monte Carlo simulation program, was used.
Monte Carlo simulations for heavy ion dosimetry
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s_{w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single-precision variables with double-precision variables. The lowest particle-transport cut-off energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s_{w,air}, the influence of fragments and I-values on s_{w,air} for carbon ion beams was investigated. The value of s_{w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398, for an energy of 50 MeV/u. (orig.)
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
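Importance sampling, one of the two tools the book presents, can be sketched on a toy problem: estimating the tail probability P(X > 5) for a standard normal X, which plain Monte Carlo would essentially never hit. The shifted proposal below is a standard illustrative choice, not an example taken from the book.

```python
import math
import random

def rare_tail_is(t, n, seed=0):
    """Importance sampling for P(X > t) with X ~ N(0,1): draw from the
    shifted proposal N(t,1) and reweight by the likelihood ratio
    N(0,1)/N(t,1) = exp(-t*y + t^2/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = rng.gauss(t, 1.0)
        if y > t:
            acc += math.exp(-t * y + 0.5 * t * t)
    return acc / n

# True value: 1 - Phi(5) is about 2.87e-7, far beyond the reach of
# 10^5 plain Monte Carlo samples
est = rare_tail_is(5.0, 100000)
print(est)
```

Centering the proposal on the threshold makes roughly half the draws land in the rare region, and the likelihood-ratio weights keep the estimator unbiased.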
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
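The multilevel idea underlying CMLMC can be illustrated with a plain two-level estimator: the quantity of interest is computed cheaply on a coarse discretization, and a coupled fine/coarse correction term restores fine-level accuracy at a fraction of the cost. The geometric-Brownian-motion example and all parameter values below are illustrative, not from the paper.

```python
import math
import random

def euler_gbm(T, n_steps, rng, mu=0.05, sigma=0.2, x0=1.0):
    """Euler-Maruyama path for dX = mu*X dt + sigma*X dW; returns X_T
    and the Brownian increments so a coarse path can be coupled to it."""
    dt = T / n_steps
    x = x0
    increments = []
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        increments.append(dw)
        x += mu * x * dt + sigma * x * dw
    return x, increments

def mlmc_two_level(T, n0, n1, samples, seed=0):
    """Two-level estimator: E[P_fine] = E[P_coarse] + E[P_fine - P_coarse],
    with the correction estimated on coupled paths sharing the same noise."""
    rng = random.Random(seed)
    level0 = sum(euler_gbm(T, n0, rng)[0] for _ in range(samples)) / samples
    corr, k, dt0 = 0.0, n1 // n0, T / n0
    for _ in range(samples):
        xf, dws = euler_gbm(T, n1, rng)
        xc = 1.0                                 # coarse path, same mu/sigma/x0
        for i in range(n0):
            dw = sum(dws[i * k:(i + 1) * k])     # aggregated fine increments
            xc += 0.05 * xc * dt0 + 0.2 * xc * dw
        corr += xf - xc
    return level0 + corr / samples

# E[X_T] = exp(mu*T) = exp(0.05), about 1.0513, for geometric Brownian motion
est = mlmc_two_level(1.0, 4, 16, 20000)
print(est)
```

The coupling through shared Brownian increments is what makes the variance of the correction term small; CMLMC extends this to many levels and calibrates the per-level sample counts adaptively.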
Monte Carlo Simulations of the Photospheric Process
Santana, Rodolfo; Hernandez, Roberto A; Kumar, Pawan
2015-01-01
We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon-to-electron ratio $N_{\\gamma}/N_{e} = 10^{5}$, as determined by observations of the GRB prompt emission. We searched an exhaustive parameter space to determine whether the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor $\\sim 10$. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak energy. By considering electron re-heating near the photosphere, the spectrum for these simulations h...
Finding Planet Nine: a Monte Carlo approach
Marcos, C de la Fuente
2016-01-01
Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistic...
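A Monte Carlo survey of this kind rests on sampling orbital positions consistently with Kepler's laws: drawing mean anomalies uniformly (i.e. uniformly in time) and solving Kepler's equation automatically weights the synthetic population toward aphelion, where an eccentric body spends most of its time. The sketch below uses the nominal elements quoted above (a = 700 au, e = 0.6) but is otherwise an illustrative reconstruction, not the authors' code.

```python
import math
import random

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = M
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def fraction_beyond(a, e, r_limit, n, seed=0):
    """Sample mean anomalies uniformly (uniform in time) and return the
    fraction of the synthetic population with heliocentric r > r_limit."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        M = rng.uniform(0.0, 2.0 * math.pi)
        E = kepler_E(M, e)
        r = a * (1.0 - e * math.cos(E))   # radius from the eccentric anomaly
        if r > r_limit:
            count += 1
    return count / n

# Nominal Planet Nine orbit: a = 700 au, e = 0.6, so aphelion lies at 1120 au
frac = fraction_beyond(700.0, 0.6, 1000.0, 100000)
print(frac)
```

Under these nominal elements, well over a third of the time-uniform samples sit beyond 1000 au, which is why the survey focuses on the aphelion configuration.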
Atomistic Monte Carlo simulation of lipid membranes.
Wüstner, Daniel; Sklenar, Heinz
2014-01-24
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Parallel Monte Carlo Simulation of Aerosol Dynamics
Kun Zhou
2014-02-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles.
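The stochastic half of such a scheme, the Marcus-Lushnikov coagulation process, can be sketched with Gillespie-style event stepping for the simplest (constant) kernel; the kernel value and population size below are illustrative, and nucleation, surface growth, and MPI parallelism are omitted.

```python
import random

def coagulation_mc(n0, kernel_rate, t_end, seed=0):
    """Marcus-Lushnikov process with a constant coagulation kernel K:
    each of the M*(M-1)/2 particle pairs coalesces at rate K, so events
    are stepped with Gillespie's algorithm (exponential waiting times)."""
    rng = random.Random(seed)
    masses = [1.0] * n0                  # monodisperse initial population
    t = 0.0
    while len(masses) > 1:
        m = len(masses)
        total_rate = kernel_rate * m * (m - 1) / 2.0
        t += rng.expovariate(total_rate)
        if t > t_end:
            break
        i, j = rng.sample(range(m), 2)   # pick a random pair
        masses[i] += masses[j]           # coalesce particle j into i
        masses.pop(j)
    return masses

# Mean-field prediction for the particle count: M(t) = n0 / (1 + K*n0*t/2)
final = coagulation_mc(1000, 0.001, 2.0)
print(len(final), sum(final))
```

Total mass is conserved exactly by construction, while the particle count fluctuates around the mean-field prediction; this is the low-order-moment accuracy with few MC particles that the abstract refers to.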
Monte Carlo simulations of Protein Adsorption
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or helix-to-random-coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of the adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Monte Carlo simulations of the NIMROD diffractometer
Botti, A. [University of Roma TRE, Rome (Italy)]. E-mail: botti@fis.uniroma3.it; Ricci, M.A. [University of Roma TRE, Rome (Italy); Bowron, D.T. [ISIS-Rutherford Appleton Laboratory, Chilton (United Kingdom); Soper, A.K. [ISIS-Rutherford Appleton Laboratory, Chilton (United Kingdom)
2006-11-15
The near and intermediate range order diffractometer (NIMROD) has been selected as a day-one instrument on the second target station at ISIS. Uniquely, NIMROD will provide continuous access to particle separations ranging from the interatomic (<1 Å) to the mesoscopic (<300 Å). This instrument is mainly designed for structural investigations, although the possibility of putting a Fermi chopper (and corresponding NIMONIC chopper) in the incident beam line will potentially allow low-resolution inelastic scattering measurements. The performance characteristics of the TOF diffractometer have been simulated by means of a series of Monte Carlo calculations. In particular, the flux as a function of the transferred momentum Q, as well as the resolution in Q and transferred energy, have been estimated. Moreover, the possibility of including a honeycomb collimator in order to achieve better resolution has been tested. Here, we present the design of this diffractometer, which will bridge the gap between wide- and small-angle neutron scattering experiments.
Monte Carlo Simulation of River Meander Modelling
Posner, A. J.; Duan, J. G.
2010-12-01
This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. Then, the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The model combines a quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.
Commensurabilities between ETNOs: a Monte Carlo survey
de la Fuente Marcos, C.; de la Fuente Marcos, R.
2016-07-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.
Diffusion Monte Carlo in internal coordinates.
Petit, Andrew S; McCoy, Anne B
2013-08-15
An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H₃⁺ and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H₂D⁺ and D₂H⁺ despite both molecules being highly fluxional.
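The basic DMC machinery that such extensions build on can be sketched in one Cartesian coordinate for the harmonic oscillator, where the exact ground-state energy is 1/2 in atomic units; this is a generic textbook-style sketch of unguided DMC, not the internal-coordinate method of the paper.

```python
import math
import random

def dmc_harmonic(n_target, dt, n_steps, burn_in, seed=0):
    """Minimal diffusion Monte Carlo for V(x) = x^2/2 in atomic units.
    Walkers diffuse freely; a branching weight exp(-(V - E_ref)*dt)
    creates or destroys copies, and E_ref is steered to hold the
    population near n_target.  With a constant trial function, the
    walker-averaged potential is a mixed estimator of E_0."""
    rng = random.Random(seed)
    walkers = [0.0] * n_target
    e_ref = 0.5                                      # initial guess only
    e_sum, e_count = 0.0, 0
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))       # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)
            copies = min(int(w + rng.random()), 3)   # stochastic rounding
            new.extend([x] * copies)
        walkers = new
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        # population control: nudge E_ref to hold ~n_target walkers
        e_ref = v_mean + 1.0 - len(walkers) / n_target
        if step >= burn_in:
            e_sum += v_mean
            e_count += 1
    return e_sum / e_count

e0 = dmc_harmonic(1000, 0.01, 1500, 500)
print(e0)   # the exact ground-state energy is 0.5
```

The internal-coordinate extension in the paper replaces the Cartesian diffusion step with moves in bond and torsion coordinates, leaving the branching logic essentially unchanged.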
Monte Carlo simulations for focusing elliptical guides
Valicu, Roxana [FRM2 Garching, Muenchen (Germany); Boeni, Peter [E20, TU Muenchen (Germany)
2009-07-01
The aim of the Monte Carlo simulations using the McStas program was to improve the focusing of the neutron beam existing at PGAA (FRM II) by extending the existing elliptic guide (now coated with supermirrors with m = 3) with a new section. First we tried an initial length of the additional guide of 7.5 cm and neutron-guide coatings of supermirrors with m = 4, 5, and 6. The gain (calculated by dividing the intensity at the focal point after adding the guide by the intensity at the focal point with the initial guide) obtained for these coatings indicated that a coating with m = 5 would be appropriate for a first trial. The next step was to vary the length of the additional guide for this m value and thereby choose the appropriate length for the maximal gain. With the m value and the length of the guide fixed, we introduced an aperture 1 cm before the focal point and varied the radius of this aperture in order to obtain a focused beam. We observed a dramatic decrease in the size of the beam at the focal point after introducing this aperture. The simulation results, the gains obtained and the evolution of the beam size will be presented.
Monte Carlo Production Management at CMS
Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni
2015-01-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...
Monte Carlo models of dust coagulation
Zsom, Andras
2010-01-01
The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth has two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...
Measuring Berry curvature with quantum Monte Carlo
Kolodrubetz, Michael
2014-01-01
The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well studied in the regimes of few-body physics and non-interacting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to compute it. In this paper we fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as the leading correction to imaginary time ramps. We demonstrate our algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is non-integrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, we show that our algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians...
Un-Hong Wong
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating large-scale effects, such as the reflection from the topography of the lunar soil, and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.
Monte Carlo simulation experiments on box-type radon dosimeter
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-01
Epidemiological studies show that inhalation of radon gas (²²²Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure ²²²Rn concentrations (Bq/m³) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that result in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only upon the dimensions of the dosimeter, while ηhit depends both upon the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box-type dosimeter. This paper explains how radon concentration from the
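The ray-hitting idea can be sketched directly: emit alphas at uniform random points in the box with isotropic directions, and count those whose straight-line track reaches the detector face within the alpha range in air. The geometry below (detector occupying the full z = 0 face, straight tracks, no registration-angle criterion) and all dimensions are illustrative assumptions, not the authors' exact setup.

```python
import math
import random

def hit_efficiency(a, b, h, alpha_range, n, seed=0):
    """Ray-hitting (RAHI-style) estimate of the alpha hit efficiency:
    emit alphas at uniform random points inside an a x b x h box with
    isotropic directions and count tracks that reach the z = 0 plane
    (where the CR-39 chip is assumed to cover the whole face) within
    the alpha range in air."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = rng.uniform(0, a), rng.uniform(0, b), rng.uniform(0, h)
        u = rng.uniform(-1.0, 1.0)              # cos(theta), isotropic
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        dx, dy, dz = s * math.cos(phi), s * math.sin(phi), u
        if dz >= 0.0:
            continue                            # moving away from the chip
        t = -z / dz                             # path length to z = 0
        if t > alpha_range:
            continue                            # alpha stops in air first
        xi, yi = x + t * dx, y + t * dy
        if 0.0 <= xi <= a and 0.0 <= yi <= b:   # lands on the detector face
            hits += 1
    return hits / n

# Hypothetical 5 x 5 x 3 cm box and a 4 cm alpha range in air
eff = hit_efficiency(5.0, 5.0, 3.0, 4.0, 200000)
print(eff)
```

Shrinking the box diagonal below the alpha range removes the `t > alpha_range` rejection, which is the geometric content of the 100% hit-efficiency conclusion above.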
Grau Malonda, A.; Garcia-Torano, E.
1983-07-01
Interaction and absorption probabilities for gamma rays with energies between 1 and 1000 keV have been computed and tabulated. A toluene-based scintillator solution was assumed in the computation, with both point sources and homogeneously dispersed radioactive material considered. These tables may be applied to cylinders with radii between 0.25 cm and 1.25 cm and heights between 0.20 cm and 4.07 cm. (Author) 26 refs.
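The tabulated quantity can be illustrated with a small Monte Carlo sketch (a simplified model assumed here, not the authors' code): a point source at the centre of a cylinder emits gamma rays isotropically, and the interaction probability along each ray follows exponential attenuation with coefficient mu.

```python
import math, random

def interaction_probability(radius, height, mu, n_samples=100_000, seed=1):
    """Monte Carlo estimate of the probability that a gamma ray emitted
    isotropically from the centre of a cylinder interacts before escaping.
    mu is the linear attenuation coefficient (1/cm)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # isotropic direction: cos(theta) uniform on [-1, 1]
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # distance to the curved wall and to the flat end caps
        to_wall = radius / sin_t if sin_t > 1e-12 else float("inf")
        to_cap = (height / 2) / abs(cos_t) if abs(cos_t) > 1e-12 else float("inf")
        path = min(to_wall, to_cap)
        total += 1.0 - math.exp(-mu * path)  # analog interaction probability
    return total / n_samples

p = interaction_probability(radius=1.25, height=4.07, mu=0.1)
```

The estimate averages the exact per-ray probability 1 - exp(-mu L) instead of flagging discrete interactions, which reduces variance at no extra cost.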
Kato, Hideki; Minami, Kazuyuki; Asada, Yasuki; Suzuki, Shoichi
2016-05-01
To obtain the patient entrance surface dose in X-ray radiography, a calculation based on the measured exposure or air kerma radiated from the X-ray tube is generally used. Two factors are necessary for this calculation: (1) an exposure/air-kerma to absorbed-dose conversion factor and (2) a backscatter factor (BSF) that depends on X-ray beam quality and field size. BSFs are commonly obtained by interpolation from existing data, which were given for a water phantom with a flat entrance surface. Since the patient's surface in radiography is not flat, some error may occur when an existing BSF is used in this calculation. In this article, BSFs for water phantoms with cylindrical and elliptic-cylinder surfaces were calculated by means of Monte Carlo simulation and compared with the BSF for a flat-surface phantom. The difference from the flat-surface BSF tends to grow as (1) the radius of curvature of the cylindrical phantom, or the horizontal axis of the elliptic-cylinder phantom, becomes smaller, (2) the half-value layer of the X-ray beam becomes larger, and (3) the field size becomes larger. The maximum difference under the calculation conditions assumed in this article exceeded 10%. The difference arises because the scattering volume in the irradiated body of a cylindrical or elliptic-cylinder phantom is smaller than that of a flat-surface phantom. To obtain the patient entrance surface dose more precisely, it is necessary to use BSFs calculated by Monte Carlo simulation for phantoms resembling the patient's body, such as cylindrical or elliptic-cylinder phantoms.
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
EXTENDED MONTE CARLO LOCALIZATION ALGORITHM FOR MOBILE SENSOR NETWORKS
Anonymous
2008-01-01
A real-world localization system for wireless sensor networks that accommodates mobility and an irregular radio propagation model is considered. Traditional range-based techniques and recent range-free localization schemes are not well suited to localization in mobile sensor networks, whereas the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to the localization problem. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a larger number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for the ideal radio model but also for irregular ones.
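The weighted-particle idea behind Monte Carlo localization can be sketched as follows (a hypothetical 2-D range-based setup of my own, not the Ext-MCL algorithm itself): particles scattered over the deployment area are weighted by how well their distances to known anchors match noisy range measurements.

```python
import math, random

def mc_localize(anchors, ranges, noise_sd=2.0, n_particles=5000, area=100.0, seed=7):
    """One Monte Carlo localization step: weight uniformly scattered
    candidate positions (particles) by the Gaussian likelihood of the
    observed anchor ranges, then return the weighted posterior mean."""
    rng = random.Random(seed)
    particles = [(rng.uniform(0, area), rng.uniform(0, area))
                 for _ in range(n_particles)]
    weights = []
    for x, y in particles:
        # squared mismatch between predicted and measured ranges
        sq_err = sum((math.hypot(x - ax, y - ay) - r) ** 2
                     for (ax, ay), r in zip(anchors, ranges))
        weights.append(math.exp(-sq_err / (2 * noise_sd ** 2)))
    total = sum(weights)
    ex = sum(w * p[0] for w, p in zip(weights, particles)) / total
    ey = sum(w * p[1] for w, p in zip(weights, particles)) / total
    return ex, ey

# Node at (30, 60) ranging against three corner anchors
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = (30.0, 60.0)
ranges = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
est = mc_localize(anchors, ranges)
```

A full MCL filter would also propagate the particles with a motion model and resample between steps; only the measurement update is shown here.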
On the Markov Chain Monte Carlo (MCMC) method
Rajeeva L Karandikar
2006-04-01
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
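A minimal random-walk Metropolis sampler (an illustrative sketch, not taken from the article) shows the key point that the target density need only be known up to a normalizing constant:

```python
import math, random

def metropolis(log_target, x0=0.0, n_steps=50_000, step=1.0, seed=0):
    """Random-walk Metropolis: samples a 1-D density specified only
    through the log of an unnormalized target."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)                    # symmetric proposal
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y                                       # accept; else keep x
        out.append(x)
    return out

# Standard normal target, unnormalized: log pi(x) = -x^2 / 2
samples = metropolis(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because only the log-ratio of target values enters the accept/reject step, the (unknown) normalizing constant cancels, which is what makes MCMC useful for indirectly specified distributions.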
Bayesian phylogeny analysis via stochastic approximation Monte Carlo.
Cheon, Sooyoung; Liang, Faming
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.
Monte Carlo techniques for analyzing deep penetration problems
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
Monte Carlo simulations: Hidden errors from "good" random number generators
Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna
1992-12-01
The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.
An Introduction to Multilevel Monte Carlo for Option Valuation
Higham, Desmond J
2015-01-01
Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers a speed-up by a factor of the inverse of epsilon, where epsilon is the required accuracy; computations can thus run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers, and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option valuation.
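The telescoping idea can be sketched for a European call under geometric Brownian motion (an illustrative toy with an ad-hoc per-level sample allocation; Giles' method chooses the level count and sample sizes optimally from estimated variances):

```python
import math, random

def mlmc_gbm_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                  L=4, N0=20_000, seed=3):
    """Multilevel Monte Carlo estimate of a European call price under GBM.
    Level l uses 2^l Euler steps; each fine path is coupled to a coarse
    path with half the steps by summing pairs of Brownian increments."""
    rng = random.Random(seed)

    def payoff(s):
        return math.exp(-r * T) * max(s - K, 0.0)

    total = 0.0
    for l in range(L + 1):
        nf = 2 ** l                       # fine-grid steps on level l
        dt = T / nf
        n_samp = max(N0 // 2 ** l, 100)   # ad-hoc allocation for illustration
        level_sum = 0.0
        for _ in range(n_samp):
            sf = sc = S0                  # fine and coarse paths, coupled
            inc = 0.0
            for step in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dt))
                sf += r * sf * dt + sigma * sf * dw
                inc += dw
                if step % 2 == 1:         # coarse path reuses the same
                    sc += r * sc * 2 * dt + sigma * sc * inc
                    inc = 0.0             # Brownian increments, summed
            level_sum += payoff(sf) if l == 0 else payoff(sf) - payoff(sc)
        total += level_sum / n_samp
    return total
```

Because the coupled fine and coarse paths share Brownian increments, the level corrections have small variance and need few samples, while the many cheap samples sit on the coarsest level; the level sums telescope to the finest-grid expectation.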
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
1995-01-01
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
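The classical Hill estimator on which the bootstrap version builds can be sketched as follows (standard textbook form; the paper's correlation-exponent variant is not reproduced here):

```python
import math, random

def hill_estimator(data, k):
    """Classical Hill estimator of the tail index alpha from the k
    largest order statistics."""
    xs = sorted(data, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return 1.0 / (sum(logs) / k)

# Pareto(alpha = 2) sample via inverse-CDF sampling: X = U^(-1/alpha)
rng = random.Random(42)
sample = [rng.random() ** (-1 / 2.0) for _ in range(20_000)]
alpha_hat = hill_estimator(sample, k=500)
```

For an exact Pareto tail the estimator is consistent; the practical difficulty, which motivates resampling approaches, is choosing k to balance bias against variance.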
Using Supervised Learning to Improve Monte Carlo Integral Estimation
Tracey, Brendan; Alonso, Juan J
2011-01-01
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
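The flavor of variational Monte Carlo can be conveyed with a textbook toy far below nuclear-physics scale (my own sketch): a 1-D harmonic oscillator with a Gaussian trial wavefunction, whose variational energy is minimized exactly at the ground state.

```python
import math, random

def vmc_energy(alpha, n_steps=20_000, step=1.0, seed=5):
    """Variational Monte Carlo for the 1-D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction exp(-alpha x^2):
    Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, energy = 0.0, 0.0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)
        # Metropolis ratio |psi(y)/psi(x)|^2 = exp(-2 alpha (y^2 - x^2))
        if rng.random() < math.exp(-2 * alpha * (y * y - x * x)):
            x = y
        # local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)
        energy += alpha + x * x * (0.5 - 2 * alpha * alpha)
    return energy / n_steps

# alpha = 0.5 makes the trial function the exact ground state (E = 1/2);
# any other alpha gives a higher variational energy.
```

At the exact ground state the local energy is constant, so the Monte Carlo variance vanishes; this "zero-variance principle" is what makes good trial wavefunctions (with Jastrow factors, etc.) so valuable in realistic calculations.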
Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN
Waskita, A. A.; Akbar, Z.; Handoko, L. T. (DOI: 10.1063/1.3462759)
2010-01-01
The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.
Radiative Equilibrium and Temperature Correction in Monte Carlo Radiation Transfer
Bjorkman, J. E.; Wood, Kenneth
2001-01-01
We describe a general radiative equilibrium and temperature correction procedure for use in Monte Carlo radiation transfer codes with sources of temperature-independent opacity, such as astrophysical dust. The technique utilizes the fact that Monte Carlo simulations track individual photon packets, so we may easily determine where their energy is absorbed. When a packet is absorbed, it heats a particular cell within the envelope, raising its temperature. To enforce radiative equilibrium, the ...
Chemical accuracy from quantum Monte Carlo for the Benzene Dimer
Azadi, Sam; Cohen, R. E
2015-01-01
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory (DFT) using different van der Waals (vdW) functionals. In our QMC calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced (PD) geometry, and fin...
de Finetti Priors using Markov chain Monte Carlo computations.
Bacallado, Sergio; Diaconis, Persi; Holmes, Susan
2015-07-01
Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases.
Event-chain Monte Carlo for classical continuous spin models
Michel, Manon; Mayer, Johannes; Krauth, Werner
2015-10-01
We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.
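For context, the local Monte Carlo baseline mentioned in the abstract looks like this for the 2-D XY model (a plain Metropolis sketch of my own, not the event-chain algorithm):

```python
import math, random

def xy_local_mc(L=8, beta=1.0, sweeps=200, seed=19):
    """Local-update Metropolis for the 2-D XY model on an L x L periodic
    lattice of angles; returns the mean energy per site after `sweeps`
    full lattice sweeps."""
    rng = random.Random(seed)
    theta = [[rng.uniform(0, 2 * math.pi) for _ in range(L)] for _ in range(L)]

    def site_energy(i, j, t):
        # energy of a spin with angle t against its four neighbours
        return -sum(math.cos(t - theta[(i + di) % L][(j + dj) % L])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                new = theta[i][j] + rng.uniform(-1.0, 1.0)
                dE = site_energy(i, j, new) - site_energy(i, j, theta[i][j])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    theta[i][j] = new
    # total energy, counting each bond once (right and down neighbours)
    e = sum(-math.cos(theta[i][j] - theta[(i + 1) % L][j])
            - math.cos(theta[i][j] - theta[i][(j + 1) % L])
            for i in range(L) for j in range(L))
    return e / (L * L)
```

Near criticality such single-spin moves decorrelate slowly; the abstract's point is that event-chain moves beat this baseline by two orders of magnitude while remaining slower than Wolff clusters.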
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two-dimensional electron gas.
Study of the Transition Flow Regime using Monte Carlo Methods
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Monte Carlo Simulation of Optical Properties of Wake Bubbles
CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei
2007-01-01
Based on Mie scattering theory and the theory of multiple light scattering, the light scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is enhanced obviously due to the existence of bubbles, especially with the increase of bubble density, and that it is feasible to use the Monte Carlo method to study the properties of light scattering by air bubbles.
Successful combination of the stochastic linearization and Monte Carlo methods
Elishakoff, I.; Colombi, P.
1993-01-01
A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, reducing the error of conventional stochastic linearization by a factor of 4.6.
Confidence and efficiency scaling in variational quantum Monte Carlo calculations
Delyon, F.; Bernu, B.; Holzmann, Markus
2017-02-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
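The equilibrium check via the Kolmogorov-Smirnov test can be illustrated by comparing two halves of a sampler's output (a schematic of the general idea, not the authors' procedure):

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, computed by merging the sorted samples."""
    a, b = sorted(a), sorted(b)
    ia = ib = 0
    d = 0.0
    while ia < len(a) and ib < len(b):
        if a[ia] <= b[ib]:
            ia += 1
        else:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

rng = random.Random(17)
# An equilibrated chain: both halves come from the same distribution
chain = [rng.gauss(0.0, 1.0) for _ in range(4000)]
d_eq = ks_statistic(chain[:2000], chain[2000:])
# A drifting (non-equilibrated) chain: the halves differ systematically
drift = [i / 1000.0 + rng.gauss(0.0, 1.0) for i in range(4000)]
d_drift = ks_statistic(drift[:2000], drift[2000:])
```

A small statistic is consistent with equilibrium, while a drifting chain produces a large one; in practice the comparison must also account for autocorrelation, which inflates the effective KS threshold.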
Monte Carlo methods for light propagation in biological tissues
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2016-01-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis–Hastings algori...
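The photon transport described here can be caricatured by a slab model (a simplified sketch of my own with isotropic scattering and survival weighting; the paper's fiber geometry and variance reduction schemes are not reproduced):

```python
import math, random

def slab_transmission(mu_s, mu_a, thickness, n_photons=20_000, seed=9):
    """Monte Carlo estimate of diffuse transmission through a homogeneous
    slab: exponential free paths with mu_t = mu_s + mu_a, isotropic
    scattering, and absorption handled by weighting each photon by the
    single-scattering albedo."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, cos_t, weight = 0.0, 1.0, 1.0   # launch straight into the slab
        while True:
            z += -math.log(rng.random()) / mu_t * cos_t  # next collision
            if z >= thickness:
                transmitted += weight       # escaped through the back face
                break
            if z < 0.0:
                break                       # back-reflected out of the slab
            weight *= albedo                # absorption via weight reduction
            if weight < 1e-4:
                break                       # crude low-weight cutoff
            cos_t = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return transmitted / n_photons
```

Real tissue codes replace the isotropic phase function with an anisotropic one (e.g. Henyey-Greenstein) and use Russian roulette instead of a hard weight cutoff, but the absorption-by-weighting trick shown here is the standard variance reduction starting point.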
Multiscale Monte Carlo equilibration: pure Yang-Mills theory
Endres, Michael G; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V
2015-01-01
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Monte Carlo method for solving a parabolic problem
Tian Yi
2016-01-01
Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method: we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve this system. To illustrate the usefulness of this technique, we apply it to some test problems.
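The step of solving linear algebraic equations by random sampling can be sketched with the classical Neumann-Ulam random-walk estimator (a hypothetical 2x2 system for illustration; the paper's system would come from the Crank-Nicolson discretization):

```python
import random

def mc_linear_solve(B, f, i, n_walks=20_000, max_len=50, seed=11):
    """Estimate component i of the solution of x = B x + f by random
    walks (Neumann-Ulam scheme with uniform transitions; requires the
    spectral radius of B to be below 1 so the Neumann series converges)."""
    rng = random.Random(seed)
    n = len(f)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, f[i]
        for _ in range(max_len):
            nxt = rng.randrange(n)
            weight *= B[state][nxt] * n    # importance correction for
            state = nxt                    # the uniform transition choice
            score += weight * f[state]
        total += score
    return total / n_walks

# x = B x + f with a small contraction B; exact solution solves (I - B) x = f
B = [[0.1, 0.2], [0.3, 0.1]]
f = [1.0, 2.0]
x0 = mc_linear_solve(B, f, 0)   # exact value: 26/15 = 1.7333...
```

Each walk gives an unbiased sample of the truncated Neumann series sum_k (B^k f)_i, so individual solution components can be estimated without factorizing the matrix, which is the appeal for large sparse systems.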
MONTE CARLO SIMULATION OF CHARGED PARTICLE IN AN ELECTRONEGATIVE PLASMA
L SETTAOUTI
2003-12-01
Full Text Available Interest in radio frequency (rf) discharges has grown tremendously in recent years due to their importance in microelectronic technologies. Especially interesting are the properties of discharges in electronegative gases, which are most frequently used for technological applications. Monte Carlo simulations have become increasingly important as a simulation tool, particularly in the area of plasma physics. In this work, we present some detailed properties of rf plasmas in SF6 obtained with a Monte Carlo simulation code.
Monte Carlo Volcano Seismic Moment Tensors
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelength of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Quantum Monte Carlo with directed loops.
Syljuåsen, Olav F; Sandvik, Anders W
2002-10-01
We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L x L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: chi{sub perpendicular} = 0.0659 +/- 0.0002.
Monte Carlo simulation of large electron fields
Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto
2008-03-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-02-21
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Monte Carlo implementation of polarized hadronization
Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.
2017-01-01
We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.
kmos: A lattice kinetic Monte Carlo framework
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
Perturbation Monte Carlo methods for tissue structure alterations.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome
2013-01-01
This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients, however, the phase function can not be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
A Survey on Multilevel Monte Carlo for European Options
Masoud Moharamnejad
2016-03-01
Full Text Available One of the most applicable and common methods for pricing options is the Monte Carlo simulation. Among the advantages of this method are its ease of use and its suitability for different types of options, including vanilla options and exotic options. However, the root-mean-square error of the Monte Carlo estimator converges only as O(N^{-1/2}), so that achieving an accuracy of ε with an Euler-discretized path simulation requires a computational complexity of order ε^{-3}. Thus, various methods have been proposed within the Monte Carlo framework to increase the convergence rate, known as variance reduction methods. One of the more recent, proposed by Giles in 2006, is the multilevel Monte Carlo method. This method reduces the computational complexity to O(ε^{-2}(log ε)^2) when used with the Euler discretization and to O(ε^{-2}) when used with the Milstein discretization, and it can also be combined with other variance reduction methods. In this article, multilevel Monte Carlo using the Euler and Milstein discretization methods is adopted to compare computational complexity with the standard Monte Carlo method in pricing European call options.
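The multilevel telescoping identity can be illustrated with a minimal sketch for a European call under geometric Brownian motion. All parameters (S0, K, r, sigma, T) are arbitrary illustrative choices, not values from the survey; the essential point is that coarse and fine paths on each level share the same Brownian increments, so the level corrections have small variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Black-Scholes parameters for the illustration only.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def call_payoff(S):
    """Discounted European call payoff."""
    return np.exp(-r * T) * np.maximum(S - K, 0.0)

def mlmc_estimate(levels=5, n_paths=20000, m=2):
    """Multilevel identity: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    # Level 0: a single Euler step over the whole interval.
    dW0 = rng.normal(0.0, np.sqrt(T), n_paths)
    S = S0 + r * S0 * T + sigma * S0 * dW0
    est = call_payoff(S).mean()
    for l in range(1, levels):
        nf, nc = m**l, m**(l - 1)          # fine / coarse step counts
        dt_f, dt_c = T / nf, T / nc
        # Shared Brownian increments couple the two discretisations, so the
        # correction term P_l - P_{l-1} has small variance.
        dW = rng.normal(0.0, np.sqrt(dt_f), (n_paths, nf))
        Sf = np.full(n_paths, S0)
        for i in range(nf):
            Sf = Sf + r * Sf * dt_f + sigma * Sf * dW[:, i]
        dWc = dW.reshape(n_paths, nc, m).sum(axis=2)   # summed fine increments
        Sc = np.full(n_paths, S0)
        for i in range(nc):
            Sc = Sc + r * Sc * dt_c + sigma * Sc * dWc[:, i]
        est += (call_payoff(Sf) - call_payoff(Sc)).mean()
    return est
```

For these parameters the Black-Scholes price is about 10.45, and the multilevel estimate lands near it; in a production estimator the number of paths would also be chosen per level, which this sketch omits.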
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
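The trade-off the paper analyses can be made concrete with a toy compliance check. The bias, margin, and USL recipe below are illustrative assumptions (one common formulation), not numbers from the paper.

```python
# Illustrative benchmarking outcomes (assumed values, not from the paper):
bias = -0.005          # validation bias in calculated k-eff
sigma_bias = 0.003     # standard deviation of the bias
margin = 0.05          # administrative margin of subcriticality

# One common recipe for the upper subcritical limit (USL); about 0.942 here.
usl = 1.0 + bias - sigma_bias - margin

def complies(k_calc, sigma_calc, n_sigma=2):
    """Accept only if k_calc plus n_sigma calculational std devs is below the USL."""
    return k_calc + n_sigma * sigma_calc < usl
```

With these numbers a case with k_calc = 0.930 and sigma_calc = 0.002 complies (0.934 < 0.942), while k_calc = 0.940 with the same sigma does not. Shrinking the calculational standard deviation buys back margin but costs computer time, which is exactly the trade-off whose optimum the paper characterises.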
Bayesian Optimal Experimental Design Using Multilevel Monte Carlo
Issaid, Chaouki Ben
2015-01-07
Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which is the expected logarithm of the ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of this nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique on a numerical example of sensor deployment design for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, not the definitive source. Users should consult one of the authors for guidance on how to find the most current information for their needs.
High-Pressure Hydrogen Sulfide by Diffusion Quantum Monte Carlo
Azadi, Sam
2016-01-01
We use the diffusion quantum Monte Carlo method to revisit the enthalpy-pressure phase diagram of the various products of the different proposed decompositions of H$_2$S at pressures above 150~GPa. Our results entail a revision of the ground-state enthalpy-pressure phase diagram. Specifically, we find that the C2/c HS$_2$ structure persists up to 440~GPa before undergoing a phase transition into the C2/m phase. Contrary to density functional theory, our calculations suggest that the C2/m phase of HS is more stable than the I4$_1$/amd HS structure over the whole pressure range from 150 to 400 GPa. Moreover, we predict that the Im-3m phase is the most likely candidate for H$_3$S, which is consistent with recent experimental x-ray diffraction measurements.
Monte Carlo systems used for treatment planning and dose verification
Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)
2017-04-15
General-purpose radiation transport Monte Carlo codes have been used to estimate the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To bring radiation transport Monte Carlo codes into routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations.
Reducing quasi-ergodicity in a double well potential by Tsallis Monte Carlo simulation
Iwamatsu, Masao; Okabe, Yutaka
2000-01-01
A new Monte Carlo scheme based on Tsallis's generalized statistical mechanics is applied to a simple double well potential to calculate the canonical thermal average of the potential energy. Although we observed serious quasi-ergodicity when using the standard Metropolis Monte Carlo algorithm, this problem is largely reduced by the use of the new Monte Carlo algorithm. Therefore ergodicity is ensured even for short Monte Carlo runs if we use this new canonical Monte Carlo scheme.
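The idea can be sketched with the commonly used generalised Tsallis weight w_q(x) = [1 - (1-q)·beta·V(x)]^{1/(1-q)}, which reduces to the Boltzmann weight exp(-beta·V) as q → 1. The quartic double well and all parameter values below are illustrative assumptions, not the paper's own model; for q > 1 the barrier weight decays only as a power law, so the walker crosses between wells far more readily than under the Boltzmann weight.

```python
import numpy as np

rng = np.random.default_rng(1)

def V(x):
    """Quartic double well with minima at x = +/-1 (illustrative choice)."""
    return (x**2 - 1.0)**2

def tsallis_weight(x, beta, q):
    """Generalised weight [1-(1-q)*beta*V]^(1/(1-q)); -> exp(-beta*V) as q -> 1."""
    arg = 1.0 - (1.0 - q) * beta * V(x)
    return arg**(1.0 / (1.0 - q)) if arg > 0.0 else 0.0

def run(beta=5.0, q=1.5, steps=20000, step=0.5):
    x = -1.0                                   # start in the left well
    w = tsallis_weight(x, beta, q)
    xs = np.empty(steps)
    for n in range(steps):
        xp = x + rng.uniform(-step, step)
        wp = tsallis_weight(xp, beta, q)
        if w == 0.0 or rng.random() < wp / w:  # Metropolis rule on the weights
            x, w = xp, wp
        xs[n] = x
    return xs

xs = run()
visited_both = bool((xs > 0.5).any() and (xs < -0.5).any())
```

With q = 1.5 the walk visits both wells within a few thousand steps; with the plain Boltzmann weight at the same beta, crossings are exponentially suppressed, which is the quasi-ergodicity the paper addresses.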
Finding organic vapors - a Monte Carlo approach
Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku
2010-05-01
drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.
Coherent Scattering Imaging Monte Carlo Simulation
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Contrast also increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5 × 0.5 × 0.5 cm³ was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed of the effect of breast density and breast thickness.
Maucec, M.; Rigollet, C.
2004-01-01
The performance of a detection system based on the pulsed fast/thermal neutron analysis technique was assessed using Monte Carlo simulations. The aim was to develop and implement simulation methods, to support and advance the data analysis techniques of the characteristic gamma-ray spectra, potentia
An unbiased Hessian representation for Monte Carlo PDFs
Carrazza, Stefano; Forte, Stefano [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano (Italy); Kassabov, Zahari [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino (Italy); Latorre, Jose Ignacio [Universitat de Barcelona, Departament d' Estructura i Constituents de la Materia, Barcelona (Spain); Rojo, Juan [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom)
2015-08-15
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set. (orig.)
Monte Carlo evaluation of kerma in an HDR brachytherapy bunker
Perez-Calatayud, J [Department of Atomic, Molecular and Nuclear Physics, and IFIC, CSIC-University of Valencia, Burjassot (Spain); Granero, D [Department of Atomic, Molecular and Nuclear Physics, and IFIC, CSIC-University of Valencia, Burjassot (Spain); Ballester, F [Department of Atomic, Molecular and Nuclear Physics, and IFIC, CSIC-University of Valencia, Burjassot (Spain); Casal, E [Department of Atomic, Molecular and Nuclear Physics, and IFIC, CSIC-University of Valencia, Burjassot (Spain); Crispin, V [FIVO, Fundacion Instituto Valenciano De OncologIa, Valencia (Spain); Puchades, V [Grupo IMO-SFA, Madrid (Spain); Leon, A [Department of Chemistry and Nuclear Engineering, Polytechnic University of Valencia, Valencia (Spain); Verdu, G [Department of Chemistry and Nuclear Engineering, Polytechnic University of Valencia, Valencia (Spain)
2004-12-21
In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) brachytherapy to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical results with the Monte Carlo ones shows that those obtained using the albedo coefficient from the ICRP document most closely match the Monte Carlo values, although the maximum value given by the MC calculations is 30% greater. (note)
Meric, N; Bor, D
1999-01-01
Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam of 67 kVp. Simulation of the experiment has been carried out by the Monte Carlo technique under the same input conditions. Comparison of the measured and predicted data with each other and with the previously reported values has been given. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer.
Monte Carlo studies of model Langmuir monolayers.
Opps, S B; Yang, B; Gray, C G; Sullivan, D E
2001-04-01
This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Vectorizing and macrotasking Monte Carlo neutral particle algorithms
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. On future machines it may be possible to take macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
Properties of Reactive Oxygen Species by Quantum Monte Carlo
Zen, Andrea; Guidoni, Leonardo
2014-01-01
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function.
LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events
Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V
2008-01-01
In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, the modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase intended mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
The Monte Carlo method in quantum field theory
Morningstar, C
2007-01-01
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
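The local Metropolis updating described in the lectures can be sketched for a free real scalar field on a one-dimensional periodic lattice. The lattice size, bare mass, and proposal width below are arbitrary illustrative choices, not taken from the lecture notes; only the terms of the action involving the updated site need to be recomputed per proposal, which is what makes local updating cheap.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy parameters: lattice sites, bare mass^2, proposal width.
N, m2, delta = 32, 1.0, 1.0
phi = np.zeros(N)

def local_action(phi, i):
    """Terms of the lattice action involving site i (periodic boundaries)."""
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    return 0.5*(phi[i] - left)**2 + 0.5*(right - phi[i])**2 + 0.5*m2*phi[i]**2

def sweep(phi):
    """One Metropolis sweep: propose a local change at every site in turn."""
    for i in range(N):
        old, s_old = phi[i], local_action(phi, i)
        phi[i] = old + rng.uniform(-delta, delta)
        # Accept with probability min(1, exp(-dS)); otherwise restore.
        if rng.random() >= np.exp(-(local_action(phi, i) - s_old)):
            phi[i] = old

for _ in range(200):          # thermalisation sweeps
    sweep(phi)
vals = []
for _ in range(500):          # measurement sweeps
    sweep(phi)
    vals.append((phi**2).mean())
mean_phi2 = float(np.mean(vals))
# For the free field with m^2 = 1 the infinite-volume lattice value of
# <phi^2> is 1/sqrt(m^2*(m^2+4)) ~ 0.447, a useful sanity check.
```

A microcanonical overrelaxation step of the kind advocated in the lectures would be added alongside the Metropolis update to reduce autocorrelations; the sketch above shows only the basic accept/reject machinery.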
TAKING THE NEXT STEP WITH INTELLIGENT MONTE CARLO
Booth, T.E.; Carlson, J.A. [and others
2000-10-01
For many scientific calculations, Monte Carlo is the only practical method available. Unfortunately, standard Monte Carlo methods converge slowly, with the error decreasing only as the inverse square root of the computer time. We have shown, both numerically and theoretically, that the convergence rate can be increased dramatically if the Monte Carlo algorithm is allowed to adapt based on what it has learned from previous samples. As the learning continues, computational efficiency increases, often geometrically fast. The particle transport work achieved geometric convergence for a two-region problem as well as for problems with rapidly changing nuclear data. The statistics work provided theoretical proof of geometric convergence for continuous transport problems and promising initial results for airborne migration of particles. The statistical physics work applied adaptive methods to a variety of physical problems including the three-dimensional Ising glass, quantum scattering, and eigenvalue problems.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
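The stochastic-iteration idea can be illustrated with a toy fixed-point problem standing in for the coupled Monte Carlo/thermal-hydraulics feedback. The linear response, noise model, and schedules below are assumptions for illustration only: the number of histories grows each iteration (so tally noise shrinks) while the relaxation factor decreases like 1/n, so the relaxed power converges to the fixed point despite the noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_power(p, n_hist):
    """Stand-in for a Monte Carlo power tally: a toy linear response with
    statistical noise shrinking like 1/sqrt(n_hist). Fixed point is p = 2."""
    return 0.5 * p + 1.0 + rng.normal(0.0, 1.0 / np.sqrt(n_hist), size=p.shape)

p = np.zeros(4)              # initial power guess, one entry per axial node
for n in range(1, 51):
    n_hist = 1000 * n        # grow the number of histories each iteration
    alpha = 1.0 / n          # shrink the relaxation factor each iteration
    p = (1.0 - alpha) * p + alpha * mc_power(p, n_hist)
```

With a fixed relaxation factor the noise would never average out; the decreasing alpha acts like a running average over increasingly accurate tallies, which is the behaviour the paper's optimised schedule formalises.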
Monte Carlo tests of the ELIPGRID-PC algorithm
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
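The kind of Monte Carlo check used in such a validation can be sketched in a deliberately simplified form: a circular hot spot dropped uniformly into a unit square grid cell, detected if any grid node falls inside it. This is an illustration of the validation idea, not a reproduction of ELIPGRID (which handles elliptical hot spots and more general grids); here the simulation can even be compared against a closed-form answer.

```python
import numpy as np

rng = np.random.default_rng(4)

def hit_probability(r, grid=1.0, n=200000):
    """Drop a circular hot spot of radius r uniformly inside one square grid
    cell and count how often a grid node falls inside it. For r below the
    grid spacing, only the four corners of the cell can be hit."""
    cx = rng.uniform(0.0, grid, n)
    cy = rng.uniform(0.0, grid, n)
    hit = np.zeros(n, dtype=bool)
    for gx, gy in [(0.0, 0.0), (grid, 0.0), (0.0, grid), (grid, grid)]:
        hit |= (cx - gx)**2 + (cy - gy)**2 <= r * r
    return hit.mean()

# For r below half the grid spacing the exact answer is pi*r^2/grid^2,
# so the simulated probability can be checked against a known value.
p_hit = hit_probability(0.3)
```

Comparing tabulated code output against this kind of brute-force estimate, case by case, is exactly the validation strategy the document applies to ELIPGRID-PC.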
Efficiency of Monte Carlo sampling in chaotic systems.
Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G
2014-11-01
In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform sampling simulations, and (ii) exhibits a suboptimal polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities for issuing a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.
Sequential Monte Carlo on large binary sampling spaces
Schäfer, Christian
2011-01-01
A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for a good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm are binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binar...
Monte Carlo simulation of laser attenuation characteristics in fog
Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi
2011-06-01
Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of spherical fog drops is calculated. For the transmission attenuation of the laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in MATLAB. The results of the Monte Carlo method in this paper are compared with the results of the single scattering method. The results show that the influence of multiple scattering needs to be considered when visibility is low, where single scattering calculations have larger errors. The phenomenon of multiple scattering is described more accurately when the Monte Carlo method is used to calculate the attenuation ratio of the laser transmitted in fog.
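The multiple-scattering effect described in this abstract can be illustrated with a minimal slab-transport sketch. The code below uses exponential free paths and isotropic scattering in optical-depth units; the paper's model is based on Mie phase functions and a fog-drop size distribution, so the assumptions here are purely illustrative.

```python
import math
import random

def transmit_fraction(tau, albedo, n_photons=20000, seed=1):
    """Fraction of photons crossing a slab of optical thickness tau,
    with isotropic scattering and single-scatter albedo `albedo`."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        depth, mu = 0.0, 1.0                 # optical depth travelled, direction cosine
        while True:
            depth += mu * -math.log(rng.random())  # exponential free path
            if depth >= tau:                 # crossed the slab
                transmitted += 1
                break
            if depth < 0.0:                  # escaped backwards
                break
            if rng.random() > albedo:        # absorbed at the collision
                break
            mu = 2.0 * rng.random() - 1.0    # isotropic re-emission
    return transmitted / n_photons

# With scattering included, transmission exceeds the single-pass
# Beer-Lambert value exp(-tau), which is the point of the comparison
# between single-scattering and Monte Carlo results in the abstract.
```

At low visibility (large optical thickness) the scattered contribution dominates, which is why single-scattering estimates underpredict the received power.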
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING
Kartik Dwivedi
2013-12-01
In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of the different constituent parts. The proposed approach combines local information of individual body parts and other spatial constraints influenced by neighboring parts. The movement of the relative parts of the articulated body is modeled with local information of displacements from the Markov network and the global information from other neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.
Meaningful timescales from Monte Carlo simulations of molecular systems
Costa, Liborio I
2016-01-01
A new Markov Chain Monte Carlo method for simulating the dynamics of molecular systems with atomistic detail is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest that the sequential methods yield better results. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
Introduction to the variational and diffusion Monte Carlo methods
Toulouse, Julien; Umrigar, C J
2015-01-01
We provide a pedagogical introduction to the two main variants of real-space quantum Monte Carlo methods for electronic-structure calculations: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC). Assuming no prior knowledge on the subject, we review in depth the Metropolis-Hastings algorithm used in VMC for sampling the square of an approximate wave function, discussing details important for applications to electronic systems. We also review in detail the more sophisticated DMC algorithm within the fixed-node approximation, introduced to avoid the infamous Fermionic sign problem, which allows one to sample a more accurate approximation to the ground-state wave function. Throughout this review, we discuss the statistical methods used for evaluating expectation values and statistical uncertainties. In particular, we show how to estimate nonlinear functions of expectation values and their statistical uncertainties.
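The VMC half of the review can be sketched in a few lines. The toy example below samples |psi|^2 for a Gaussian trial wave function of the 1D harmonic oscillator with the Metropolis algorithm and averages the local energy; the trial form and parameter values are illustrative choices, not taken from the review.

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=7):
    """Metropolis sampling of psi(x)^2 for the trial wave function
    psi = exp(-alpha x^2) of the 1D harmonic oscillator (hbar=m=omega=1).
    Returns the average of the local energy
        E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + step * (2.0 * rng.random() - 1.0)
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

# alpha = 0.5 is the exact ground state: the local energy is then the
# constant 0.5, so the variance of the estimate vanishes.
```

The zero-variance property at the exact wave function is the standard diagnostic discussed in pedagogical treatments of VMC: for any other alpha the averaged energy lies above 0.5, as the variational principle requires.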
Monte Carlo Simulation in Statistical Physics An Introduction
Binder, Kurt
2010-01-01
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background of several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
Applicability of Quasi-Monte Carlo for lattice systems
Ammon, Andreas; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Müller-Preussker, Michael
2013-01-01
This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
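The error-scaling contrast described above can be demonstrated on a one-dimensional toy integral. The sketch below compares plain Monte Carlo sampling against a base-2 van der Corput sequence, the simplest quasi-Monte Carlo point set; the smooth integrand is an arbitrary illustrative choice, not one of the lattice observables studied in the project.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(1, n + 1):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, digit = divmod(i, base)
            x += digit / denom               # radical-inverse digit reversal
        pts.append(x)
    return pts

def integrate(points, f):
    """Equal-weight quadrature: the N^{-1/2} (MC) versus roughly N^{-1}
    (QMC) error scaling comes entirely from how the points are chosen."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                          # exact integral on [0,1] is 1/3
n = 4096
rng = random.Random(0)
err_mc = abs(integrate([rng.random() for _ in range(n)], f) - 1.0 / 3.0)
err_qmc = abs(integrate(van_der_corput(n), f) - 1.0 / 3.0)
# For a smooth integrand the quasi-random error is typically far smaller.
```

The improvement holds only for sufficiently regular integrands, which is exactly the caveat made in the abstract.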
Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid
2012-01-01
This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. Standard Monte Carlo (SMC) simulation may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy ... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated ...
Penna, Rodrigo [UNI-BH, Belo Horizonte, MG (Brazil). Dept. de Ciencias Biologicas, Ambientais e da Saude (DCBAS/DCET); Silva, Clemente Jose Gusmao Carneiro da [Universidade Estadual de Santa Cruz, UESC, Ilheus, BA (Brazil); Gomes, Paulo Mauricio Costa [Universidade FUMEC, Belo Horizonte, MG (Brazil)
2008-07-01
The viability of building a nuclear wood densimeter based on Compton scattering of low-energy photons was assessed using the Monte Carlo code MCNP-4C. A collimated 60 keV beam of gamma rays emitted by a ²⁴¹Am source reaching wood blocks was simulated, and the radiation backscattered by these blocks was calculated. The scattered photons were correlated with blocks of different wood densities. Results showed a linear relationship between wood density and scattered photons, demonstrating the viability of this wood densimeter. (author)
Implementation of Monte Carlo Simulations for the Gamma Knife System
Xiong, W; Huang, D; Lee, L; Feng, J; Morris, K; Calugaru, E; Burman, C [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States)]; Li, J; Ma, C-M [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States)]
2007-06-15
Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.
Monte Carlo Simulation for LINAC Standoff Interrogation of Nuclear Material
Clarke, Shaun D [ORNL; Flaska, Marek [ORNL; Miller, Thomas Martin [ORNL; Protopopescu, Vladimir A [ORNL; Pozzi, Sara A [ORNL
2007-06-01
The development of new techniques for the interrogation of shielded nuclear materials relies on the use of Monte Carlo codes to accurately simulate the entire system, including the interrogation source, the fissile target and the detection environment. The objective of this modeling effort is to develop analysis tools and methods, based on a relevant scenario, which may be applied to the design of future systems for active interrogation at a standoff. For the specific scenario considered here, the analysis will focus on providing the information needed to determine the type and optimum position of the detectors. This report describes the results of simulations for a detection system employing gamma rays to interrogate fissile and nonfissile targets. The simulations were performed using specialized versions of the codes MCNPX and MCNP-PoliMi. Both prompt neutron and gamma ray and delayed neutron fluxes have been mapped in three dimensions. The time dependence of the prompt neutrons in the system has also been characterized. For this particular scenario, the flux maps generated with the Monte Carlo model indicate that the detectors should be placed approximately 50 cm behind the exit of the accelerator, 40 cm away from the vehicle, and 150 cm above the ground. This position minimizes the number of neutrons coming from the accelerator structure and also receives the maximum flux of prompt neutrons coming from the source. The lead shielding around the accelerator minimizes the gamma-ray background from the accelerator in this area. The number of delayed neutrons emitted from the target is approximately seven orders of magnitude less than the prompt neutrons emitted from the system. Therefore, in order to possibly detect the delayed neutrons, the detectors should be active only after all prompt neutrons have scattered out of the system. Preliminary results have shown this time to be greater than 5 μs after the accelerator pulse. This type of system is illustrative of a
A standard Event Class for Monte Carlo Generators
L. A. Gerren; M. Fischler
2001-01-01
StdHepC++ [1] is a CLHEP [2] Monte Carlo event class library which provides a common interface to Monte Carlo event generators. This work is an extensive redesign of the StdHep Fortran interface to use the full power of object-oriented design. A generated event maps naturally onto the Directed Acyclic Graph concept, and we have used the HepMC classes to implement this. The full implementation allows the user to combine events to simulate beam pileup and to access them transparently as though they were a single event.
Parallelization of Monte Carlo codes MVP/GMVP
Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto
1998-03-01
General-purpose Monte Carlo codes MVP/GMVP are well-vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms. The platforms reported are a distributed-memory vector-parallel computer Fujitsu VPP500, a distributed-memory massively parallel computer Intel Paragon and a distributed-memory scalar-parallel computer Hitachi SR2201. As generally observed, ideal speedup could be obtained for large-scale problems, but parallelization efficiency worsened as the batch size per processing element (PE) became smaller. (author)
Parton distribution functions in Monte Carlo factorisation scheme
Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.
2016-12-01
A next step in development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied with the complete description of parton distribution functions in a dedicated, Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.
Kinetic Monte Carlo method applied to nucleic acid hairpin folding.
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
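A rejection-free kinetic Monte Carlo loop of the kind discussed above can be sketched for a toy two-state (closed/open) hairpin with Arrhenius rates. The two-state reduction and the barrier heights are illustrative assumptions, not the paper's nucleic acid secondary-structure model.

```python
import math
import random

def kmc_two_state(barrier_open, barrier_close, kT=1.0, n_events=10000, seed=3):
    """Rejection-free KMC for a two-state (closed <-> open) system with
    Arrhenius rates k = exp(-barrier/kT).  Returns the fraction of
    simulated physical time spent in the closed state."""
    rng = random.Random(seed)
    rate = {False: math.exp(-barrier_close / kT),   # open  -> closed
            True:  math.exp(-barrier_open / kT)}    # closed -> open
    closed, t_total, t_closed = False, 0.0, 0.0
    for _ in range(n_events):
        dt = -math.log(rng.random()) / rate[closed]  # exponential waiting time
        if closed:
            t_closed += dt
        t_total += dt
        closed = not closed                  # rejection-free: always hop
    return t_closed / t_total

# Detailed balance check: the closed-state occupancy approaches
# k_close / (k_close + k_open).
```

The exponential waiting times are what give kinetic Monte Carlo its direct link to physical time, the property the abstract exploits to reach minute-to-hour folding timescales.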
Quasi-Monte Carlo methods for the Heston model
Jan Baldeaux; Dale Roberts
2012-01-01
In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...
An overview of Monte Carlo treatment planning for radiotherapy.
Spezi, Emiliano; Lewis, Geraint
2008-01-01
The implementation of Monte Carlo dose calculation algorithms in clinical radiotherapy treatment planning systems has been anticipated for many years. Despite a continuous increase of interest in Monte Carlo Treatment Planning (MCTP), its introduction into clinical practice has been delayed by the extent of calculation time required. The development of newer and faster MC codes is behind the commercialisation of the first MC-based treatment planning systems. The intended scope of this article is to provide the reader with a compact 'primer' on different approaches to MCTP with particular attention to the latest developments in the field.
Applications of quantum Monte Carlo methods in condensed systems
Kolorenc, Jindrich
2010-01-01
The quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schroedinger equation for atoms, molecules, solids and a variety of model systems. The algorithms are intrinsically parallel and are able to take full advantage of the present-day high-performance computing systems. This review article concentrates on the fixed-node/fixed-phase diffusion Monte Carlo method with emphasis on its applications to electronic structure of solids and other extended many-particle systems.
Monte Carlo simulation of electron slowing down in indium
Rouabah, Z.; Hannachi, M. [Materials and Electronic Systems Laboratory (LMSE), University of Bordj Bou Arreridj, Bordj Bou Arreridj (Algeria); Champion, C. [Université de Bordeaux 1, CNRS/IN2P3, Centre d’Etudes Nucléaires de Bordeaux-Gradignan, (CENBG), Gradignan (France); Bouarissa, N., E-mail: n_bouarissa@yahoo.fr [Laboratory of Materials Physics and its Applications, University of M' sila, 28000 M' sila (Algeria)
2015-07-15
Highlights: • Electron scattering in indium targets. • Modeling of elastic cross-sections. • Monte Carlo simulation of low-energy electrons. - Abstract: In the current study, we aim at simulating, via a detailed Monte Carlo code, the electron penetration in a semi-infinite indium medium for incident energies ranging from 0.5 to 5 keV. Electron range, backscattering coefficients, mean penetration depths as well as stopping profiles are then reported. The results may be seen as the first predictions for low-energy electron penetration in an indium target.
Monte Carlo methods and models in finance and insurance
Korn, Ralf
2010-01-01
Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat
Utilising Monte Carlo Simulation for the Valuation of Mining Concessions
Rosli Said
2005-12-01
Valuation involves the analyses of various input data to produce an estimated value. Since each input is itself often an estimate, there is an element of uncertainty in the input. This leads to uncertainty in the resultant output value. It is argued that a valuation must also convey information on the uncertainty, so as to be more meaningful and informative to the user. The Monte Carlo simulation technique can generate the information on uncertainty and is therefore potentially useful to valuation. This paper reports on an investigation conducted to apply the Monte Carlo simulation technique to mineral valuation, more specifically, to the valuation of a quarry concession.
PEPSI — a Monte Carlo generator for polarized leptoproduction
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
THE APPLICATION OF MONTE CARLO SIMULATION FOR A DECISION PROBLEM
Çiğdem ALABAŞ
2001-01-01
The ultimate goal of the standard decision tree approach is to calculate the expected value of a selected performance measure. In real-world situations, decision problems become very complex as the uncertainty factors increase. In such cases, decision analysis using the standard decision tree approach is not useful. One way of overcoming this difficulty is Monte Carlo simulation. In this study, a Monte Carlo simulation model is developed for a complex problem and statistical analysis is performed to make the best decision.
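A minimal version of such a simulation is easy to sketch: sample the uncertain inputs of one decision branch, propagate them through the payoff function, and average. The distributions and the payoff formula below are hypothetical illustrations, not the problem studied in the paper.

```python
import random

def simulate_decision(n_trials=20000, seed=11):
    """Monte Carlo estimate of the expected payoff of one decision
    branch with uncertain inputs (hypothetical distributions)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        demand = rng.triangular(100, 300, 180)   # low, high, mode
        margin = rng.gauss(5.0, 1.0)             # uncertain unit margin
        total += demand * margin - 400.0         # fixed cost of the branch
    return total / n_trials

# The sample mean replaces the single expected value of a standard
# decision tree; the sample spread additionally quantifies the risk.
```

Running the same loop for each branch and comparing the resulting distributions, rather than single expected values, is the decision-analysis use of Monte Carlo described above.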
Accuracy Analysis of Assembly Success Rate with Monte Carlo Simulations
仲昕; 杨汝清; 周兵
2003-01-01
Monte Carlo simulation was applied to Assembly Success Rate (ASR) analyses. The ASR of two peg-in-hole robot assemblies was used as an example, taking component parts' sizes, manufacturing tolerances and robot repeatability into account. A statistical arithmetic expression was proposed and deduced in this paper, which offers an alternative method of estimating the accuracy of ASR without having to repeat the simulations. This statistical method also helps to choose a suitable sample size if error reduction is desired. Monte Carlo simulation results demonstrated the feasibility of the method.
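The Monte Carlo side of such an ASR analysis can be sketched as follows: draw toleranced dimensions and a robot positioning error, then count the fraction of trials in which the peg clears the hole. All numbers below are made-up illustrative values, not those of the paper.

```python
import random

def assembly_success_rate(n_trials=20000, seed=5):
    """Monte Carlo Assembly Success Rate for a single peg-in-hole
    insertion; all dimensions are illustrative values in mm."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_trials):
        hole = rng.gauss(10.05, 0.01)        # hole diameter, toleranced
        peg = rng.gauss(10.00, 0.01)         # peg diameter, toleranced
        offset = abs(rng.gauss(0.0, 0.008))  # robot repeatability error
        clearance = (hole - peg) / 2.0
        if clearance > 0.0 and offset < clearance:
            ok += 1
    return ok / n_trials

# The binomial standard error sqrt(p*(1-p)/n) plays the role of the
# paper's statistical expression for the accuracy of the ASR estimate
# and guides the choice of sample size n.
```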
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
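The deterministic core of the Monte Carlo Power Method, power iteration with deflation for the second eigenvalue, can be sketched as below. The stochastic matrix-vector evaluation that makes the method "Monte Carlo" is omitted, and the symmetric 2x2 example is purely illustrative.

```python
def power_method(matvec, v, n_iter=300):
    """Power iteration: returns (dominant eigenvalue, eigenvector).
    The Monte Carlo Power Method replaces matvec by a stochastic
    estimate when the matrix is too large to store."""
    for _ in range(n_iter):
        w = matvec(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    w = matvec(v)
    return sum(a * b for a, b in zip(v, w)), v   # Rayleigh quotient

A = [[2.0, 1.0], [1.0, 2.0]]                     # eigenvalues 3 and 1
mv = lambda x: [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
lam1, u = power_method(mv, [1.0, 0.3])

def mv_deflated(x):
    """Deflation: remove the dominant eigendirection so that power
    iteration converges to the second-largest eigenvalue."""
    proj = sum(a * b for a, b in zip(u, x))
    return [w_i - lam1 * proj * u_i for w_i, u_i in zip(mv(x), u)]

lam2, _ = power_method(mv_deflated, [1.0, -0.4])
```

For a Monte Carlo transition matrix the dominant eigenvalue is 1, and the second eigenvalue extracted this way bounds the convergence rate to the equilibrium distribution, which is the diagnostic use described in the abstract.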
Fission source sampling in coupled Monte Carlo simulations
Olsen, Boerge; Dufek, Jan [KTH Royal Inst. of Technology, Stockholm (Sweden). Div. of Nuclear Research Technology
2017-05-15
We study fission source sampling methods suitable for the iterative way of solving coupled Monte Carlo neutronics problems. Specifically, we address the question as to how the initial Monte Carlo fission source should be optimally sampled at the beginning of each iteration step. We compare numerically two approaches of sampling the initial fission source; the tested techniques are derived from well-known methods for iterating the neutron flux in coupled simulations. The first technique samples the initial fission source using the source from the previous iteration step, while the other technique uses a combination of all previous steps for this purpose. We observe that the previous-step approach performs the best.
Monte Carlo simulation of electrons in dense gases
Tattersall, Wade; Boyle, Greg; Cocks, Daniel; Buckman, Stephen; White, Ron
2014-10-01
We implement a Monte-Carlo simulation modelling the transport of electrons and positrons in dense gases and liquids, by using a dynamic structure factor that allows us to construct structure-modified effective cross sections. These account for the coherent effects caused by interactions with the relatively dense medium. The dynamic structure factor also allows us to model thermal gases in the same manner, without needing to directly sample the velocities of the neutral particles. We present the results of a series of Monte Carlo simulations that verify and apply this new technique, and make comparisons with macroscopic predictions and Boltzmann equation solutions. Financial support of the Australian Research Council.
Green's function monte carlo and the many-fermion problem
Kalos, M. H.
The application of Green's function Monte Carlo to many-body problems is outlined. For boson problems, the method is well developed and practical. An "efficiency principle", importance sampling, can be used to reduce variance. Fermion problems are more difficult because spatially antisymmetric functions must be represented as a difference of two density functions. Naively treated, this leads to a rapid growth of Monte Carlo error. Methods for overcoming the difficulty are discussed. Satisfactory algorithms exist for few-body problems; for many-body problems more work is needed, but it is likely that adequate methods will soon be available.
Cosmological Markov Chain Monte Carlo simulation with Cmbeasy
Müller, C M
2004-01-01
We introduce a Markov Chain Monte Carlo simulation and data analysis package for the cosmological computation package Cmbeasy. We have taken special care in implementing an adaptive step algorithm for the Markov Chain Monte Carlo in order to improve convergence. Data analysis routines are provided which allow one to test models of the Universe against up-to-date measurements of the Cosmic Microwave Background, Supernovae Ia and Large Scale Structure. The observational data are provided with the software for convenient usage. The package is publicly available as part of the Cmbeasy software at www.cmbeasy.org.
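An adaptive step scheme of the general kind mentioned above can be sketched as a random-walk Metropolis sampler that tunes its proposal width toward a target acceptance rate during burn-in. This is a generic illustration, not the algorithm implemented in Cmbeasy.

```python
import math
import random

def adaptive_metropolis(logpdf, x0, n_steps=5000, target_acc=0.44, seed=2):
    """Random-walk Metropolis whose proposal width is tuned toward a
    target acceptance rate during the first half (burn-in), then frozen
    so the sampling phase remains a valid Markov chain."""
    rng = random.Random(seed)
    x, step, samples = x0, 1.0, []
    for i in range(n_steps):
        x_new = x + step * rng.gauss(0.0, 1.0)
        accepted = math.log(rng.random()) < logpdf(x_new) - logpdf(x)
        if accepted:
            x = x_new
        if i < n_steps // 2:     # adaptation phase: nudge the step size
            step *= math.exp(0.1 * ((1.0 if accepted else 0.0) - target_acc))
        else:                    # sampling phase: step is held fixed
            samples.append(x)
    return samples, step

# Sampling a standard normal: the adapted step settles near the classic
# 1-D optimum (proposal width around 2.4 for roughly 44% acceptance).
samples, step = adaptive_metropolis(lambda t: -0.5 * t * t, 0.0)
```

Freezing the step after burn-in is the simplest way to keep the chain's stationary distribution exact; more sophisticated schemes adapt continuously with diminishing adaptation.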
Krumer, Zachar; van Sark, Wilfried G. J. H. M.; de Mello Donegá, Celso; Schropp, Ruud E. I.
2013-09-01
Luminescent solar concentrators (LSCs) are low cost photovoltaic devices, which reduce the amount of semiconductor material needed per unit area of a photovoltaic solar energy converter by means of concentration. The device comprises a thin plastic plate in which luminescent species (fluorophores) have been incorporated. The fluorophores absorb the solar light and radiatively re-emit a part of the energy. Total internal reflection traps most of the emitted light inside the plate and wave-guides it to a narrow side facet with a solar cell attached, where conversion into electricity occurs. The efficiency of such devices is as yet rather low, due to several loss mechanisms, of which self-absorption is of high importance. Combined ray-tracing and Monte Carlo simulation is a widely used tool for efficiency estimations of LSC devices prior to manufacturing. We have applied this method to a model experiment, in which we analysed the impact of self-absorption on the LSC efficiency of fluorophores with different absorption/emission spectral overlap (Stokes shift): several organic dyes and semiconductor quantum dots (single compound and core/shell of type-II). These results are compared with the ones obtained experimentally, demonstrating good agreement. The validated model is used to investigate systematically the influence of spectral separation and luminescence quantum efficiency on the intensity loss in consequence of increased self-absorption. The results are used to adopt a quantity called the self-absorption cross-section and establish it as a reliable criterion for the self-absorption properties of materials, one that can be obtained from fundamental data and has a more universal scope of application than the currently used Stokes shift.
Evaluation of Monte Carlo tools for high energy atmospheric physics
Rutjes, Casper; Sarria, David; Broberg Skeltved, Alexander; Luque, Alejandro; Diniz, Gabriel; Østgaard, Nikolai; Ebert, Ute
2016-11-01
The emerging field of high energy atmospheric physics (HEAP) includes terrestrial gamma-ray flashes, electron-positron beams and gamma-ray glows from thunderstorms. Similar emissions of high energy particles occur in pulsed high voltage discharges. Understanding these phenomena requires appropriate models for the interaction of electrons, positrons and photons of up to 40 MeV energy with atmospheric air. In this paper, we benchmark the performance of the Monte Carlo codes Geant4, EGS5 and FLUKA developed in other fields of physics and of the custom-made codes GRRR and MC-PEPTITA against each other within the parameter regime relevant for high energy atmospheric physics. We focus on basic tests, namely on the evolution of monoenergetic and directed beams of electrons, positrons and photons with kinetic energies between 100 keV and 40 MeV through homogeneous air in the absence of electric and magnetic fields, using a low energy cutoff of 50 keV. We discuss important differences between the results of the different codes and provide plausible explanations. We also test the computational performance of the codes. The Supplement contains all results, providing a first benchmark for present and future custom-made codes that are more flexible in including electrodynamic interactions.
The Monte Carlo code MCSHAPE: Main features and recent developments
Scot, Viviana, E-mail: viviana.scot@unibo.it; Fernandez, Jorge E.
2015-06-01
MCSHAPE is a general-purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons, with the special feature of describing the full evolution of the photon polarization state along its interactions with the target. The prevailing photon–matter interactions in the energy range 1–1000 keV, Compton and Rayleigh scattering and the photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data. - Highlights: • MCSHAPE is an MC code for the simulation of the diffusion of photons in matter. • It includes the proper description of the evolution of the photon polarization state. • The polarization state is described by means of the Stokes vector, I, Q, U, V. • MCSHAPE includes the computation of the detector influence in the measured spectrum. • MCSHAPE features are illustrated with examples and comparison with experiments.
Liaparinos, Panagiotis F; Kandarakis, Ioannis S; Cavouras, Dionisis A; Delis, Harry B; Panayiotakis, George S
2007-05-01
Lu2SiO5:Ce (LSO) scintillator is a relatively new luminescent material which has been successfully applied in positron emission tomography systems. Since it has recently become commercially available in powder form, it could be of value to investigate its performance for use in x-ray projection imaging, as both its physical and scintillating properties indicate a promising material for such applications. In the present study, a custom, validated Monte Carlo simulation code was used in order to examine the performance of LSO under diagnostic radiology (mammography and general radiography) conditions. The Monte Carlo code was based on a model using Mie scattering theory for the description of light attenuation. Imaging characteristics, related to image brightness, spatial resolution and noise of LSO screens, were predicted using only physical parameters of the phosphor. The overall performance of LSO powder phosphor screens was investigated in terms of the: (i) quantum detection efficiency (ii) emitted K-characteristic radiation (iii) luminescence efficiency (iv) modulation transfer function (v) Swank factor and (vi) zero-frequency detective quantum efficiency [DQE(0)]. Results were compared to the traditional rare-earth Gd2O2S:Tb (GOS) phosphor material. The relative luminescence efficiency of LSO phosphor was found inferior to that of GOS. This is due to the lower intrinsic conversion efficiency of LSO (0.08 instead of 0.15 for GOS) and the relatively high light extinction coefficient m_ext of this phosphor (0.239 μm⁻¹ instead of 0.218 μm⁻¹ for GOS). However, the property of increased light extinction, combined with the rather sharp angular distribution of scattered light photons (anisotropy factor g=0.624 for LSO instead of 0.494 for GOS), reduces lateral light spreading and improves spatial resolution. In addition, LSO screens were found to exhibit better x-ray absorption as well as higher signal to noise transfer properties in the energy range from 18 keV up
Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-contained manner. After some recalls of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then devote a chapter to non-asymptotic estimates of Monte Carlo method errors. This chapter recalls the central limit theorem and makes its convergence rate precise. It introduces the Log-Sobolev and concentration inequalities, which have been intensively studied in recent years. The chapter ends with some variance reduction techniques. In order to rigorously justify the simulation of stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each proposed numerical method. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed, and the convergence rates of the algorithms are rigorously proved. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials are more accurate than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Longitudinal development of extensive air showers: hybrid code SENECA and full Monte Carlo
Ortiz, J A; De Souza, V; Ortiz, Jeferson A.; Tanco, Gustavo Medina
2004-01-01
New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modeling. Fast, detailed codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme with systems of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid three-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower devel...
Effective quantum Monte Carlo algorithm for modeling strongly correlated systems
Kashurnikov, V. A.; Krasavin, A. V.
2007-01-01
A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and the linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description.
Time management for Monte-Carlo tree search in Go
Baier, Hendrik; Winands, Mark H M
2012-01-01
The dominant approach for programs playing the game of Go is nowadays Monte-Carlo Tree Search (MCTS). While MCTS allows for fine-grained time control, little has been published on time management for MCTS programs under tournament conditions. This paper investigates the effects that various time-man
Variational Monte Carlo calculations of few-body nuclei
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He are presented. 25 refs., 3 figs.
Monte Carlo studies of nuclei and quantum liquid drops
Pandharipande, V.R.; Pieper, S.C.
1989-01-01
The progress in application of variational and Green's function Monte Carlo methods to nuclei is reviewed. The nature of single-particle orbitals in correlated quantum liquid drops is discussed, and it is suggested that the difference between quasi-particle and mean-field orbitals may be of importance in nuclear structure physics. 27 refs., 7 figs., 2 tabs.
Data libraries as a collaborative tool across Monte Carlo codes
Augelli, Mauro; Han, Mincheol; Hauf, Steffen; Kim, Chan-Hyeung; Kuster, Markus; Pia, Maria Grazia; Quintieri, Lina; Saracco, Paolo; Seo, Hee; Sudhakar, Manju; Eidenspointner, Georg; Zoglauer, Andreas
2010-01-01
The role of data libraries in Monte Carlo simulation is discussed. A number of data libraries currently in preparation are reviewed; their data are critically examined with respect to the state-of-the-art in the respective fields. Extensive tests with respect to experimental data have been performed for the validation of their content.
A separable shadow Hamiltonian hybrid Monte Carlo method.
Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A
2009-11-07
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth-order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
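The baseline HMC scheme that S2HMC builds on — leapfrog integration of Hamiltonian dynamics followed by a Metropolis test on the change in total energy — can be sketched in a few lines. This is plain HMC on a 1-D toy potential, not the S2HMC method or the PROTOMOL implementation; function names and step-size values are illustrative.

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    # Leapfrog (velocity-Verlet) integration of dq/dt = p, dp/dt = -grad U(q).
    p -= 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

def hmc_sample(u, grad_u, q0, n_samples, eps=0.2, steps=10, seed=1):
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                   # fresh Gaussian momentum
        h_old = u(q) + 0.5 * p * p
        q_new, p_new = leapfrog(q, p, grad_u, eps, steps)
        h_new = u(q_new) + 0.5 * p_new * p_new
        # Metropolis test on the energy error of the trajectory.
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples
```

For U(q) = q²/2 this samples a standard normal; the energy error of the leapfrog trajectory, which drives the rejection rate, is exactly the quantity the shadow-Hamiltonian approach reduces.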
Quantum Monte Carlo diagonalization method as a variational calculation
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and greatly extends the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Monte Carlo simulation of quantum statistical lattice models
Raedt, Hans De; Lagendijk, Ad
1985-01-01
In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used t
Distributed and Adaptive Darting Monte Carlo through Regenerations
Ahn, S.; Chen, Y.; Welling, M.
2013-01-01
Darting Monte Carlo (DMC) is an MCMC procedure designed to effectively mix between multiple modes of a probability distribution. We propose an adaptive and distributed version of this method by using regenerations. This allows us to run multiple chains in parallel and adapt the shape of the jump regi
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
SPANDY: a Monte Carlo program for gas target scattering geometry
Jarmie, N.; Jett, J.H.; Niethammer, A.C.
1977-02-01
A Monte Carlo computer program is presented that simulates a two-slit gas target scattering geometry. The program is useful in estimating effects due to finite geometry and multiple scattering in the target foil. Details of the program are presented and experience with a specific example is discussed.
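The finite-geometry effect that a program like SPANDY estimates can be sketched with a minimal 2-D Monte Carlo: sample a scattering point along an extended gas target, sample a ray direction, and test whether the ray clears both collimating slits. This is an illustrative toy, not the SPANDY program; all names, dimensions, and the restriction to small angles are assumptions, and multiple scattering in the foil is omitted.

```python
import math
import random

def slit_acceptance(n_rays=200_000, target_len=2.0,
                    slit1=(5.0, 0.4), slit2=(10.0, 0.4),
                    theta_max=0.2, seed=7):
    """Fraction of rays from a finite (1-D) gas target that clear both
    collimating slits; each slit is given as (distance, half-width)."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_rays):
        y0 = rng.uniform(-target_len / 2, target_len / 2)  # scattering point
        theta = rng.uniform(-theta_max, theta_max)          # ray angle
        # The lateral ray position at each slit plane must fall in the opening.
        if all(abs(y0 + dist * math.tan(theta)) <= half_w
               for dist, half_w in (slit1, slit2)):
            accepted += 1
    return accepted / n_rays
```

Narrowing either slit or lengthening the target changes the accepted fraction, which is exactly the finite-geometry correction such a simulation quantifies.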
Monte Carlo Simulation of Partially Confined Flexible Polymers
Hermsen, G.F.; de Geeter, B.A.; van der Vegt, N.F.A.; Wessling, Matthias
2002-01-01
We have studied conformational properties of flexible polymers partially confined to narrow pores of different size using configurational biased Monte Carlo simulations under athermal conditions. The asphericity of the chain has been studied as a function of its center of mass position along the por
Tackling the premature convergence problem in Monte-Carlo localization
Kootstra, G.; de Boer, B.
2009-01-01
Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential positions when there is ambiguity present in the environment. Since many indoor environments are highly symmetric, this problem of premature convergence is
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory, and increases as this work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-defined region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
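The work-based acceptance criterion described above can be sketched on a 1-D toy system. The sketch below is an illustrative assumption, not the authors' method or implementation: it uses the degenerate instantaneous-switching case (the driven move accumulates energy jumps with no relaxation dynamics in between, so the protocol work telescopes to the total energy change and the criterion reduces to ordinary Metropolis), which still shows the min(1, e^{-βW}) acceptance rule in action.

```python
import math
import random

def switching_work(x, u, delta, n_sub):
    # Drive x toward x + delta in n_sub increments, accumulating the protocol
    # work as the sum of instantaneous energy jumps (no relaxation between
    # increments, i.e., pure switching).
    work, step = 0.0, delta / n_sub
    for _ in range(n_sub):
        work += u(x + step) - u(x)
        x += step
    return x, work

def sample_with_work_criterion(u, n_steps=20_000, beta=1.0,
                               max_jump=2.0, n_sub=5, seed=3):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        delta = rng.uniform(-max_jump, max_jump)   # symmetric proposal
        x_new, w = switching_work(x, u, delta, n_sub)
        # NCMC-style criterion: accept with probability min(1, exp(-beta * W)).
        if rng.random() < math.exp(min(0.0, -beta * w)):
            x = x_new
        samples.append(x)
    return samples
```

Because low-work moves are accepted more often, any scheme (such as Configurational Freezing) that reduces dissipation along the driven trajectory directly raises the acceptance rate while preserving the equilibrium distribution.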