WorldWideScience

Sample records for carlo computer techniques

  1. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  2. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
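
    The splitting and Russian roulette operations reviewed here reduce variance by multiplying tracks that move toward the tally region and probabilistically killing tracks that move away, with weights adjusted to keep the estimate unbiased. A minimal sketch follows (Python, for a hypothetical one-dimensional slab with importance doubling per layer); it illustrates only the weight bookkeeping, not the adjoint-optimized or DXANG schemes discussed in the review.

        import math, random

        # Geometric splitting + Russian roulette sketch: estimate the
        # probability that a particle penetrates a 1-D slab.  Importance
        # is assumed to double with each deeper layer (an assumption).
        random.seed(1)
        SIGMA_T = 1.0          # total cross section (per mean free path)
        ABSORB = 0.5           # absorption probability per collision
        THICK = 5.0            # slab thickness in mean free paths
        N_LAYERS = 5
        LAYER_W = THICK / N_LAYERS

        def transmission(n_hist):
            score = 0.0
            for _ in range(n_hist):
                stack = [(0.0, 1.0, 1.0, 0)]   # (position, mu, weight, layer)
                while stack:
                    x, mu, w, layer = stack.pop()
                    while True:
                        x += mu * (-math.log(random.random()) / SIGMA_T)
                        if x >= THICK:         # transmitted: tally the weight
                            score += w
                            break
                        if x < 0.0:            # leaked backwards
                            break
                        new_layer = int(x / LAYER_W)
                        if new_layer > layer:  # deeper region: 2-for-1 split
                            layer, w = new_layer, w / 2.0
                            stack.append((x, mu, w, layer))
                        elif new_layer < layer:  # shallower: Russian roulette
                            layer = new_layer
                            if random.random() < 0.5:
                                w *= 2.0       # survivor carries double weight
                            else:
                                break          # track killed
                        if random.random() < ABSORB:
                            break              # absorbed at the collision
                        mu = 2.0 * random.random() - 1.0  # isotropic scatter
            return score / n_hist

        print("transmission estimate:", transmission(20000))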

  3. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  4. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  5. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs

  6. Radiotherapy Monte Carlo simulation using cloud computing technology.

    Science.gov (United States)

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate as a proof of principle the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware.
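
    The reported cost behaviour, completion time falling as 1/n with cost minimized when n divides the total simulation hours, is consistent with per-machine billing that rounds up to whole hours. A small illustrative calculation (Python; the total hours and hourly rate are example values, not figures from the paper):

        import math

        TOTAL_CPU_HOURS = 30      # total serial simulation time (example value)
        RATE = 0.10               # hypothetical price per machine-hour

        for n in (1, 4, 5, 6, 7, 15, 30, 64):
            wall = TOTAL_CPU_HOURS / n             # ideal 1/n completion time
            cost = n * math.ceil(wall) * RATE      # each machine bills whole hours
            print(f"n={n:3d}  wall={wall:6.2f} h  cost=${cost:5.2f}")

        # Cost is lowest when n divides 30 (n = 5, 6, 15, 30): no machine
        # sits partially idle inside a billed hour.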

  7. Radiotherapy Monte Carlo simulation using cloud computing technology

    International Nuclear Information System (INIS)

    Poole, C.M.; Cornelius, I.; Trapp, J.V.; Langton, C.M.

    2012-01-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate as a proof of principle the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware.

  8. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  9. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically, and how to make it available widely to the medical physics

  10. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  11. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  12. Investigation of pattern recognition techniques for the identification of splitting surfaces in Monte Carlo particle transport calculations

    International Nuclear Information System (INIS)

    Macdonald, J.L.

    1975-08-01

    Statistical and deterministic pattern recognition systems are designed to classify the state space of a Monte Carlo transport problem into importance regions. The surfaces separating the regions can be used for particle splitting and Russian roulette in state space in order to reduce the variance of the Monte Carlo tally. Computer experiments are performed to evaluate the performance of the technique using one- and two-dimensional Monte Carlo problems. Additional experiments are performed to determine the sensitivity of the technique to various pattern recognition and Monte Carlo problem dependent parameters. A system for applying the technique to a general purpose Monte Carlo code is described. An estimate of the computer time required by the technique is made in order to determine its effectiveness as a variance reduction device. It is recommended that the technique be further investigated in a general purpose Monte Carlo code. (auth)

  13. A computer code package for electron transport Monte Carlo simulation

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    1999-01-01

    A computer code package was developed for solving various electron transport problems by Monte Carlo simulation. It is based on the condensed-history Monte Carlo algorithm. In order to obtain reliable results over wide ranges of electron energies and target atomic numbers, specific electron transport techniques were implemented, such as Moliere multiscatter angular distributions, the Blunck-Leisegang multiscatter energy distribution, and sampling of individual electron-electron and bremsstrahlung interactions. Path-length and lateral displacement correction algorithms and a module for computing collision, radiative, and total restricted stopping powers and ranges of electrons are also included. Comparisons of simulation results with experimental measurements are finally presented. (author)

  14. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  15. Characterization of decommissioned reactor internals: Monte Carlo analysis technique

    International Nuclear Information System (INIS)

    Reid, B.D.; Love, E.F.; Luksic, A.T.

    1993-03-01

    This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty

  16. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)


  17. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for keff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbation but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (keff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing the administrative margin of subcriticality makes the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular keff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the keff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
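
    In the sampling-based approach described, input parameters are drawn from their uncertainty distributions and the criticality code is rerun for each sample; the spread of the resulting keff values estimates the propagated uncertainty. A sketch (Python; the transport solve is replaced by a hypothetical linear response run_keff, purely to make the example self-contained):

        import numpy as np

        rng = np.random.default_rng(42)

        def run_keff(density, enrichment):
            """Stand-in for a Monte Carlo criticality solve (an assumed
            response surface, not a real transport calculation)."""
            return 0.95 + 0.02 * (density - 1.0) + 0.5 * (enrichment - 0.04)

        # Sample the uncertain inputs, rerun the 'code', collect statistics.
        N = 1000
        density = rng.normal(1.00, 0.01, N)       # g/cm^3 (assumed std dev)
        enrich = rng.normal(0.04, 0.001, N)       # fissile fraction (assumed)

        keff = np.array([run_keff(d, e) for d, e in zip(density, enrich)])
        print(f"keff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f}")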

  18. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.

  19. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model of size 600 × 600 × 600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
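
    The kinetic update that multispin coding accelerates is the standard single-spin-flip Metropolis sweep; multispin coding simply packs one spin per bit so that many sites update per machine-word operation. Below is a plain (unpacked) sketch of the underlying dynamics on a small 2-D lattice, a toy stand-in for the paper's 600 × 600 × 600 CDC code (note that the 2-D critical temperature is used here):

        import numpy as np

        rng = np.random.default_rng(0)
        L = 64
        T = 1.4 * 2.269            # T = 1.4 Tc, with the 2-D Ising Tc
        spins = np.ones((L, L), dtype=np.int8)    # M(t = 0) = 1

        def sweep(s):
            """One Monte Carlo step per spin: checkerboard Metropolis."""
            for parity in (0, 1):
                mask = (np.add.outer(np.arange(L), np.arange(L)) % 2) == parity
                nn = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                      np.roll(s, 1, 1) + np.roll(s, -1, 1))
                dE = 2.0 * s * nn                 # energy cost of flipping
                flip = (rng.random((L, L)) < np.exp(-dE / T)) & mask
                s[flip] *= -1

        for t in range(1, 11):
            sweep(spins)
            print(t, spins.mean())   # M(t) decays roughly exponentially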

  20. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc.
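
    The congruential generators behind such pseudo-random sequences can be stated in two lines; here is a sketch with the classic Park-Miller constants and a crude uniformity check (the choice of constants and the test are illustrative, not from the article):

        # Linear congruential generator x -> (a * x) mod m, scaled to (0, 1).
        M = 2**31 - 1          # Mersenne prime modulus
        A = 16807              # Park-Miller multiplier

        def lcg(seed, n):
            x = seed
            for _ in range(n):
                x = (A * x) % M
                yield x / M

        # Crude randomness check: for U(0,1), E[u] = 1/2 and E[u^2] = 1/3.
        u = list(lcg(seed=12345, n=100_000))
        print(sum(u) / len(u))                  # ~0.5
        print(sum(v * v for v in u) / len(u))   # ~0.333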

  1. Monte Carlo simulations on SIMD computer architectures

    International Nuclear Information System (INIS)

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-01-01

    In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures

  2. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  3. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods

  4. Novel hybrid Monte Carlo/deterministic technique for shutdown dose rate analyses of fusion energy systems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2014-01-01

    Highlights: • Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. • Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. • Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two-step calculations of the shutdown dose rate in fusion reactors. -- Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000.

  5. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  6. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible way to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost.
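
    The multilevel construction estimates a fine-level expectation as a telescoping sum of coarse-to-fine corrections, taking many cheap samples at coarse levels and few expensive ones at fine levels. A generic sketch follows (Python, using a toy Euler-discretized stochastic differential equation in place of the authors' pore-scale flow solver):

        import numpy as np

        rng = np.random.default_rng(7)
        r, sigma, S0, T = 0.05, 0.2, 1.0, 1.0   # toy model parameters

        def level_difference(level, n):
            """Coupled fine/coarse Euler paths of dS = r*S dt + sigma*S dW;
            returns samples of P_l - P_{l-1} (P_0 itself at level 0)."""
            nf = 2 ** level
            dt = T / nf
            dW = rng.normal(0.0, np.sqrt(dt), (n, nf))
            Sf = np.full(n, S0)
            for i in range(nf):
                Sf = Sf * (1 + r * dt + sigma * dW[:, i])
            if level == 0:
                return Sf
            Sc = np.full(n, S0)
            for i in range(nf // 2):   # coarse path reuses summed increments
                Sc = Sc * (1 + r * 2 * dt + sigma * (dW[:, 2*i] + dW[:, 2*i + 1]))
            return Sf - Sc

        # MLMC estimate of E[S_T]: sample counts shrink as levels refine.
        levels, samples = range(5), [40000, 20000, 10000, 5000, 2500]
        est = sum(level_difference(l, n).mean() for l, n in zip(levels, samples))
        print("MLMC estimate:", est, " exact:", S0 * np.exp(r * T))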

  7. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a certain processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  8. Application of artificial intelligence techniques to the acceleration of Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Macdonald, J.L.; Cashwell, E.D.

    1978-09-01

    The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameters. The modified MCN code is used to reduce computer cost for several nontrivial example problems

  9. Collimator performance evaluation by Monte-Carlo techniques

    International Nuclear Information System (INIS)

    Milanesi, L.; Bettinardi, V.; Bellotti, E.; Gilardi, M.C.; Todd-Pokropek, A.; Fazio, F.

    1985-01-01

    A computer program using Monte-Carlo techniques has been developed to simulate gamma camera collimator performance. Input data include hole length, septum thickness, hole size and shape, collimator material, source characteristics, source-to-collimator distance and medium, radiation energy, and total number of events. Agreement between Monte-Carlo simulations and experimental measurements was found for commercial hexagonal parallel-hole collimators in terms of septal penetration, transfer function and sensitivity. The method was then used to rationalize collimator design for tomographic brain studies. A radius of rotation of 15 cm was assumed. By keeping the resolution at 15 cm constant (FWHM = 1.3 cm), the SPECT response to a point source was obtained in a scattering medium for three theoretical collimators. Sensitivity was maximized in the first collimator and uniformity of resolution response in the third, while the second represented a trade-off between the two. The high-sensitivity design may be superior in the hot-spot and/or low-activity situation, while for distributed sources of high activity a uniform resolution response should be preferred. The method can be used to personalize collimator design to different clinical needs in SPECT.

  10. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms, or by using the standard parallelization library MPI. The platforms used for benchmark calculations are a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% for a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)

  11. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language

  12. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
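
    The baseline that the multilevel construction accelerates is plain rejection ABC: draw a parameter from the prior, simulate data from the model, and keep the draw if a summary of the simulated data falls within a tolerance of the observed summary. A sketch (Python, on a toy Gaussian-mean problem chosen purely for illustration):

        import numpy as np

        rng = np.random.default_rng(3)
        y_obs = rng.normal(1.5, 1.0, 50)      # 'observed' data (toy example)

        def abc_rejection(n_accept, eps):
            accepted = []
            while len(accepted) < n_accept:
                theta = rng.normal(0.0, 5.0)            # prior draw
                y_sim = rng.normal(theta, 1.0, 50)      # simulate the model
                if abs(y_sim.mean() - y_obs.mean()) < eps:
                    accepted.append(theta)              # within tolerance
            return np.array(accepted)

        post = abc_rejection(500, eps=0.1)
        print("ABC posterior mean:", post.mean())   # near y_obs.mean()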

  13. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It surveys the Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)

  14. Determination of true coincidence correction factors using Monte-Carlo simulation techniques

    Directory of Open Access Journals (Sweden)

    Chionis Dionysios A.

    2014-01-01

    The aim of this work is the numerical calculation of the true coincidence correction factors by means of Monte-Carlo simulation techniques. For this purpose, the Monte Carlo computer code PENELOPE was used and its main program PENMAIN was properly modified in order to include the effect of the true coincidence phenomenon. The modified main program was used for the full energy peak efficiency determination of an XtRa Ge detector with relative efficiency 104%, and the results obtained for the 1173 keV and 1332 keV photons of 60Co were found consistent with respective experimental ones. The true coincidence correction factors were calculated as the ratio of the full energy peak efficiencies determined from the original and the modified main program PENMAIN. The developed technique was applied for 57Co, 88Y, and 134Cs and for two source-to-detector geometries. The results obtained were compared with true coincidence correction factors calculated with the "TrueCoinc" program, and the relative bias was found to be less than 2%, 4%, and 8% for 57Co, 88Y, and 134Cs, respectively.

  15. Computation cluster for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)

    2010-07-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as memory size, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used in finding the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)

  16. Computation cluster for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.

    2010-01-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as memory size, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used in finding the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)

  17. PELE:  Protein Energy Landscape Exploration. A Novel Monte Carlo Based Technique.

    Science.gov (United States)

    Borrelli, Kenneth W; Vitalis, Andreas; Alcantara, Raul; Guallar, Victor

    2005-11-01

    Combining protein structure prediction algorithms and Metropolis Monte Carlo techniques, we provide a novel method to explore all-atom energy landscapes. The core of the technique is based on a steered localized perturbation followed by side-chain sampling as well as minimization cycles. The algorithm and its application to ligand diffusion are presented here. Ligand exit pathways are successfully modeled for different systems containing ligands of various sizes:  carbon monoxide in myoglobin, camphor in cytochrome P450cam, and palmitic acid in the intestinal fatty-acid-binding protein. These initial applications reveal the potential of this new technique in mapping millisecond-time-scale processes. The computational cost associated with the exploration is significantly less than that of conventional MD simulations.

  18. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  19. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  20. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed, and the evaluation of a simple monodimensional integral with a known answer by means of two different Monte Carlo approaches is described. The basic principles of simulating photon histories on a computer and of reducing variance, as well as the current applications in Medical Physics, are commented on. (Author)
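
    The monodimensional exercise described can be reproduced in a few lines with two standard estimators, crude (sample-mean) Monte Carlo and hit-or-miss; the integrand below is an assumption for illustration, since the abstract does not name one.

        import math, random

        random.seed(0)
        N = 100_000
        f = math.exp          # integrand on [0, 1]; exact answer is e - 1

        # Approach 1: crude (sample-mean) Monte Carlo.
        crude = sum(f(random.random()) for _ in range(N)) / N

        # Approach 2: hit-or-miss -- fraction of points under the curve,
        # scaled by the area of the bounding box [0, 1] x [0, e].
        hits = sum(1 for _ in range(N)
                   if random.random() * math.e <= f(random.random()))
        hit_or_miss = hits / N * math.e

        print(crude, hit_or_miss, math.e - 1)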

  1. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.

  2. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grid for E-science), achieving in one week computations that could have taken more than 3 years of processing on a single computer. This work has been achieved thanks to a generic object-oriented toolbox called DistMe, which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random number generator statuses. (authors)
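
    The precaution about parallel pseudo-random streams is that naive seeding (for example, consecutive seeds) can produce overlapping or correlated sequences across replications. One modern way to obtain provably distinct streams is seed spawning; here is a sketch of the principle with NumPy (it illustrates the requirement, not the DistMe/XML mechanism itself):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def replicate(seed_seq, n):
            """One Monte Carlo replication with its own independent stream."""
            rng = np.random.default_rng(seed_seq)
            return rng.random(n).mean()

        if __name__ == "__main__":
            root = np.random.SeedSequence(2024)
            children = root.spawn(8)      # 8 non-overlapping child streams
            with ProcessPoolExecutor() as pool:
                means = list(pool.map(replicate, children, [100_000] * 8))
            print(means)                  # independent replications, each ~0.5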

  3. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  4. Timesaving techniques for decision of electron-molecule collisions in Monte Carlo simulation of electrical discharges

    International Nuclear Information System (INIS)

    Sugawara, Hirotake; Mori, Naoki; Sakai, Yosuke; Suda, Yoshiyuki

    2007-01-01

    Techniques to reduce the computational load for the determination of electron-molecule collisions in Monte Carlo simulations of electrical discharges are presented. By enhancing the detection efficiency of the no-collision case in the decision scheme for collisional events, we can decrease the frequency of access to the time-consuming subroutines that calculate the electron collision cross sections of the gas molecules for obtaining the collision probability. A benchmark test and an estimation performed to evaluate the present techniques have shown a practical timesaving efficiency.

  5. CloudMC: a cloud computing application for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-01-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice. (note)
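
    The Amdahl-law fit reported here can be checked directly: inverting speedup(n) = 1 / (s + (1 - s)/n) at the quoted 37x on 64 instances gives the implied non-parallelizable fraction s. A back-of-envelope consistency check (the resulting fraction is derived here, not a figure from the paper):

        # Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), serial fraction s.
        n, speedup = 64, 37.0
        s = (n / speedup - 1.0) / (n - 1.0)     # invert at the measured point
        print(f"implied serial fraction: {s:.4f}")   # about 0.012

        for m in (1, 8, 16, 32, 64, 128):
            print(m, 1.0 / (s + (1.0 - s) / m))  # predicted speedup curve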

  6. CloudMC: a cloud computing application for Monte Carlo simulation.

    Science.gov (United States)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.

  7. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
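
    Forced transitions and failure biasing can be shown on a toy two-component parallel system (failure rate LAM and repair rate MU per component, mission time T; all parameters assumed): the first transition out of the all-up state is forced to occur before the mission time, and in the degraded state the failure branch is sampled with inflated probability, with weight factors restoring unbiasedness. A sketch, not the paper's formulation:

        import math, random

        random.seed(11)
        LAM, MU, T = 1e-3, 1.0, 10.0   # failure rate, repair rate, mission time
        P_BIAS = 0.5                   # biased probability of the failure branch

        def one_history():
            t, failed, w = 0.0, 0, 1.0
            while True:
                if failed == 0:        # all up: force the next failure into [t, T]
                    rate = 2 * LAM
                    p_in = 1.0 - math.exp(-rate * (T - t))
                    w *= p_in          # weight of the forced transition
                    t += -math.log(1.0 - random.random() * p_in) / rate
                    failed = 1
                else:                  # one down: failure/repair race
                    total = LAM + MU
                    t += random.expovariate(total)
                    if t > T:
                        return 0.0     # mission survived
                    p_fail = LAM / total
                    if random.random() < P_BIAS:       # biased failure branch
                        w *= p_fail / P_BIAS
                        return w       # both down: system failure, tally weight
                    w *= (1.0 - p_fail) / (1.0 - P_BIAS)
                    failed = 0         # repaired; loop and force again

        N = 20_000
        print("unreliability estimate:", sum(one_history() for _ in range(N)) / N)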

  8. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    International Nuclear Information System (INIS)

    Chow, J

    2015-01-01

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon elastic compute cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the treatment plan computing time on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected in simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant.

  9. SU-E-T-222: Computational Optimization of Monte Carlo Simulation On 4D Treatment Planning Using the Cloud Computing Technology

    Energy Technology Data Exchange (ETDEWEB)

    Chow, J [Princess Margaret Cancer Center, Toronto, ON (Canada)

    2015-06-15

    Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon elastic compute cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the treatment plan computing time on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected in simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant.

  10. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
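
    One of the walk variants analyzed in this line of work is the end-point estimator: start a walk at a random page, continue at each step with probability c (the damping factor), and record the page where the walk terminates; the terminal pages are then distributed according to PageRank. A sketch on a toy four-page graph (the graph itself is an assumption for illustration):

        import random
        from collections import Counter

        random.seed(5)
        links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # toy hyperlink graph
        nodes = list(links)
        c = 0.85                       # damping factor

        def end_point_walk():
            page = random.choice(nodes)
            while random.random() < c:           # continue with probability c
                out = links[page]
                # dangling pages jump uniformly, as in the Google matrix
                page = random.choice(out) if out else random.choice(nodes)
            return page

        N = 200_000
        counts = Counter(end_point_walk() for _ in range(N))
        print({v: counts[v] / N for v in sorted(nodes)})  # approximates PageRank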

  11. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  12. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions; satisfactory agreement was obtained between the two approaches. The program system has been applied to the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme and program system are a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)

  13. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of its basic principles are discussed. Finally, future prospects for using Monte Carlo simulations to optimize neutron scattering experiments are discussed. (R.P.)

  14. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensional integrals.
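
    The simplest of these applications, the Monte Carlo evaluation of a definite integral, fits in a few lines. The sketch below (Python, our own illustrative example) also reports the one-sigma statistical uncertainty of the estimate.

        import math
        import random

        def mc_integrate(f, a, b, n=100_000):
            """Plain Monte Carlo estimate of the integral of f over [a, b],
            returned together with its one-sigma statistical uncertainty."""
            samples = [f(a + (b - a) * random.random()) for _ in range(n)]
            mean = sum(samples) / n
            var = sum((s - mean) ** 2 for s in samples) / (n - 1)
            return (b - a) * mean, (b - a) * math.sqrt(var / n)

        estimate, sigma = mc_integrate(math.sin, 0.0, math.pi)
        print(f"integral of sin on [0, pi] ~ {estimate:.4f} +/- {sigma:.4f} (exact: 2)")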

  15. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with a leverage effect or with Student-t distributed errors. We also model the changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where ...

  16. Practical adjoint Monte Carlo technique for fixed-source and eigenfunction neutron transport problems

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1981-01-01

    An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived, as is the optimum treatment of an analog of a non-velocity thermal group. The method is extended to multiplying systems, especially for eigenfunction problems, to enable the estimation of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs

  17. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chao-Jen, E-mail: cjlai3711@gmail.com; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-06-15

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region's visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Monte Carlo codes based on the Electron-Gamma-Shower system code were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone beam x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence of composition on breast dose. The simulated half cone beam x-rays were then collimated to a narrow beam with a 2.5 × 2.5 cm² field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical

  18. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems where approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.

  19. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems where approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.
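
    The telescoping idea behind multilevel methods can be sketched in a few lines. The example below (Python with NumPy, entirely our own illustration) estimates E[X_1] for a geometric Brownian motion with Euler discretizations, coupling each fine level to its coarser neighbour by reusing the same Brownian increments.

        import numpy as np

        rng = np.random.default_rng(1)

        def euler_gbm(n_steps, dW):
            """Euler path of dX = mu*X dt + sig*X dW on [0, 1], X0 = 1."""
            mu, sig, x = 0.05, 0.2, 1.0
            dt = 1.0 / n_steps
            for k in range(n_steps):
                x += mu * x * dt + sig * x * dW[k]
            return x

        def mlmc_estimate(levels=5, samples=10_000):
            """Telescoping estimator: the coarsest mean plus, at each finer
            level, the mean difference between fine and coarse paths driven
            by the SAME increments (the coupling that shrinks the variance)."""
            total = 0.0
            for ell in range(levels):
                nf = 2 ** (ell + 1)  # fine steps at this level
                dW = rng.normal(0.0, np.sqrt(1.0 / nf), size=(samples, nf))
                fine = np.array([euler_gbm(nf, w) for w in dW])
                if ell == 0:
                    total += fine.mean()
                else:
                    dWc = dW[:, 0::2] + dW[:, 1::2]  # coarse increments: pairwise sums
                    coarse = np.array([euler_gbm(nf // 2, w) for w in dWc])
                    total += (fine - coarse).mean()
            return total

        print(f"MLMC estimate {mlmc_estimate():.4f} (exact exp(0.05) ~ 1.0513)")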

  20. Validation of variance reduction techniques in Mediso (SPIRIT DH-V) SPECT system by Monte Carlo

    International Nuclear Information System (INIS)

    Rodriguez Marrero, J. P.; Diaz Garcia, A.; Gomez Facenda, A.

    2015-01-01

    Monte Carlo simulation of nuclear medical imaging systems is a widely used method for reproducing their operation in a real clinical environment. There are several Single Photon Emission Computed Tomography (SPECT) systems in Cuba. For this reason, it is clearly necessary to introduce a reliable and fast simulation platform in order to obtain consistent image data that reproduce the original measurement conditions. To fulfill these requirements, the Monte Carlo platform GAMOS (Geant4 Medicine Oriented Architecture for Applications) was used. Due to the very size and complex configuration of parallel-hole collimators in real clinical SPECT systems, Monte Carlo simulation usually consumes excessive time and computing resources. The main goal of the present work is to optimize the efficiency of the calculation by means of new GAMOS functionality. Two GAMOS variance reduction techniques were developed and validated to speed up the calculations. These procedures focus and limit the transport of gamma quanta inside the collimator. The obtained results were assessed experimentally in the Mediso (SPIRIT DH-V) SPECT system. Main quality control parameters, such as sensitivity and spatial resolution, were determined. Differences of 4.6% in sensitivity and 8.7% in spatial resolution were reported against the manufacturer values. The simulation time was decreased by a factor of up to 650. Using these techniques it was possible to perform several studies in almost 8 hours each. (Author)

  1. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and features a detailed presentation of stochastic samplings, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  2. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  3. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  4. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.

  5. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.

  6. Variation in computer time with geometry prescription in monte carlo code KENO-IV

    International Nuclear Information System (INIS)

    Gopalakrishnan, C.R.

    1988-01-01

    In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but an evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional saving of computing time is highly desirable. The variation in computation time with box description in KENO for two different fast reactor fuel subassemblies, of FBTR and PFBR, is studied. The k-eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways: (i) multi-region and (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the k-eff values calculated by the multi-region and multi-box models agree very well. However, the increase in computation time from the multi-box to the multi-region model is considerable, while the difference in computer storage requirements between the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)

  7. Radiation transport simulation in gamma irradiator systems using E G S 4 Monte Carlo code and dose mapping calculations based on point kernel technique

    International Nuclear Information System (INIS)

    Raisali, G.R.

    1992-01-01

    A series of computer codes based on the point kernel technique and on the Monte Carlo method has been developed. These codes perform radiation transport calculations for irradiator systems having Cartesian, cylindrical and mixed geometries. For the Monte Carlo calculations, the computer code EGS4 has been applied to a radiation-processing type problem, accompanied by a specific user code. The set of codes developed includes GCELLS, DOSMAPM and DOSMAPC2, which simulate the radiation transport in gamma irradiator systems having cylindrical, Cartesian and mixed geometries, respectively. The program DOSMAP3, based on the point kernel technique, has also been developed for dose rate mapping calculations in carrier-type gamma irradiators. Another computer program, CYLDETM, a user code for EGS4, has been developed to simulate dose variations near the interface of heterogeneous media in gamma irradiator systems. In addition, a system of computer codes, PRODMIX, has been developed which calculates the absorbed dose in products with different densities. Validation studies of the calculated results against experimental dosimetry have been performed and good agreement has been obtained

  8. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  9. Development and validation of Monte Carlo dose computations for contrast-enhanced stereotactic synchrotron radiation therapy

    International Nuclear Information System (INIS)

    Vautrin, M.

    2011-01-01

    Contrast-enhanced stereotactic synchrotron radiation therapy (SSRT) is an innovative technique based on localized dose-enhancement effects obtained by reinforced photoelectric absorption in the tumor. Medium-energy monochromatic X-rays (50-100 keV) are used for irradiating tumors previously loaded with a high-Z element. Clinical trials of SSRT are being prepared at the European Synchrotron Radiation Facility (ESRF), where an iodinated contrast agent will be used. In order to compute the energy deposited in the patient (dose), a dedicated treatment planning system (TPS) has been developed for the clinical trials, based on the ISOgray TPS. This work focuses on the SSRT-specific modifications of the TPS, especially to the PENELOPE-based Monte Carlo dose engine. The TPS uses a dedicated Monte Carlo simulation of medium-energy polarized photons to compute the deposited energy in the patient. Simulations are performed considering the synchrotron source, the modeled beamline geometry and finally the patient. Specific materials were also implemented in the voxelized geometry of the patient to account for iodine concentrations in the tumor. The computation process has been optimized and parallelized. Finally, a specific computation of absolute doses and associated irradiation times (instead of monitor units) was implemented. The dedicated TPS was validated with depth dose curves, dose profiles and absolute dose measurements performed at the ESRF in a water tank and in solid water phantoms with or without bone slabs. (author)

  10. ARCHER, a new Monte Carlo software tool for emerging heterogeneous computing environments

    International Nuclear Information System (INIS)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2015-01-01

    Highlights: • A fast Monte Carlo based radiation transport code, ARCHER, was developed. • ARCHER supports different hardware, including CPUs, GPUs and the Intel Xeon Phi coprocessor. • The code is benchmarked against MCNP for medical applications. • A typical CT scan dose simulation takes only 6.8 s on an NVIDIA M2090 GPU. • GPU- and coprocessor-based codes are 5–8 times faster than the CPU-based codes. - Abstract: The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs and Xeon Phi coprocessors. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software package called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented

  11. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvement of computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)
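
    One of the simplest non-analog games is implicit capture (survival biasing) combined with Russian roulette. The sketch below (Python) plays it in a deliberately simplified 1-D slab in which every collision keeps the particle moving forward, so the expected transmitted weight has the closed form exp(-sigma_a * L); all cross sections and the weight cutoff are our own illustrative values.

        import math
        import random

        def implicit_capture_history(sigma_t=1.0, sigma_a=0.3, slab=5.0, w_cut=0.1):
            """One history in a 1-D forward-streaming slab. Absorption is never
            sampled explicitly; the weight is multiplied by the survival
            probability at each collision (implicit capture), with Russian
            roulette below w_cut. Returns the weight transmitted through the slab."""
            x, w = 0.0, 1.0
            while True:
                x += -math.log(random.random()) / sigma_t  # distance to next collision
                if x >= slab:
                    return w
                w *= 1.0 - sigma_a / sigma_t               # survive with reduced weight
                if w < w_cut:                              # roulette low-weight particles
                    if random.random() < 0.5:
                        return 0.0
                    w *= 2.0

        n = 100_000
        est = sum(implicit_capture_history() for _ in range(n)) / n
        print(f"transmission {est:.4f} vs exact {math.exp(-0.3 * 5.0):.4f}")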

  12. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
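
    Generic weight-window logic, with the extreme splitting that produces long histories crudely capped, can be sketched as follows (Python). This is our own illustration of the splitting/roulette mechanics, not the actual modification implemented in MCNP at CCFE.

        import random

        def apply_weight_window(weight, w_low, w_high, max_split=10):
            """Return the (possibly split or rouletted) weights for one particle
            entering a weight-window region. Splitting is capped at max_split
            copies, loosely mimicking a de-optimised window that avoids the
            extreme splitting behind 'long histories'."""
            if weight > w_high:
                n = min(int(weight / w_high) + 1, max_split)
                return [weight / n] * n          # split into n lighter copies
            if weight < w_low:
                target = 0.5 * (w_low + w_high)  # roulette survivors up to target
                if random.random() < weight / target:
                    return [target]
                return []                        # killed
            return [weight]

        print(apply_weight_window(37.0, 0.5, 2.0))  # capped at 10 copies of 3.7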

  13. Microcanonical Monte Carlo approach for computing melting curves by atomistic simulations

    OpenAIRE

    Davis, Sergio; Gutiérrez, Gonzalo

    2017-01-01

    We report microcanonical Monte Carlo simulations of melting and superheating of a generic Lennard-Jones system starting from the crystalline phase. The isochoric curve, the melting temperature $T_m$ and the critical superheating temperature $T_{LS}$ obtained are in close agreement (well within the microcanonical temperature fluctuations) with standard molecular dynamics one-phase and two-phase methods. These results validate the use of microcanonical Monte Carlo to compute melting points, a ...

  14. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions containing strong absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools for simulating particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to remove collision-distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
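
    The flavour of a weighted scheme can be seen in a toy 1-D transmission problem (Python, entirely our own illustration, not the authors' implementation): instead of accepting or rejecting each tentative Woodcock collision, every collision is treated as virtual and the particle weight is multiplied by the virtual-collision probability, with Russian roulette controlling low weights.

        import math
        import random

        SIGMA_MAJ = 2.5                         # majorant cross section (assumed)

        def sigma_t(x):
            """Spatially varying total cross section of a 3 cm heterogeneous slab."""
            return 2.0 if 1.0 <= x < 2.0 else 0.2

        def transmit_weighted(slab=3.0, w_cut=0.05):
            """Weighted delta-tracking through a purely absorbing slab: tentative
            collisions are sampled with the majorant and never rejected; the
            weight picks up the factor 1 - sigma_t/sigma_maj each time."""
            x, w = 0.0, 1.0
            while True:
                x += -math.log(random.random()) / SIGMA_MAJ
                if x >= slab:
                    return w
                w *= 1.0 - sigma_t(x) / SIGMA_MAJ
                if w < w_cut:                   # Russian roulette on low weights
                    if random.random() < 0.5:
                        return 0.0
                    w *= 2.0

        n = 200_000
        est = sum(transmit_weighted() for _ in range(n)) / n
        exact = math.exp(-(0.2 + 2.0 + 0.2))    # optical depth 2.4 of the slab
        print(f"Monte Carlo {est:.4f} vs exact {exact:.4f}")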

  15. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J.W.; Downar, T.J.

    2007-01-01

    A computational code system based on coupling the Monte Carlo code MCNP5 and the computational fluid dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information needed to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD calculation coupled with the deterministic transport code DeCART. Good agreement in the k-eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high-fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport

  16. Computed radiography simulation using the Monte Carlo code MCNPX

    International Nuclear Information System (INIS)

    Correa, S.C.A.; Souza, E.M.; Silva, A.X.; Lopes, R.T.

    2009-01-01

    Simulating x-ray images has been of great interest in recent years, as it makes possible an analysis of how x-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data. (author)

  17. Computed radiography simulation using the Monte Carlo code MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)

    2010-09-15

    Simulating X-ray images has been of great interest in recent years, as it makes possible an analysis of how X-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.

  18. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2014-01-06

    Computational fluid dynamics (CFD) simulations of pore-scale transport processes in porous media have recently gained large popularity. However, the geometrical details of the pore structures can be known only for a very low number of samples, and the detailed flow computations can be carried out only on a limited number of cases. The explicit introduction of randomness in the geometry and in other setup parameters can be crucial for the optimization of pore-scale investigations for random homogenization. Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible way to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost of estimating quantities of interest within a prescribed accuracy constraint. Random samples of pore geometries with a hierarchy of geometrical complexities and grid refinements are synthetically generated and used to propagate the uncertainties in the flow simulations and to compute statistics of macro-scale effective parameters.

  19. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    Science.gov (United States)

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
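
    A random walk of the kind described is only a few lines of code. The sketch below (Python, our own illustration) adds a constant drift and a reflecting boundary, two of the extensions mentioned above.

        import random

        def random_walk(steps=500, drift=0.05, step=1.0, wall=30.0):
            """1-D random walk with constant drift and a reflecting boundary
            at +wall, of the kind used to mimic tracer movement."""
            x = 0.0
            for _ in range(steps):
                x += drift + (step if random.random() < 0.5 else -step)
                if x > wall:
                    x = 2.0 * wall - x       # reflect off the boundary
            return x

        n = 5_000
        finals = [random_walk() for _ in range(n)]
        print(f"mean final position: {sum(finals) / n:.2f} "
              f"(drift alone would give {0.05 * 500:.1f} without the wall)")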

  20. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures, and of the ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz PCs linked by a high speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All 3 PCs are identically configured with the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0, marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system: (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or for any computer intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and

  1. An efficient search method for finding the critical slip surface using the compositional Monte Carlo technique

    International Nuclear Information System (INIS)

    Goshtasbi, K.; Ahmadi, M; Naeimi, Y.

    2008-01-01

    Locating the critical slip surface and the associated minimum factor of safety are two complementary parts of a slope stability analysis. A large number of computer programs exist to solve slope stability problems. Most of these programs, however, have used inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable method to determine the global minimum factor of safety, coupled with a modified version of the Monte Carlo technique. Examples are presented to illustrate the reliability of the proposed method
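
    In its simplest form, such a search randomly samples trial slip circles and keeps the lowest factor of safety found. The sketch below (Python) illustrates the idea; factor_of_safety is a hypothetical stand-in, here a smooth dummy function with a known minimum, where a real code would evaluate a limit-equilibrium method such as Bishop's.

        import random

        def factor_of_safety(xc, yc, radius):
            """Placeholder for a limit-equilibrium FOS evaluation of the
            circular slip surface (xc, yc, radius); a smooth dummy with a
            known minimum at (10, 25, 15) is used purely for illustration."""
            return 1.2 + 0.01 * ((xc - 10) ** 2 + (yc - 25) ** 2 + (radius - 15) ** 2)

        def monte_carlo_search(trials=50_000):
            """Randomly sample trial slip circles; keep the global minimum FOS."""
            best = (float("inf"), None)
            for _ in range(trials):
                xc = random.uniform(0.0, 20.0)
                yc = random.uniform(15.0, 40.0)
                r = random.uniform(5.0, 30.0)
                fos = factor_of_safety(xc, yc, r)
                if fos < best[0]:
                    best = (fos, (xc, yc, r))
            return best

        print(monte_carlo_search())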

  2. Fourier path-integral Monte Carlo methods: Partial averaging

    International Nuclear Information System (INIS)

    Doll, J.D.; Coalson, R.D.; Freeman, D.L.

    1985-01-01

    Monte Carlo Fourier path-integral techniques are explored. It is shown that fluctuation renormalization techniques provide an effective means for treating the effects of high-order Fourier contributions. The resulting formalism is rapidly convergent, is computationally convenient, and has potentially useful variational aspects

  3. Linking computer-aided design (CAD) to Geant4-based Monte Carlo simulations for precise implementation of complex treatment head geometries

    International Nuclear Information System (INIS)

    Constantin, Magdalena; Constantin, Dragos E; Keall, Paul J; Narula, Anisha; Svatos, Michelle; Perl, Joseph

    2010-01-01

    Most of the treatment head components of medical linear accelerators used in radiation therapy have complex geometrical shapes. They are typically designed using computer-aided design (CAD) applications. In Monte Carlo simulations of radiotherapy beam transport through the treatment head components, the relevant beam-generating and beam-modifying devices are inserted in the simulation toolkit using geometrical approximations of these components. Depending on their complexity, such approximations may introduce errors that can be propagated throughout the simulation. This drawback can be minimized by exporting a more precise geometry of the linac components from CAD and importing it into the Monte Carlo simulation environment. We present a technique that links three-dimensional CAD drawings of the treatment head components to Geant4 Monte Carlo simulations of dose deposition. (note)

  4. Proton therapy analysis using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Noshad, Houshyar [Center for Theoretical Physics and Mathematics, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of)]. E-mail: hnoshad@aeoi.org.ir; Givechi, Nasim [Islamic Azad University, Science and Research Branch, Tehran (Iran, Islamic Republic of)

    2005-10-01

    The range and straggling data obtained from the transport of ions in matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue using the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy, as well as the dose deposited in the tissue, were computed. The good agreement between our results and the corresponding experimental values is presented here to show the reliability of our Monte Carlo method.
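
    The core sampling step of such a calculation is simple: draw each proton's stopping depth from the mean range and the straggling width. The sketch below (Python) histograms where the protons stop; the numerical values are assumed round figures for illustration, not TRIM output.

        import random

        MEAN_RANGE_CM = 3.1   # assumed mean range of 60 MeV protons in tissue
        SIGMA_CM = 0.04       # assumed range-straggling width

        def stopping_depths(n=100_000):
            """Sample the depth at which each proton stops."""
            return [random.gauss(MEAN_RANGE_CM, SIGMA_CM) for _ in range(n)]

        # Crude stopping-density histogram near the end of range (Bragg region).
        bins = [0] * 10
        for d in stopping_depths():
            k = int((d - 2.9) / 0.04)
            if 0 <= k < 10:
                bins[k] += 1
        peak = max(bins)
        for k, c in enumerate(bins):
            print(f"{2.9 + 0.04 * k:.2f} cm | {'#' * (50 * c // peak)}")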

  5. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day

  6. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities over a large energy range for nuclear reaction calculations

  7. Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Diez, C.J.; Cabellos, O.; Martinez, J.S.

    2011-01-01

    Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, and their uncertainties, should be known to handle spent fuel in present fuel cycles (e.g. the high-burnup fuel programme) and in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (the Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. The propagation of decay data, fission yield and cross-section uncertainties was also performed, but isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are here taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. The results are then compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yield) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties are most relevant is performed
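
    The stochastic technique itself is straightforward: sample the uncertain nuclear data from their assumed distributions, recompute the response for each sample, and read off the spread. The sketch below (Python) propagates a single decay-constant uncertainty to the decay heat of one nuclide; every number is an illustrative assumption.

        import math
        import random

        def decay_heat(t, lam, n0=1.0e20, q_mev=1.0):
            """Decay heat (MeV/s) of a single nuclide at cooling time t (s)."""
            return q_mev * lam * n0 * math.exp(-lam * t)

        def propagate(lam_mean=1.0e-6, lam_rel_unc=0.05, t=1.0e6, samples=10_000):
            """Monte-Carlo propagation of a decay-constant uncertainty to decay
            heat: sample the input, recompute the response, and report the mean
            and relative standard deviation."""
            vals = [decay_heat(t, random.gauss(lam_mean, lam_rel_unc * lam_mean))
                    for _ in range(samples)]
            mean = sum(vals) / samples
            std = math.sqrt(sum((v - mean) ** 2 for v in vals) / (samples - 1))
            return mean, std / mean

        mean, rel = propagate()
        print(f"decay heat {mean:.3e} MeV/s, relative uncertainty {100 * rel:.1f}%")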

  8. A Monte Carlo Sampling Technique for Multi-phonon Processes

    Energy Technology Data Exchange (ETDEWEB)

    Hoegberg, Thure

    1961-12-15

    A sampling technique for selecting scattering angle and energy gain in Monte Carlo calculations of neutron thermalization is described. It is supposed that the scattering is separated into processes involving different numbers of phonons. The number of phonons involved is first determined. Scattering angle and energy gain are then chosen by using special properties of the multi-phonon term.

  9. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec

  10. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been achieved by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the calculational effort for multiplying systems. Other examples and results are discussed
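
    A scalar Kalman filter of the kind described fits in a dozen lines. The sketch below (Python) treats the true k-eff as a nearly constant random-walk state observed through noisy cycle estimates; the process and observation variances and the synthetic data are our own illustrative assumptions.

        import random

        def kalman_keff(observations, q=1e-6, r=1e-4):
            """Filter a sequence of noisy cycle-wise k-eff estimates.
            q: assumed process variance; r: assumed observation variance."""
            x, p = observations[0], 1.0       # initial state and its variance
            for z in observations[1:]:
                p += q                        # predict
                k = p / (p + r)               # Kalman gain
                x += k * (z - x)              # update with the new cycle value
                p *= 1.0 - k
            return x

        cycles = [random.gauss(0.998, 0.01) for _ in range(95)]
        print(f"filtered k-eff after {len(cycles)} cycles: {kalman_keff(cycles):.4f}")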

  11. Two improved Monte Carlo photon cross section techniques

    International Nuclear Information System (INIS)

    Scudiere, M.B.

    1978-01-01

    Truncated series of Legendre coefficients and polynomials are often used in multigroup transport computer codes to describe group-to-group angular density transfer functions. Imposition of a group structure on the energy continuum may create discontinuities in the first derivative of these functions. Because of the nature of these discontinuities, efficient and accurate full-range polynomial expansions are not practically obtainable. Two separate and distinct methods for Monte Carlo photon transport are presented which eliminate essentially all major disadvantages of truncated expansions. In the first method, partial-range expansions are applied between the discontinuities. Here accurate low-order representations are obtained, which yield modest savings in computer charges. The second method employs unique properties of the functions to replace them with a few smooth, well-behaved representations. This method brings about a considerable saving in computer memory requirements. In addition, the accuracy of the first method is maintained, while execution times are reduced even further

  12. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalue. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
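
    A crude single-pole version of the idea can be demonstrated on synthetic data: fit an AR(1) model to a mean-removed cycle series and read the dominance ratio off the fitted pole. The sketch below (Python with NumPy) is our own simplification of the full ARMA(p, p-1) fitting; it recovers a known pole of 0.9.

        import numpy as np

        def ar1_coefficient(series):
            """Least-squares AR(1) fit to a mean-removed cycle series; in the
            ARMA picture of source-error propagation the dominant AR pole
            approximates the dominance ratio."""
            y = np.asarray(series, dtype=float)
            y = y - y.mean()
            return float(np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1]))

        # Synthetic binned-source signal with a known pole of 0.9 plus noise.
        rng = np.random.default_rng(0)
        x = [0.0]
        for _ in range(5000):
            x.append(0.9 * x[-1] + rng.normal())
        print(f"estimated dominance ratio ~ {ar1_coefficient(x):.3f}")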

  13. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods: uniform and non-uniform random number generation and variance reduction techniques. It also covers several aspects of quasi-Monte Carlo methods.
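
    The contrast between the two sampling styles is easy to demonstrate. The sketch below (Python, our own illustration) integrates a smooth function over the unit square with pseudorandom points and with a two-dimensional Halton low-discrepancy sequence; for smooth integrands the quasi-Monte Carlo estimate typically converges faster.

        import math
        import random

        def halton(i, base):
            """i-th element of the van der Corput sequence in the given base."""
            f, r = 1.0, 0.0
            while i > 0:
                f /= base
                r += f * (i % base)
                i //= base
            return r

        def average(points):
            """Average of f(x, y) = exp(-x*y) over the given sample points."""
            return sum(math.exp(-x * y) for x, y in points) / len(points)

        n = 4096
        mc_pts = [(random.random(), random.random()) for _ in range(n)]
        qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
        print(f"plain MC : {average(mc_pts):.5f}")
        print(f"quasi-MC : {average(qmc_pts):.5f}")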

  14. Present status and future prospects of neutronics Monte Carlo

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1990-01-01

    It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitation in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often, deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method be polished to perfection. Problems relating to eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)

  15. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  16. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  17. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics use computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons, such as images produced by computed tomography (CT). To minimize the computation time without degrading image quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated, and the option to simulate scattered radiation was evaluated. Simulations using the adaptive method ran at least 500 times faster than those using the conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors also minimized artifacts in the reconstructed image.

  18. The electron transport problem sampling by Monte Carlo individual collision technique

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Belousov, V.I.

    2005-01-01

    Electron transport is a problem of great interest in many fields of modern science, and Monte Carlo sampling is the natural way to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step that is sampled randomly. The other kind of Monte Carlo sampling is the individual collision technique, which offers the researcher incontestable advantages over condensed history. For example, one does not need to supply the parameters that condensed-history codes require, such as an upper limit for the electron energy, the resolution, or the number of sub-steps. The condensed-history technique may also lose some very important electron tracks, because its description of particle movement is limited by step parameters and because of weaknesses in its algorithms, for example the energy-indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction are considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogous methods; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is compared with other commonly used techniques. (authors)

  19. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  20. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual
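
    As an illustration of the sparse-scoring-plus-interpolation strategy mentioned in the abstract, the following minimal Python sketch scores a slowly varying scatter signal at a subset of detector pixels and interpolates to the full row. The detector size, pixel spacing and toy scatter model are invented for illustration, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(42)

n_pixels = 512                          # full detector row (illustrative)
sparse_ix = np.arange(0, n_pixels, 32)  # score scatter at every 32nd pixel only

def mc_scatter_estimate(pixel, n_histories=20_000):
    """Toy Monte Carlo tally: a smooth 'true' scatter profile plus
    statistical noise that shrinks with the number of histories."""
    true = np.exp(-((pixel - n_pixels / 2) ** 2) / (2.0 * 150.0 ** 2))
    return true + rng.normal(0.0, true / np.sqrt(n_histories))

# Spend the history budget on a few pixels, then interpolate to all of them;
# this is justified because scatter varies slowly across the detector.
sparse_vals = np.array([mc_scatter_estimate(p) for p in sparse_ix])
full_scatter = np.interp(np.arange(n_pixels), sparse_ix, sparse_vals)

print(full_scatter[::128])   # a few interpolated values along the row
```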

  1. Electron Irradiation of Conjunctival Lymphoma-Monte Carlo Simulation of the Minute Dose Distribution and Technique Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany); Zaragoza, Francisco J.; Sempau, Josep [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Barcelona (Spain); Wittig, Andrea [Department of Radiation Oncology, University Hospital Giessen and Marburg, Philipps-University Marburg, Marburg (Germany); Sauerwein, Wolfgang [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany)

    2012-07-15

    Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. Radiotherapy planning and optimization require accurate calculation of the dose in these anatomical structures, which are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: Monte Carlo simulations of a Varian Clinac 2100 C/D and a human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose-volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulation results allow the most appropriate treatment setup to be chosen, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and a central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.

  2. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
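
    To make one of the included techniques concrete, the sketch below shows the exponential transform on a toy single-flight slab-transmission problem (invented parameters; this is the classical transform, not the authors' General Transform itself). Flight distances are sampled from a stretched exponential with a reduced cross section, and the likelihood ratio is carried as a particle weight so the estimate stays unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 1.0        # total cross section (1/cm), illustrative
thickness = 10.0   # slab thickness (cm): deep penetration, hard for analog MC
n = 100_000

# Analog sampling: flight distance s ~ Exp(sigma); transmitted if s > thickness.
s = rng.exponential(1.0 / sigma, n)
analog = (s > thickness).astype(float)

# Exponential transform: sample from a stretched exponential (reduced cross
# section sigma_star) and carry the likelihood ratio f(s)/g(s) as a weight.
p = 0.9
sigma_star = sigma * (1.0 - p)
s_b = rng.exponential(1.0 / sigma_star, n)
weights = (sigma / sigma_star) * np.exp(-(sigma - sigma_star) * s_b)
biased = weights * (s_b > thickness)

print(f"exact     : {np.exp(-sigma * thickness):.3e}")
print(f"analog    : {analog.mean():.3e}")
print(f"transform : {biased.mean():.3e} +/- {biased.std(ddof=1) / np.sqrt(n):.1e}")
```

    With these numbers the transmission probability is about 4.5e-5, so the analog run sees only a handful of scoring histories, while the transformed run scores frequently with small, smoothly varying weights.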

  3. Use of Monte Carlo computation in benchmarking radiotherapy treatment planning system algorithms

    International Nuclear Information System (INIS)

    Lewis, R.D.; Ryde, S.J.S.; Seaby, A.W.; Hancock, D.A.; Evans, C.J.

    2000-01-01

    Radiotherapy treatments are becoming more complex, often requiring the dose to be calculated in three dimensions and sometimes involving the application of non-coplanar beams. The ability of treatment planning systems to accurately calculate dose under a range of these and other irradiation conditions requires evaluation. Practical assessment of such arrangements can be problematical, especially when a heterogeneous medium is used. This work describes the use of Monte Carlo computation as a benchmarking tool to assess the dose distribution of external photon beam plans obtained in a simple heterogeneous phantom by several commercially available 3D and 2D treatment planning system algorithms. For comparison, practical measurements were undertaken using film dosimetry. The dose distributions were calculated for a variety of irradiation conditions designed to show the effects of surface obliquity, inhomogeneities and missing tissue above tangential beams. The results show maximum dose differences of 47% between some planning algorithms and film at a point 1 mm below a tangentially irradiated surface. Overall, the dose distribution obtained from film was most faithfully reproduced by the Monte Carlo N-Particle results illustrating the potential of Monte Carlo computation in evaluating treatment planning system algorithms. (author)

  4. Monte Carlo technique for local perturbations in multiplying systems

    International Nuclear Information System (INIS)

    Bernnat, W.

    1974-01-01

    The use of the Monte Carlo method for the calculation of reactivity perturbations in multiplying systems due to changes in geometry or composition requires a correlated sampling technique to make such calculations economical or in the case of very small perturbations even feasible. The technique discussed here is suitable for local perturbations. Very small perturbation regions will be treated by an adjoint mode. The perturbation of the source distribution due to the changed system and its reaction on the reactivity worth or other values of interest is taken into account by a fission matrix method. The formulation of the method and its application are discussed. 10 references. (U.S.)
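
    The following minimal sketch illustrates why correlated sampling makes small perturbations computable; it is a generic single-flight example using common random numbers, with invented parameters, not the fission-matrix scheme of the paper. Because both systems see the same random-number stream, the per-history difference has far lower variance than the difference of two independent runs:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
sigma, d_sigma = 1.0, 0.01   # base cross section and a 1% perturbation
thickness = 2.0

xi = 1.0 - rng.random(n)     # one common random-number stream, in (0, 1]
s_base = -np.log(xi) / sigma              # flight distances, unperturbed
s_pert = -np.log(xi) / (sigma + d_sigma)  # same xi -> correlated histories

# Per-history difference in the transmission tally: nonzero only for the few
# histories whose flight distance straddles the slab boundary.
diff = (s_pert > thickness).astype(float) - (s_base > thickness).astype(float)
print(f"correlated : {diff.mean():+.2e} +/- {diff.std(ddof=1) / np.sqrt(n):.1e}")
print(f"exact      : {np.exp(-(sigma + d_sigma) * thickness) - np.exp(-sigma * thickness):+.2e}")
```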

  5. The electron transport problem sampling by Monte Carlo individual collision technique

    Energy Technology Data Exchange (ETDEWEB)

    Androsenko, P.A.; Belousov, V.I. [Obninsk State Technical Univ. of Nuclear Power Engineering, Kaluga region (Russian Federation)

    2005-07-01

    Electron transport is a problem of great interest in many fields of modern science, and Monte Carlo sampling is the natural way to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step that is sampled randomly. The other kind of Monte Carlo sampling is the individual collision technique, which offers the researcher incontestable advantages over condensed history. For example, one does not need to supply the parameters that condensed-history codes require, such as an upper limit for the electron energy, the resolution, or the number of sub-steps. The condensed-history technique may also lose some very important electron tracks, because its description of particle movement is limited by step parameters and because of weaknesses in its algorithms, for example the energy-indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction are considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogous methods; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is compared with other commonly used techniques. (authors)

  6. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
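
    For context, the multilevel estimator referred to here rests on the standard telescoping identity (generic form, not specific to this talk)

    $$ \mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}] , $$

    where $Q_\ell$ denotes the observable computed on mesh level $\ell$; each term is estimated by an independent Monte Carlo average, with most samples placed on the cheap coarse levels, since the corrections $Q_\ell - Q_{\ell-1}$ have small variance on fine levels.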

  7. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    Science.gov (United States)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland

  8. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
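
    For background, all three dosimetry methods compared here implement the standard MIRD schema (generic notation assumed)

    $$ D(r_T) \;=\; \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S) , $$

    where $\tilde{A}(r_S)$ is the time-integrated activity in source region $r_S$ and $S(r_T \leftarrow r_S)$ is the absorbed dose to target $r_T$ per nuclear transformation in $r_S$. The methods differ in whether $S$ is tabulated for reference organ-level anatomy (OLINDA/EXM), precomputed per voxel, or computed directly by Monte Carlo in the patient geometry.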

  9. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo 'Eigenvalue of the World' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
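
    The following toy sketch shows the mechanism by which stratified source-sampling suppresses sampling noise. It is a generic stratification of a single source coordinate with an invented response function, not the eigenvalue algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def score(x):
    """Toy detector response for a source particle born at position x in [0, 1)."""
    return np.exp(3.0 * x)            # strongly position-dependent response

n, n_strata = 10_000, 100

# Conventional source sampling: n independent birth sites.
simple = score(rng.random(n)).mean()

# Stratified source sampling: force an equal number of births in each of
# n_strata equal-width bins, removing the bin-occupancy noise entirely.
offsets = rng.random((n_strata, n // n_strata))
sites = (np.arange(n_strata)[:, None] + offsets) / n_strata
stratified = score(sites).mean()

print(f"exact      : {(np.exp(3.0) - 1.0) / 3.0:.4f}")
print(f"simple     : {simple:.4f}")
print(f"stratified : {stratified:.4f}")
```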

  10. Reliability study of a prestressed concrete beam by Monte Carlo techniques

    International Nuclear Information System (INIS)

    Floris, C.; Migliacci, A.

    1987-01-01

    The safety of a prestressed beam is studied at the third probabilistic level, calculating the probability of failure (P_f) under known loads. Since the beam is simply supported and subject only to loads perpendicular to its axis, only bending and shear loads are present. Since the ratio between the span and the clear height is over 20, giving a very considerable shear span, it can be assumed that failure occurs entirely due to the bending moment, with shear having no effect. In order to calculate P_f, the probability density functions (p.d.f.s) have to be known for both the stress moment and the resisting moment. Attention here is focused on the construction of the latter. It is shown that it is practically impossible to find the required function analytically. On the other hand, numerical simulation with the help of a computer is particularly convenient. The so-called Monte Carlo techniques were chosen: they are based on the extraction of random numbers and are thus very suitable for simulating random events and quantities. (orig./HP)
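
    A minimal sketch of the third-level computation follows, assuming invented placeholder distributions for the resisting moment R and the stress moment S; the paper constructs the resisting-moment p.d.f. from the actual section and material data rather than from a closed-form distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Hypothetical p.d.f.s (placeholders only), moments in kNm.
R = rng.normal(loc=900.0, scale=60.0, size=n)              # resisting moment
S = rng.lognormal(mean=np.log(600.0), sigma=0.10, size=n)  # stress moment

p_f = np.mean(R < S)                   # failure when resistance < load effect
se = np.sqrt(p_f * (1.0 - p_f) / n)   # binomial standard error of the estimate
print(f"P_f = {p_f:.2e} +/- {se:.1e}")
```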

  11. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kong Chaocheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China)]. E-mail: kongchaocheng@tsinghua.org.cn; Li Quanfeng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Chen Huaibi [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Du Taibin [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Cheng Cheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Tang Chuanxiang [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Zhu Li [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Zhang Hui [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Pei Zhigang [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Ming Shenjin [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China)

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by empirical formulas. The effect on the skyshine dose caused by different structures of the accelerator head is also discussed in this paper.

  12. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
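
    A minimal design-based randomization test in the spirit described above, reduced to a difference-in-means statistic under complete randomization with fixed group sizes (toy data; the paper's regression and residual machinery is omitted):

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy two-arm trial data (hypothetical outcomes and assignment).
y = np.array([4.1, 5.3, 3.8, 6.0, 5.5, 4.9, 6.2, 3.5, 5.8, 4.4])
t = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0])   # observed treatment assignment

def statistic(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

obs = statistic(y, t)

# Monte Carlo over re-randomizations: regenerate assignments from the same
# randomization procedure (here: complete randomization, fixed group sizes).
n_mc = 10_000
ref = np.array([statistic(y, rng.permutation(t)) for _ in range(n_mc)])

p_value = (1 + np.sum(np.abs(ref) >= abs(obs))) / (1 + n_mc)
print(f"observed diff = {obs:.3f}, randomization p = {p_value:.3f}")
```

    The test is design based in the sense that the reference distribution is generated by the randomization procedure actually used; for a blocked or biased-coin design, the resampling line would regenerate sequences from that procedure instead of permuting.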

  13. Back propagation and Monte Carlo algorithms for neural network computations

    International Nuclear Information System (INIS)

    Junczys, R.; Wit, R.

    1996-01-01

    Results of teaching procedures for a neural network are presented for two different algorithms. The first is based on the well-known back-propagation technique; the second is an adapted version of the Monte Carlo global-minimum-seeking method. The combination of these two approaches, different in nature, provides promising results. (author)

  14. Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach

    International Nuclear Information System (INIS)

    Hedrick, C.E.

    1976-01-01

    The concept and computational aspects of a Monte-Carlo statistical approach to relating the structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized.

  15. CTmod—A toolkit for Monte Carlo simulation of projections including scatter in computed tomography

    Czech Academy of Sciences Publication Activity Database

    Malušek, Alexandr; Sandborg, M.; Alm Carlsson, G.

    2008-01-01

    Roč. 90, č. 2 (2008), s. 167-178 ISSN 0169-2607 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * computed tomography * cone beam * scatter Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.220, year: 2008 http://dx.doi.org/10.1016/j.cmpb.2007.12.005

  16. 3D dose imaging for arc therapy techniques by means of Fricke gel dosimetry and dedicated Monte Carlo simulations

    International Nuclear Information System (INIS)

    Valente, Mauro; Castellano, Gustavo; Sosa, Carlos

    2008-01-01

    Full text: Radiotherapy is one of the most effective techniques for tumour treatment and control. In recent years, significant developments have been made in both irradiation technology and techniques. However, accurate 3D dosimetric techniques are still not commercially available. Due to their intrinsic characteristics, traditional dosimetric techniques like ionisation chambers, film dosimetry or TLD do not offer proper continuous 3D dose mapping. The possibility of using ferrous sulphate (Fricke) dosimeters suitably fixed to a gel matrix, along with dedicated optical analysis methods based on light-transmission measurements, for 3D absorbed dose imaging in tissue-equivalent materials has therefore attracted great interest in radiotherapy. Since Gore et al. showed in 1984 that the oxidation of ferrous ions to ferric ions still occurs when the ferrous sulphate solution is fixed to a gelatine matrix, important efforts have been dedicated to developing and improving real continuous 3D dosimetric systems based on the Fricke solution. The purpose of this work is to investigate the capability and suitability of Fricke gel dosimetry for arc therapy irradiations. The dosimetric system is mainly composed of Fricke gel dosimeters, suitably shaped in the form of thin layers and optically analysed by means of visible-light transmission measurements, acquiring sample images just before and after irradiation with a commercial flatbed-like scanner. Image acquisition, conversion to matrices and further analysis are accomplished by dedicated software, which includes suitable algorithms for calculating optical density differences and converting them to absorbed dose. Dedicated subroutines allow 3D dose imaging reconstruction from single-layer information by means of computed tomography-like algorithms. In addition, dedicated Monte Carlo (PENELOPE) subroutines have been adapted in order to achieve accurate simulation of arc therapy irradiation techniques.

  17. Dosimetric study of prostate brachytherapy using techniques of Monte-Carlo simulation, experimental measurements and comparison with a treatment plan

    International Nuclear Information System (INIS)

    Teles, Pedro; Barros, Silvia; Vaz, Pedro; Goncalves, Isabel; Facure, Alessandro; Rosa, Luiz da; Santos, Maira; Pereira Junior, Pedro Paulo; Zankl, Maria

    2013-01-01

    Prostate brachytherapy is a radiotherapy technique which consists in inserting a number of radioactive seeds (usually containing the radionuclides 125I, 241Am or 103Pd) in or around prostate tumor tissue. The main objective of this technique is to maximize the radiation dose to the tumor and minimize it in healthy tissues and organs, in order to reduce morbidity. The absorbed dose distribution in the prostate obtained with this technique is usually non-homogeneous and time dependent. Various parameters, such as the type of seed, the attenuation interactions between seeds, their geometrical arrangement within the prostate, the actual geometry of the seeds, and the swelling of the prostate gland after implantation, greatly influence the absorbed dose in the prostate and surrounding areas. Quantification of these parameters is therefore extremely important for dose optimization and for the improvement of conventional treatment plans, which in many cases do not fully take them into account. Monte Carlo techniques allow these parameters to be studied quickly and effectively. In this work, we use the MCNPX program and a generic voxel phantom (GOLEM), in which different geometric arrangements of seeds containing 125I (Amersham Health model 6711) were simulated in prostates of different sizes, in order to quantify some of these parameters. The computational model was validated using a cubic RW3-type prostate phantom, consisting of tissue-equivalent material, and thermoluminescent dosimeters. Finally, to have a term of comparison with a real treatment plan, we simulated a treatment plan used in a hospital in Rio de Janeiro, with exactly the same parameters, in our computational model. The results obtained in our study seem to indicate that the parameters described above may be a source of uncertainty in the correct evaluation of the dose in actual treatment plans. The use of Monte Carlo techniques can serve as a complementary

  18. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  19. A practical look at Monte Carlo variance reduction methods in radiation shielding

    Energy Technology Data Exchange (ETDEWEB)

    Olsher, Richard H. [Los Alamos National Laboratory, Los Alamos (United States)

    2006-04-15

    With the advent of inexpensive computing power over the past two decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. At Los Alamos, the Monte Carlo codes MCNP5 and MCNPX are used routinely on personal computer platforms for radiation shielding analysis and dosimetry calculations. These codes feature a rich palette of Variance Reduction (VR) techniques. The motivation of VR is to exchange user efficiency for computational efficiency. It has been said that a few hours of user time often reduce computational time by several orders of magnitude. Unfortunately, user time can stretch into many hours, as most VR techniques require significant user experience and intervention for proper optimization. It is the purpose of this paper to outline VR strategies, tested in practice, optimized for several common radiation shielding tasks, with the hope of reducing user setup time for similar problems. A strategy is defined in this context to mean a collection of MCNP radiation transport physics options and VR techniques that work synergistically to optimize a particular shielding task. Examples are offered in the areas of source definition, skyshine, streaming, and transmission.

  20. A practical look at Monte Carlo variance reduction methods in radiation shielding

    International Nuclear Information System (INIS)

    Olsher, Richard H.

    2006-01-01

    With the advent of inexpensive computing power over the past two decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. At Los Alamos, the Monte Carlo codes MCNP5 and MCNPX are used routinely on personal computer platforms for radiation shielding analysis and dosimetry calculations. These codes feature a rich palette of Variance Reduction (VR) techniques. The motivation of VR is to exchange user efficiency for computational efficiency. It has been said that a few hours of user time often reduce computational time by several orders of magnitude. Unfortunately, user time can stretch into many hours, as most VR techniques require significant user experience and intervention for proper optimization. It is the purpose of this paper to outline VR strategies, tested in practice, optimized for several common radiation shielding tasks, with the hope of reducing user setup time for similar problems. A strategy is defined in this context to mean a collection of MCNP radiation transport physics options and VR techniques that work synergistically to optimize a particular shielding task. Examples are offered in the areas of source definition, skyshine, streaming, and transmission.
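
    As a concrete illustration of the splitting/roulette machinery behind such strategies, the sketch below implements a generic weight-window cell (illustrative bounds, not MCNP's implementation): particles above the window are split, particles below it play Russian roulette, and the expected total weight is preserved:

```python
import numpy as np

rng = np.random.default_rng(5)

w_low, w_high = 0.5, 2.0   # weight-window bounds for the current cell
w_survive = 1.0            # weight given to roulette survivors

def apply_weight_window(weight):
    """Return the list of (possibly zero) particle weights after the window."""
    if weight > w_high:                      # split into several unit-ish copies
        n_split = int(np.ceil(weight / w_survive))
        return [weight / n_split] * n_split
    if weight < w_low:                       # roulette: kill or promote
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]                          # inside the window: unchanged

# Expected total weight is conserved, e.g. for a low-weight particle:
trials = [sum(apply_weight_window(0.1)) for _ in range(100_000)]
print(np.mean(trials))   # ~0.1
```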

  1. Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm

    Science.gov (United States)

    Gubernatis, James

    2014-03-01

    A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point for developing real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to this new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that since the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.

  2. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  3. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but make geometrical description difficult. On the other hand, Monte Carlo codes can compute on precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry: by recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (in the case of the cold neutron source of the Orphee reactor). The short response time makes it possible to carry out parametric studies with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron-guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used in condensed-matter research, makes it possible to obtain the fluxes transmitted through the neutron guides, and thus the neutron flux received by the samples studied by condensed-matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the core, where neutrons are created, to the exit of the neutron guides, at the samples of matter. This complete calculation scheme is tested against ILL4 measurements of the flux in cold neutron guides. (authors)

  4. Simulation study on the behavior of X-rays and gamma rays in an inhomogeneous medium using the Monte Carlo technique

    International Nuclear Information System (INIS)

    Murase, Kenya; Kataoka, Masaaki; Kawamura, Masashi; Tamada, Shuji; Hamamoto, Ken

    1989-01-01

    A computer program based on the Monte Carlo technique was developed for analyzing the behavior of X-rays and gamma rays in an inhomogeneous medium. The statistical weight of a photon was introduced, and the survival biasing method was used to reduce the statistical error. This computer program has the mass energy-absorption and attenuation coefficients for 69 tissues and organs as a database file, and can be applied to various cases of inhomogeneity. The simulation and experimental results for the central-axis percent depth dose in an inhomogeneous phantom were in good agreement. This computer program will be useful for analyzing the behavior of X-rays and gamma rays in an inhomogeneous medium consisting of various tissues and organs, not only in radiotherapy treatment planning but also in diagnostic radiology and in the field of radiation protection. (author)
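
    A toy sketch of the survival biasing (implicit capture) idea used in the program: rather than killing a photon with the absorption probability at each collision, the photon always survives and its statistical weight is multiplied by the non-absorption probability. The collision count and probabilities below are invented for illustration; real codes also sample new directions and energies at each collision:

```python
import numpy as np

rng = np.random.default_rng(9)

p_scatter = 0.6      # illustrative non-absorption probability per collision
n_collisions = 5

def analog_history():
    for _ in range(n_collisions):
        if rng.random() > p_scatter:     # absorbed: history ends, scores 0
            return 0.0
    return 1.0                           # survived all collisions

def biased_history():
    weight = 1.0
    for _ in range(n_collisions):
        weight *= p_scatter              # carry survival probability as weight
    return weight                        # here deterministic: 0.6**5

n = 100_000
print(np.mean([analog_history() for _ in range(n)]))  # ~0.6**5 = 0.0778, noisy
print(biased_history())                               # exactly 0.6**5
```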

  5. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures, than the Laplace method requires. We test our Multilevel Monte Carlo technique on a numerical example of sensor-deployment design for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.

  6. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures, than the Laplace method requires. We test our Multilevel Monte Carlo technique on a numerical example of sensor-deployment design for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
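
    The nested integral discussed in both abstracts is the standard double-integral form of the expected information gain (notation assumed):

    $$ \mathrm{EIG} \;=\; \int_{\mathcal{Y}} \int_{\Theta} \log\frac{p(y\mid\theta)}{p(y)}\; p(y\mid\theta)\, p(\theta)\, d\theta\, dy , \qquad p(y) \;=\; \int_{\Theta} p(y\mid\theta')\, p(\theta')\, d\theta' . $$

    The evidence $p(y)$ is itself an integral, so a direct estimator needs a Monte Carlo loop inside a Monte Carlo loop; the multilevel strategy distributes the inner samples across accuracy levels to reduce the total cost for a given tolerance.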

  7. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormal distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  8. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered by the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes, we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (Percentage Depth Dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  9. Optimization of the Kinetic Activation-Relaxation Technique, an off-lattice and self-learning kinetic Monte-Carlo method

    International Nuclear Information System (INIS)

    Joly, Jean-François; Béland, Laurent Karim; Brommer, Peter; Mousseau, Normand; El-Mellouhi, Fedwa

    2012-01-01

    We present two major optimizations for the kinetic Activation-Relaxation Technique (k-ART), an off-lattice self-learning kinetic Monte Carlo (KMC) algorithm with on-the-fly event search that has been successfully applied to study a number of semiconducting and metallic systems. k-ART is parallelized in a non-trivial way: a master process uses several worker processes to perform independent searches for possible events, while all bookkeeping and the actual simulation are performed by the master process. Depending on the complexity of the system studied, the parallelization scales well for tens to more than one hundred processes. For dealing with large systems, we present a near order-1 implementation: techniques such as Verlet lists, cell decomposition and partial force calculations are implemented, and the CPU time per time step scales sublinearly with the number of particles, providing an efficient use of computational resources.

  10. EDITORIAL: International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification

    Science.gov (United States)

    Verhaegen, Frank; Seuntjens, Jan

    2008-03-01

    Monte Carlo particle transport techniques offer exciting tools for radiotherapy research, where they play an increasingly important role. Topics of research related to clinical applications range from treatment planning and motion and registration studies to brachytherapy, verification imaging and dosimetry. The International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification took place in a hotel in Montreal in French Canada, from 29 May to 1 June 2007, and was the third workshop to be held on a related topic, which now seems to have become a triennial event. About one hundred workers from many different countries participated in the four-day meeting. Seventeen experts in the field were invited to review topics and present their latest work. About half of the audience was made up of young graduate students. In a very full program, 57 papers were presented and 10 posters were on display during most of the meeting. On the evening of the third day a boat trip around the island of Montreal allowed participants to enjoy the city views and to sample the local cuisine. The topics covered at the workshop included the latest developments in the most popular Monte Carlo transport algorithms, fast Monte Carlo, statistical issues, source modeling, MC treatment planning, modeling of imaging devices for treatment verification, registration and deformation of images, and a sizeable number of contributions on brachytherapy. In this volume you will find 27 short papers resulting from the workshop on a variety of topics, some of them on very new subjects such as graphics processing units for fast computing, PET modeling, dual-energy CT, calculations in dynamic phantoms, tomotherapy devices... We acknowledge the financial support of the National Cancer Institute of Canada, the Institute of Cancer Research of the Canadian Institutes of Health Research, the Association Québécoise des Physicien(ne)s Médicaux Clinique, the Institute of Physics, and Medical

  11. Automated variance reduction of Monte Carlo shielding calculations using the discrete ordinates adjoint function

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1998-01-01

    Although the Monte Carlo method is considered to be the most accurate method available for solving radiation transport problems, its applicability is limited by its computational expense. Thus, biasing techniques, which require intuition, guesswork, and iterations involving manual adjustments, are employed to make reactor shielding calculations feasible. To overcome this difficulty, the authors have developed a method for using the S_N adjoint function for automated variance reduction of Monte Carlo calculations, through source biasing and consistent transport biasing with the weight-window technique. They describe the implementation of this method in the standard production Monte Carlo code MCNP and its application to a realistic calculation, namely, the reactor cavity dosimetry calculation. The computational effectiveness of the method is demonstrated and quantified through the increase in calculational efficiency. Important issues associated with this method and its efficient use are addressed and analyzed. Additional benefits in terms of the reduction in the time and effort required of the user are difficult to quantify but are possibly as important as the computational efficiency. In general, the automated variance reduction method presented is capable of increasing computational performance by factors on the order of thousands, while at the same time significantly reducing the requirements for user experience, time, and effort. Therefore, this method can substantially increase the applicability and reliability of Monte Carlo for large, real-world shielding applications.
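
    In the consistent adjoint-driven scheme described here, written in its standard form, the deterministic adjoint flux $\phi^{\dagger}$ fixes both the biased source and the weight-window targets:

    $$ \hat{q}(\vec{r},E) \;=\; \frac{\phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)}{\int \phi^{\dagger} q \; d\vec{r}\, dE} , \qquad w(\vec{r},E) \;\propto\; \frac{1}{\phi^{\dagger}(\vec{r},E)} , $$

    so particles are born preferentially in the regions and energies that are important to the detector response, with statistical weights kept consistent with the biased source so that the tally remains unbiased.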

  12. Applications Of Monte Carlo Radiation Transport Simulation Techniques For Predicting Single Event Effects In Microelectronics

    International Nuclear Information System (INIS)

    Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald

    2011-01-01

    MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 is comprised of the best available computational physics models for the transport of radiation through matter. In addition to basic radiation transport physics contained in the Geant4 core, MRED has the capability to track energy loss in tetrahedral geometric objects, includes a cross section biasing and track weighting technique for variance reduction, and additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.

  13. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  14. Monte Carlo calculations on a parallel computer using MORSE-C.G

    International Nuclear Information System (INIS)

    Wood, J.

    1995-01-01

    The general purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel computing transputer-based system having MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely, fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors is increased is investigated. (author)

  15. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    Science.gov (United States)

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. To improve computational efficiency for practical use in radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and of the computing time required by the different tallies applied in the model to speed up calculations.

  16. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    transport in complex geometries is straightforward, while even the simplest finite geometries (e.g., thin foils) are very difficult to treat with the transport equation. The main drawback of the Monte Carlo method lies in its random nature: all the results are affected by statistical uncertainties, which can be reduced at the expense of increasing the sampled population and, hence, the computation time. Under special circumstances, the statistical uncertainties may be lowered by using variance-reduction techniques. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory
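
    As a concrete illustration of the statistical uncertainty discussed above, the short simulation below estimates a one-dimensional integral and prints the central-limit standard error, which falls off as 1/sqrt(N); the integrand and sample sizes are arbitrary choices for the demonstration.

        import numpy as np

        rng = np.random.default_rng(0)
        # estimate I = ∫₀^π sin(x) dx = 2 with increasing sample sizes and
        # watch the statistical uncertainty shrink like 1/sqrt(N)
        for n in (100, 1_000, 10_000, 100_000, 1_000_000):
            x = rng.uniform(0.0, np.pi, n)
            f = np.pi * np.sin(x)               # f(x)/p(x) with p = 1/π on (0, π)
            sem = f.std(ddof=1) / np.sqrt(n)    # standard error of the mean
            print(f"N={n:>9,}   I ≈ {f.mean():.4f} ± {sem:.4f}")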

  17. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, H

    2000-02-01

    Modelling a physical system can be carried out either stochastically or deterministically. An example of the former method is the Monte Carlo technique, in which statistically approximate methods are applied to exact models. No transport equation is solved: individual particles are simulated and some specific aspect (tally) of their average behaviour is recorded. The average behaviour of the physical system is then inferred using the central limit theorem. In contrast, deterministic codes use mathematically exact methods that are applied to approximate models to solve the transport equation for the average particle behaviour. The physical system is subdivided into boxes in the phase-space system and particles are followed from one box to the next. The smaller the boxes, the better the approximations become. Although the Monte Carlo method has been used for centuries, its more recent manifestation really emerged from the Manhattan project of World War II. Its invention is thought to be mainly due to Metropolis, Ulam (through his interest in poker), Fermi, von Neumann and Richtmyer. Over the last 20 years or so, the Monte Carlo technique has become a powerful tool in radiation transport. This is due to users taking full advantage of richer cross section data, more powerful computers and Monte Carlo techniques for radiation transport, with high quality physics and better known source spectra. This method is a common sense approach to radiation transport and its success and popularity is quite often also due to necessity, because measurements are not always possible or affordable. In the Monte Carlo method, which is inherently realistic because nature is statistical, a more detailed physics is made possible by isolation of events, while rather elaborate geometries can be modelled. Provided that the physics is correct, a simulation is exactly analogous to an experimenter counting particles. In contrast to the deterministic approach, however, a disadvantage of the

  18. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
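
    To make the connection between random numbers and sampling concrete, the fragment below applies inverse-transform sampling, a standard textbook technique of the kind the abstract alludes to, to draw exponential variates from uniform random numbers; the rate parameter is an illustrative choice.

        import numpy as np

        rng = np.random.default_rng(0)
        # inverse-transform sampling: if U ~ Uniform(0, 1) and F is the target
        # CDF, then X = F⁻¹(U) has distribution F.  For Exp(lam):
        #   F(x) = 1 - exp(-lam*x)  →  F⁻¹(u) = -ln(1 - u)/lam
        lam = 2.0                                # illustrative rate
        u = rng.random(100_000)
        x = -np.log1p(-u) / lam
        print(x.mean(), 1.0 / lam)               # sample mean vs. exact mean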

  19. A Monte Carlo simulation of scattering reduction in spectral x-ray computed tomography

    DEFF Research Database (Denmark)

    Busi, Matteo; Olsen, Ulrik Lund; Bergbäck Knudsen, Erik

    2017-01-01

    In X-ray computed tomography (CT), scattered radiation plays an important role in the accurate reconstruction of the inspected object, leading to a loss of contrast between the different materials in the reconstruction volume and cupping artifacts in the images. We present a Monte Carlo simulation...

  20. Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure

    International Nuclear Information System (INIS)

    Yokohama, Noriya

    2013-01-01

    This report aimed to design the architecture and to measure the performance of a parallel computing environment for Monte Carlo simulation in particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than a single-thread architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost. (author)

  1. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'higher-order' techniques that can significantly improve the accuracy and reliability, and reduce the computational cost, of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.

  2. Monte Carlo simulation of neutron counters for safeguards applications

    International Nuclear Information System (INIS)

    Looman, Marc; Peerani, Paolo; Tagziria, Hamid

    2009-01-01

    MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which form the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.

  3. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  4. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT), providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm, although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two dimensional (single slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three dimensional reconstruction including compensation for out-of-plane scatter
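
    The EM (Maximum Likelihood) solution mentioned above is commonly written as the MLEM multiplicative update. The sketch below runs it on a tiny synthetic system; the system matrix, source, and iteration count are made-up stand-ins for a real SPECT model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_src, n_det = 16, 24                   # toy source pixels and detector bins
        A = rng.random((n_det, n_src))          # invented system matrix
        A /= A.sum(axis=0)                      # normalize detection probabilities
        x_true = np.zeros(n_src)
        x_true[5:9] = 100.0                     # a small hot region
        y = rng.poisson(A @ x_true)             # noisy acquired projections

        x = np.ones(n_src)                      # flat initial estimate
        sens = A.sum(axis=0)                    # sensitivity (column sums)
        for _ in range(200):                    # MLEM multiplicative updates
            x *= (A.T @ (y / np.clip(A @ x, 1e-12, None))) / sens
        print(np.round(x, 1))                   # recovers the hot region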

  5. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR-CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for a small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for the small signal stability application reproduce the IDEAL values from a sample size of 100 upwards, indicating that about 100 samples generated using the LHS method are good enough to produce reasonable results for practical purposes in small signal stability applications. It is also revealed that LHS has the least variance when the experiment is repeated 100 times compared to the SRS technique, which signifies the robustness of LHS over SRS. A sample size of 100 from LHS produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
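
    A minimal sketch of the two sampling schemes being compared is given below: simple random sampling draws each coordinate independently, while Latin hypercube sampling places exactly one sample in each equal-probability stratum per axis. The test response and sample sizes are illustrative, not the networks of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def srs(n, d):
            """Simple random sampling on the unit hypercube."""
            return rng.random((n, d))

        def lhs(n, d):
            """Latin hypercube: one point per equal-probability stratum per axis."""
            perms = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
            return (perms + rng.random((n, d))) / n

        def response(x):
            # toy stand-in for the eigenvalue-based response of the paper
            return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2

        reps, n, d = 200, 100, 2
        for name, sampler in (("SRS", srs), ("LHS", lhs)):
            means = [response(sampler(n, d)).mean() for _ in range(reps)]
            print(f"{name}: mean={np.mean(means):.4f}  var={np.var(means):.2e}")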

  6. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production accounts for the largest share of the computing resources currently in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  7. History and future perspectives of the Monte Carlo shell model -from Alphleet to K computer-

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Otsuka, Takaharu; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio; Abe, Takashi

    2013-01-01

    We report on the history of the development of the Monte Carlo shell model (MCSM). The MCSM was proposed in order to perform large-scale shell-model calculations which the direct diagonalization method cannot reach. Since 1999, PC clusters have been introduced for parallel computation of the MCSM. Since 2011 we have participated in the High Performance Computing Infrastructure Strategic Program and developed a new MCSM code for current massively parallel computers such as the K computer. We discuss future perspectives concerning a new framework and parallel computation of the MCSM, incorporating the conjugate gradient method and energy-variance extrapolation

  8. Exploring Various Monte Carlo Simulations for Geoscience Applications

    Science.gov (United States)

    Blais, R.

    2010-12-01

    Computer simulations are increasingly important in geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. Equidistributed quasi-random numbers (QRNs) can also be used in Monte Carlo simulations. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be implemented to significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on examples of geodetic applications of gravimetric terrain corrections and gravity inversion, conclusions and recommendations concerning their performance and general applicability are included.
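
    The two variance-reduction techniques named above can be demonstrated on a simple definite integral. In the sketch below, all three estimators target ∫₀¹ eˣ dx = e - 1; the integrand, the number of strata, and the importance density g(x) = (1 + x)/1.5 (sampled by inverting its CDF) are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000                         # samples per estimator (illustrative)
        exact = np.e - 1.0                 # I = ∫₀¹ eˣ dx

        # plain Monte Carlo with uniform samples
        plain = np.exp(rng.random(n))

        # stratified sampling: one uniform sample in each of k equal strata
        k = 100
        u = (np.repeat(np.arange(k), n // k) + rng.random(n)) / k
        strat = np.exp(u)

        # importance sampling from g(x) = (1 + x)/1.5, drawn by inverting
        # its CDF G(x) = (x + x²/2)/1.5, i.e. x = sqrt(1 + 3u) - 1
        v = rng.random(n)
        y = np.sqrt(1.0 + 3.0 * v) - 1.0
        imp = np.exp(y) * 1.5 / (1.0 + y)  # integrand divided by g

        for name, s in (("plain", plain), ("stratified", strat), ("importance", imp)):
            print(f"{name:>10}: {s.mean():.5f}   (exact {exact:.5f})")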

  9. Exploring pseudo- and chaotic random Monte Carlo simulations

    Science.gov (United States)

    Blais, J. A. Rod; Zhang, Zhan

    2011-07-01

    Computer simulations are an increasingly important area of geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer-generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be applied in most Monte Carlo simulations and significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on some practical examples of geodetic direct and inverse problems, conclusions and recommendations concerning their performance and general applicability are included.

  10. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    Science.gov (United States)

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
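
    The idea of sparsely sampling a waveform can be illustrated with a toy signal: a random subsample gives an unbiased level estimate with a CLT confidence interval, and random sampling avoids aliasing against periodic components. The waveform and sample budget below are invented for the demonstration and are not from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(10_000)
        # synthetic pressure-like signal: baseline + slow drift + pulsatile
        # component + noise (all invented for the demonstration)
        wave = (12.0 + 0.5 * np.sin(2 * np.pi * t / 1200)
                + 2.0 * np.sin(2 * np.pi * t / 80) + rng.normal(0, 0.3, t.size))

        k = 64                                    # sparse sample budget
        idx = rng.choice(t.size, size=k, replace=False)
        sub = wave[idx]
        ci = 1.96 * sub.std(ddof=1) / np.sqrt(k)  # CLT confidence half-width
        print(f"level ≈ {sub.mean():.2f} ± {ci:.2f}  (full mean {wave.mean():.2f})")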

  11. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  12. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Tamaki, Hitoshi [Department of Safety Research Technical Support, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kanai, Shigeru [Fuji Research Institute Corporation, Tokyo (Japan)

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be grace time between an initiating event and a resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. Firstly, the basic methods of component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced, and then basic performance characteristics, such as precision, convergence, and parallelization of the calculation, are shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red oil explosion in a German reprocessing plant model is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show further performance, and a proposal is made for introducing a new input-data format adapted to component Monte Carlo simulation. The present paper describes the calculational method, performance, applicability to a real scale, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method for obtaining a strict solution of the accident occurrence frequency without complex and laborious calculation is shown and compared with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)

  13. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Tamaki, Hitoshi; Kanai, Shigeru

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be grace time between an initiating event and a resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. Firstly, the basic methods of component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced, and then basic performance characteristics, such as precision, convergence, and parallelization of the calculation, are shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red oil explosion in a German reprocessing plant model is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show further performance, and a proposal is made for introducing a new input-data format adapted to component Monte Carlo simulation. The present paper describes the calculational method, performance, applicability to a real scale, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method for obtaining a strict solution of the accident occurrence frequency without complex and laborious calculation is shown and compared with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)

  14. MCT: a Monte Carlo code for time-dependent neutron thermalization problems

    International Nuclear Information System (INIS)

    Cupini, E.; Simonini, R.

    1974-01-01

    In the Monte Carlo simulation of pulse source experiments, the neutron energy spectrum, spatial distribution and total density may be required for a long time after the pulse. If the assemblies are very small, as often occurs in the cases of interest, sophisticated Monte Carlo techniques must be applied which force neutrons to remain in the system during the time interval investigated. In the MCT code a splitting technique has been applied to neutrons exceeding assigned target times, and we have found that this technique compares very favorably with more usual ones, such as the expected leakage probability, giving large gains in computational time and variance. As an example, satisfactory asymptotic thermal spectra with a neutron attenuation of 10^-5 were quickly obtained. (U.S.)

  15. Assessment of fusion facility dose rate map using mesh adaptivity enhancements of hybrid Monte Carlo/deterministic techniques

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.

    2014-01-01

    Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer

  16. Reactor perturbation calculations by Monte Carlo methods

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1965-09-01

    Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author)
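
    The correlation technique favoured in the report can be imitated in a few lines: estimating a small difference between two nearly identical problems with common random numbers gives a far smaller variance than differencing two independent runs. The slab-transmission model and cross sections below are illustrative assumptions, not the fast-reactor problem of the report.

        import numpy as np

        rng = np.random.default_rng(0)
        t, sigma, dsig = 2.0, 1.0, 0.05          # slab thickness and cross sections
        n = 100_000
        # a particle transmits when its free path -ln(U)/Σ exceeds t,
        # i.e. exactly when U < exp(-Σt)

        # (a) two independent runs, then difference
        u1, u2 = rng.random(n), rng.random(n)
        indep = (u1 < np.exp(-sigma * t)).astype(float) \
              - (u2 < np.exp(-(sigma + dsig) * t)).astype(float)

        # (b) correlated sampling: the same random numbers for both problems
        u = rng.random(n)
        corr = (u < np.exp(-sigma * t)).astype(float) \
             - (u < np.exp(-(sigma + dsig) * t)).astype(float)

        exact = np.exp(-sigma * t) - np.exp(-(sigma + dsig) * t)
        print(f"exact {exact:.5f}  indep {indep.mean():.5f}  corr {corr.mean():.5f}")
        print(f"variance of the mean: indep {indep.var()/n:.2e}  corr {corr.var()/n:.2e}")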

  17. Quantum Monte Carlo for atoms and molecules

    International Nuclear Information System (INIS)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions

  18. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang

    2014-01-01

    Soft computing techniques, which are based on the information processing of biological systems, are now widely used in the areas of pattern recognition, prediction and planning, as well as acting on the environment. Strictly speaking, soft computing is not a homogeneous body of concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, as well as learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  19. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production accounts for the largest share of the computing resources currently in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  20. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
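
    Of the algorithms listed in the abstract, rejection sampling is the simplest to show in full. The sketch below samples from an unnormalized Beta(2,5)-shaped density with a uniform proposal; the target and envelope constant are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x * (1.0 - x) ** 4     # unnormalized Beta(2,5) shape on (0,1)
        M = 0.082                            # envelope ≥ max f (≈ 0.0819 at x = 0.2)

        samples = []
        while len(samples) < 50_000:
            x = rng.random()                 # propose from Uniform(0, 1)
            if rng.random() * M < f(x):      # accept with probability f(x)/M
                samples.append(x)
        print(np.mean(samples), 2.0 / 7.0)   # Beta(2,5) mean is 2/(2+5)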

  1. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    Science.gov (United States)

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580

  2. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2013-01-01

    The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well
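
    A minimal version of the technique named above, assuming an illustrative geometric Brownian motion and a call-style payoff (neither of which is specified in the truncated abstract), looks like this:

        import numpy as np

        rng = np.random.default_rng(0)
        s0, mu, sig, T, K = 100.0, 0.05, 0.2, 1.0, 105.0   # illustrative parameters
        n_paths, n_steps = 50_000, 64
        dt = T / n_steps                                   # uniform time stepping

        s = np.full(n_paths, s0)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)     # Brownian increments
            s += mu * s * dt + sig * s * dw                # forward Euler step
        payoff = np.maximum(s - K, 0.0)
        sem = payoff.std(ddof=1) / np.sqrt(n_paths)
        print(f"E[payoff] ≈ {payoff.mean():.3f} ± {sem:.3f}")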

  3. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    Science.gov (United States)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
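
    For orientation, a bare-bones Hamiltonian Monte Carlo sampler for a two-dimensional Gaussian target is sketched below; the leapfrog step size and trajectory length are arbitrary illustrative settings, far simpler than a global PDF fit.

        import numpy as np

        rng = np.random.default_rng(0)
        U = lambda q: 0.5 * q @ q            # potential of a standard 2-D Gaussian
        grad_U = lambda q: q
        eps, L, n_iter = 0.2, 20, 5_000      # illustrative leapfrog settings

        q, chain = np.zeros(2), []
        for _ in range(n_iter):
            p = rng.normal(size=2)           # fresh momentum each iteration
            qn, pn = q.copy(), p.copy()
            pn -= 0.5 * eps * grad_U(qn)     # leapfrog half step
            for _ in range(L - 1):
                qn += eps * pn
                pn -= eps * grad_U(qn)
            qn += eps * pn
            pn -= 0.5 * eps * grad_U(qn)
            dH = U(qn) + 0.5 * pn @ pn - (U(q) + 0.5 * p @ p)
            if rng.random() < np.exp(-dH):   # Metropolis accept/reject
                q = qn
            chain.append(q.copy())
        chain = np.array(chain)
        print(chain.mean(axis=0), chain.var(axis=0))   # ≈ [0 0] and ≈ [1 1]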

  4. A conservative and a hybrid early rejection schemes for accelerating Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2014-03-17

    Molecular simulation can provide a more detailed description of fluid systems than experimental techniques. It can also replace equations of state; however, molecular simulation usually requires considerable computational effort. Several techniques have been developed to overcome such high computational costs. In this paper, two early rejection schemes, a conservative and a hybrid one, are introduced. In these two methods, undesired configurations generated by the Monte Carlo trials are rejected earlier than they would be when using conventional algorithms. The methods are tested for structureless single-component Lennard-Jones particles in both canonical and NVT-Gibbs ensembles. The computational time reduction for both ensembles is observed at a wide range of thermodynamic conditions. Results show that computational time savings are directly proportional to the rejection rate of Monte Carlo trials. The proposed conservative scheme has been shown to save up to 40% of the computational time in the canonical ensemble and up to 30% in the NVT-Gibbs ensemble when compared to standard algorithms. In addition, it preserves the exact Markov chains produced by the Metropolis scheme. Further enhancement for the NVT-Gibbs ensemble is achieved by combining this technique with the bond-formation early rejection scheme. The hybrid method achieves more than 50% saving of the central processing unit (CPU) time.
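
    The logic of a conservative early rejection can be reproduced in a toy Metropolis move for Lennard-Jones particles: the acceptance random number is drawn first, and the move is abandoned as soon as a rigorous lower bound on the energy change (each remaining LJ pair contributes at least -ε) already guarantees rejection. Everything below, including the parameters and the absence of periodic boundaries, is an illustrative simplification of the paper's schemes.

        import numpy as np

        rng = np.random.default_rng(0)
        EPS, SIG, BETA = 1.0, 1.0, 1.0            # LJ parameters, 1/kT (illustrative)

        def lj(r2):
            s6 = (SIG * SIG / r2) ** 3
            return 4.0 * EPS * (s6 * s6 - s6)     # never below -EPS

        def pair_e(pos, i, xi):
            d = np.delete(pos, i, axis=0) - xi
            return lj(np.sum(d * d, axis=1))

        def trial_move(pos, i, step=0.2):
            u = rng.random()                      # acceptance number drawn FIRST
            thresh = -np.log(u) / BETA            # reject exactly when dU > thresh
            u_old = pair_e(pos, i, pos[i]).sum()
            xi_new = pos[i] + step * rng.uniform(-1.0, 1.0, 3)
            e_new = pair_e(pos, i, xi_new)        # a real code computes these lazily
            run, n = 0.0, e_new.size
            for k, e in enumerate(e_new):
                run += e
                # each remaining pair contributes at least -EPS: a rigorous
                # lower bound on the final energy change
                if run - (n - k - 1) * EPS - u_old > thresh:
                    return False                  # conservative early rejection
            if run - u_old <= thresh:
                pos[i] = xi_new
                return True
            return False

        pos = rng.random((64, 3)) * 4.0           # 64 particles, no periodic boundaries
        print("accepted", sum(trial_move(pos, i) for i in range(64)), "of 64 moves")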

  5. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  6. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented techniques of design and programming. A flexible system for simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme is considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on electron history are grouped together using continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented

  7. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.; Soldevila, M.

    2003-01-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, a Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented

  8. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B; Soldevila, M [CEA Saclay, Dir. de l' Energie Nucleaire (DEN/DM2S/SERMA/LEPP), 91 - Gif sur Yvette (France)

    2003-07-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, a Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.

  9. Feasibility Study of Core Design with a Monte Carlo Code for APR1400 Initial core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinsun; Chang, Do Ik; Seong, Kibong [KEPCO NF, Daejeon (Korea, Republic of)

    2014-10-15

    The Monte Carlo calculation is becoming more popular and useful nowadays due to the rapid progress in computing power and parallel calculation techniques. There have recently been many attempts to analyze a commercial core with a Monte Carlo transport code using this enhanced computing capability. In this paper, a Monte Carlo calculation of the APR1400 initial core has been performed and the results are compared with those of a conventional deterministic code to assess the feasibility of core design using a Monte Carlo code. SERPENT, a 3D continuous-energy Monte Carlo reactor physics burnup calculation code, is used for this purpose, with the KARMA-ASTRA code system used as the deterministic code for comparison. A preliminary investigation of the feasibility of commercial core design with a Monte Carlo code was performed in this study. Simplified geometry modeling was used for the reactor core surroundings, and the reactor coolant model is based on a two-region model. The reactivity difference at the HZP ARO condition between the Monte Carlo code and the deterministic code is consistent, and the reactivity difference during depletion could be reduced by adopting a realistic moderator temperature. The reactivity difference calculated at the HFP, BOC, ARO equilibrium condition was 180 ± 9 pcm with the axial moderator temperature of the deterministic code. Computing time will be a significant burden for the application of a Monte Carlo code to commercial core design, even with parallel computing, because numerous core simulations are required for an actual loading pattern search. One remedy would be a combination of the Monte Carlo code and the deterministic code to generate the physics data. A comparison of physics parameters with sophisticated moderator temperature modeling and depletion will be performed in a further study.

  10. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid

    International Nuclear Information System (INIS)

    Thiam, Ch.O.

    2007-10-01

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (TPS), which needs to be precise and fast. The GATE platform for Monte-Carlo simulation, based on GEANT4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel the validation of the GATE platform for the modelling of electron and photon low-energy sources and the optimized use of grid infrastructures to reduce simulation computing time. GATE was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposit during electron transport in GEANT4. In order to validate GATE for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine-125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (AAPM). They show a good agreement between GATE, the reference studies and the AAPM recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the EGEE grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary for computing a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the GENIUS portal was developed to make easily available all the methods to submit and manage G

  11. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to the parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas

  12. Alpha particle density and energy distributions in tandem mirrors using Monte-Carlo techniques

    International Nuclear Information System (INIS)

    Kerns, J.A.

    1986-05-01

    We have simulated the alpha thermalization process using a Monte-Carlo technique, in which the alpha guiding center is followed between simulated collisions and Spitzer's collision model is used for the alpha-plasma interaction. Monte-Carlo techniques are used to determine the alpha radial birth position, the alpha particle position at a collision, and the angle scatter and dispersion at a collision. The plasma is modeled as a hot reacting core, surrounded by a cold halo plasma (T ≈ 50 eV). Alpha orbits that intersect the halo lose 90% of their energy to the halo electrons because of the halo drag, which is ten times greater than the drag in the core. The uneven drag across the alpha orbit also produces an outward, radial, guiding center drift. This drag drift is dependent on the plasma density and temperature radial profiles. We have modeled these profiles and have specifically studied a single-scale-length model, in which the density scale length (r_pD) equals the temperature scale length (r_pT), and a two-scale-length model, in which r_pD/r_pT = 1.1

  13. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method, based on Archimedes' principle. The water displacement method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision, based on the Monte Carlo method. Five images of an object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
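
    The core of such a Monte Carlo volume estimate is easy to sketch: sample points uniformly in a bounding box and count the fraction that fall inside the object. Below, an analytic ellipsoid stands in for the silhouette-intersection (visual hull) test of the paper.

        import numpy as np

        rng = np.random.default_rng(42)

        def inside(p):
            # stand-in for the multi-view silhouette (visual hull) test of the
            # paper: here an analytic ellipsoid with semi-axes 3, 2 and 1.5
            return (p[:, 0] / 3) ** 2 + (p[:, 1] / 2) ** 2 + (p[:, 2] / 1.5) ** 2 <= 1.0

        lo = np.array([-3.0, -2.0, -1.5])         # bounding box of the object
        hi = np.array([3.0, 2.0, 1.5])
        n = 200_000
        pts = lo + (hi - lo) * rng.random((n, 3)) # uniform points in the box
        vol = inside(pts).mean() * np.prod(hi - lo)
        print(vol, 4.0 / 3.0 * np.pi * 3 * 2 * 1.5)   # estimate vs. exact volume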

  14. Power distribution system reliability evaluation using dagger-sampling Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y.; Zhao, S.; Ma, Y. [North China Electric Power Univ., Hebei (China). Dept. of Electrical Engineering

    2009-03-11

    A dagger-sampling Monte Carlo simulation method was used to evaluate power distribution system reliability. The dagger-sampling technique records the failure of a component as an incident and determines its occurrence probability by generating incident samples from random numbers. The dagger-sampling technique was combined with the direct sequential Monte Carlo method to calculate average values of load point indices and system indices. Results of the two methods with simulation times of up to 100,000 years were then compared. The comparative evaluation showed that less computing time was required using the dagger-sampling technique due to its higher convergence speed: when simulation times were 1000 years, the dagger-sampling method required 0.05 seconds to accomplish an evaluation, while the direct method required 0.27 seconds. 12 refs., 3 tabs., 4 figs.
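
    In a common textbook formulation of dagger sampling (assumed here; the paper may differ in detail), the unit interval is partitioned into S sub-intervals of width p, and a single uniform number marks at most one failing trial out of S. Each trial remains Bernoulli(p) marginally, but the trials within a block are negatively correlated:

        import numpy as np

        rng = np.random.default_rng(7)
        p = 0.03                     # component failure probability per trial
        S = int(1.0 / p)             # 33 trials share one random number
        blocks = 5_000

        # partition (0,1) into S sub-intervals of width p; the single uniform
        # number marks at most one failing trial per block of S trials
        u = rng.random(blocks)
        j = np.floor(u / p).astype(int)
        fails = np.zeros((blocks, S), dtype=bool)
        hit = j < S                  # u beyond S*p produces no failure at all
        fails[np.nonzero(hit)[0], j[hit]] = True

        direct = rng.random(blocks * S) < p      # one random number per trial
        print(f"dagger p_hat = {fails.mean():.4f}   direct p_hat = {direct.mean():.4f}")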

  15. Verification of the Monte Carlo differential operator technique for MCNP

    International Nuclear Information System (INIS)

    McKinney, G.W.; Iverson, J.L.

    1996-02-01

    The differential operator perturbation technique has been incorporated into the Monte Carlo N-Particle transport code MCNP and will become a standard feature of future releases. This feature includes first and second order terms of the Taylor series expansion for response perturbations related to cross-section data (i.e., density, composition, etc.). Perturbation and sensitivity analyses can benefit from this technique in that predicted changes in one or more tally responses may be obtained for multiple perturbations in a single run. The user interface is intuitive, yet flexible enough to allow for changes in a specific microscopic cross section over a specified energy range. With this technique, a precise estimate of a small change in response is easily obtained, even when the standard deviation of the unperturbed tally is greater than the change. Furthermore, results presented in this report demonstrate that first and second order terms can offer acceptable accuracy, to within a few percent, for up to 20-30% changes in a response

  16. Application of Monte Carlo codes to neutron dosimetry

    International Nuclear Information System (INIS)

    Prevo, C.T.

    1982-01-01

    In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique

  17. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.

  18. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  19. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  20. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
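
    The fixed-population normalization that produces the eigenvalue bias discussed above can be seen in a toy generation-by-generation power iteration on a two-region fission matrix; the matrix, population size, and generation count below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        # F[i, j]: expected next-generation neutrons born in region i per
        # fission neutron born in region j (invented numbers)
        F = np.array([[0.60, 0.30],
                      [0.20, 0.75]])
        k_exact = max(abs(np.linalg.eigvals(F)))

        N = 500                                  # fixed population per generation
        region = rng.integers(0, 2, size=N)      # initial source guess
        ks = []
        for _ in range(100):
            nu = F[:, region].sum(axis=0)        # expected yield of each neutron
            n_off = rng.poisson(nu)              # sampled offspring counts
            ks.append(n_off.sum() / N)           # generation keff estimate
            p_top = np.repeat(F[0, region] / nu, n_off)
            born = (rng.random(p_top.size) > p_top).astype(int)
            # resample exactly N starters from the offspring: this fixed-
            # population renormalization is what introduces the bias
            region = rng.choice(born, size=N, replace=True)
        print(np.mean(ks[20:]), k_exact)         # converged estimate vs. eigenvalue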

  1. Monte Carlo treatment planning and high-resolution alpha-track autoradiography for neutron capture therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zamenhof, R.G.; Lin, K.; Ziegelmiller, D.; Clement, S.; Lui, C.; Harling, O.K.

    Monte Carlo simulations of thermal neutron flux distributions in a mathematical head model have been compared to experimental measurements in a corresponding anthropomorphic gelatin-based head phantom irradiated by a thermal neutron beam as presently available at the MITR-II Research Reactor. Excellent agreement between Monte Carlo and experimental measurements has encouraged us to employ the Monte Carlo simulation technique to approach treatment planning problems in neutron capture therapy. We have also implemented a high-resolution alpha-track autoradiography technique originally developed in our laboratory at MIT. Initial autoradiograms produced by this technique meet our expectations in terms of the high resolution available and the ability to etch tracks without concomitant destruction of stained tissue. Our preliminary results with computer-aided track distribution analysis indicate that this approach is very promising for quantifying boron distributions in tissue at the subcellular level with minimal operator effort.

  2. Development of 1-year-old computational phantom and calculation of organ doses during CT scans using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pan, Yuxi; Qiu, Rui; Ge, Chaoyong; Xie, Wenzhang; Li, Junli; Gao, Linfeng; Zheng, Junzheng

    2014-01-01

    With the rapidly growing number of CT examinations, the consequent radiation risk has attracted increasing attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom, and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure that the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique, and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses from axial scans are close to those from helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The results show that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and that the dose database provides a method for estimating 1-year-old patient doses in a variety of CT examinations. (paper)

  3. Automated importance generation and biasing techniques for Monte Carlo shielding calculations by the TRIPOLI-3 code

    International Nuclear Information System (INIS)

    Both, J.P.; Nimal, J.C.; Vergnaud, T.

    1990-01-01

    We discuss an automated biasing procedure for generating the parameters needed for efficient biased Monte Carlo shielding calculations. The biasing techniques considered here are the exponential transform and collision biasing, both derived from the concept of a biased game based on an importance function. We use a simple model of the importance function, with exponential attenuation as the distance to the detector increases. This importance function is generated on a three-dimensional mesh covering the geometry, using graph-theory algorithms. The scheme is currently being implemented in the third version of the neutron and gamma-ray transport code TRIPOLI-3. (author)
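
    A minimal sketch of this style of importance map in Python: a breadth-first search over mesh cells stands in for the graph-theory machinery, and the attenuation constant k is a placeholder, not a TRIPOLI-3 value:

        import numpy as np
        from collections import deque

        def importance_map(shape, detector_cell, k=0.5):
            """Importance ~ exp(-k * d), where d is the lattice distance to
            the detector cell, found by breadth-first search (a stand-in for
            the graph-theory algorithms mentioned in the abstract)."""
            dist = np.full(shape, np.inf)
            dist[detector_cell] = 0.0
            queue = deque([detector_cell])
            while queue:
                cell = queue.popleft()
                for axis in range(len(shape)):
                    for step in (-1, 1):
                        nb = list(cell)
                        nb[axis] += step
                        nb = tuple(nb)
                        inside = all(0 <= nb[i] < shape[i] for i in range(len(shape)))
                        if inside and dist[nb] > dist[cell] + 1:
                            dist[nb] = dist[cell] + 1
                            queue.append(nb)
            return np.exp(-k * dist)

        imp = importance_map((20, 20, 20), detector_cell=(10, 10, 19))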

  4. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the large number of parameters involved, conventional approaches are no longer sufficient. Statistical and computational techniques have therefore found many applications in manufacturing: modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final-year undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  5. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: yalcin@gazi.edu.tr; Gurler, O.; Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2007-10-15

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector for point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques, and the photon path lengths in the detector were determined by analytic equations depending on the photon directions. This is called the hybrid Monte-Carlo method, in which analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with previous work reported in the literature.

  6. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Kaynak, G.; Gundogdu, O.

    2007-01-01

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector for point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques, and the photon path lengths in the detector were determined by analytic equations depending on the photon directions. This is called the hybrid Monte-Carlo method, in which analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with previous work reported in the literature.
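
    A minimal sketch of the hybrid idea in Python for a point source on the detector axis; rays entering through the side wall are ignored for brevity and the numerical values are illustrative, so this is a simplification of the paper's model:

        import numpy as np

        def total_efficiency(n, d, R, H, mu, seed=0):
            """Point source a distance d from the front face, on the axis of
            a cylindrical detector (radius R, height H, linear attenuation
            coefficient mu).  Directions are sampled by Monte Carlo; path
            lengths come from closed-form geometry (the hybrid step)."""
            rng = np.random.default_rng(seed)
            cos_min = d / np.hypot(d, R)            # cone hitting the front face
            u = rng.uniform(cos_min, 1.0, n)        # cos(theta) inside the cone
            tan_t = np.sqrt(1.0 - u**2) / u
            depth = np.minimum(H, R / tan_t - d)    # axial depth before exit
            path = depth / u                        # chord length along the ray
            p_interact = 1.0 - np.exp(-mu * path)
            return 0.5 * (1.0 - cos_min) * p_interact.mean()

        # Illustrative numbers: 3"x3" NaI(Tl) crystal, mu ~ 0.33 /cm at 662 keV.
        print(total_efficiency(10**6, d=5.0, R=3.81, H=7.62, mu=0.33))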

  7. Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC) Method

    Directory of Open Access Journals (Sweden)

    Francesco Pellicani

    2016-05-01

    Hypersonic re-entry vehicle aerothermodynamic investigations provide fundamental information to other important disciplines, such as materials and structures, assisting the development of thermal protection systems (TPS) that are efficient and lightweight. In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a new numerical method for such studies has been introduced: the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention, thanks to increases in computer speed and to parallel computing. Nevertheless, further verification and validation efforts are needed for it to gain wider acceptance. In this study, the Monte Carlo simulators OpenFOAM and Sparta have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows, and the same will be done against experimental data in the near future. The results show the validity of the data obtained with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package, and they are compared with the guidelines derived from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This achievement prompts a reconsideration of the appropriate investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) can work, but with different computational effort.

  8. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J. W.; Downar, T. J.

    2007-01-01

    The interest in high-fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low-cost parallel platforms. In the research presented here, high-fidelity, multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and by solving the thermal-hydraulics equations using computational fluid dynamics. A computational tool based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the PERL programming language and couples MCNP5 with the commercial CFD code STAR-CD. MCNP uses a continuous-energy cross-section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross-section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated, and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information needed to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all handled through the program flow in McSTAR. Initial testing of the code was performed using a single pin cell and a 3X3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De

  9. GATE: Improving the computational efficiency

    International Nuclear Information System (INIS)

    Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.

    2006-01-01

    GATE is a software package dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of such simulations is their fundamental burden of computation time. This manuscript describes three different techniques for improving the efficiency of those simulations. Firstly, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. After this, the newly designed cluster version of the GATE software is described. The experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid concludes the description of the efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable.

  10. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    Science.gov (United States)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
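
    The stabilization in question is generically done by holding the running product of propagator matrices in a factored form so that exponentially separated scales never mix inside a single matrix. A sketch of that idea in Python (QR-based, illustrative only; not the authors' improved canonical-ensemble algorithm):

        import numpy as np

        def stabilized_product(matrices):
            """Accumulate M_k ... M_1 as Q @ diag(d) @ T with Q orthogonal:
            the exponentially large/small scales live in the vector d, so no
            single matrix ever overflows or loses the small scales."""
            n = matrices[0].shape[0]
            q, d, t = np.eye(n), np.ones(n), np.eye(n)
            for m in matrices:
                a = (m @ q) * d                  # multiply in the next factor
                q, r = np.linalg.qr(a)
                d = np.abs(np.diag(r))
                d[d == 0.0] = 1.0                # guard exactly singular steps
                t = (r / d[:, None]) @ t         # keep T at unit row scale
            return q, d, t                       # product ~= q @ np.diag(d) @ t

        rng = np.random.default_rng(1)
        mats = [0.1 * rng.normal(size=(4, 4)) for _ in range(120)]
        q, d, t = stabilized_product(mats)       # separated scales sit in d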

  11. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. On future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task.
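
    The gap problem described here is what is now called stream compaction. A minimal numpy sketch of the loop structure (illustrative only, not Heifetz's coding):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.zeros(100000)                  # positions of all test flights
        alive = np.ones(x.size, dtype=bool)   # completed flights leave gaps

        while alive.any():
            idx = np.flatnonzero(alive)       # compact: gather live flights only
            step = rng.exponential(1.0, idx.size)
            x[idx] += step                    # one fully vectorized transport step
            absorbed = rng.random(idx.size) < 0.3
            alive[idx[absorbed]] = False      # retire finished histories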

  12. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the

  13. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy; Uso de tecnicas Monte Carlo para determinacao de curvas de isodose em braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis for radiotherapy techniques. Institutes that work with high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves containing the necessary information for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which makes it possible to determine isodose curves around the linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions, for instance, the integration of functions with singularities in their domains. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)

  14. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy; Uso de tecnicas Monte Carlo para determinacao de curvas de isodose em braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis for radiotherapy techniques. Institutes that work with high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves containing the necessary information for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which makes it possible to determine isodose curves around the linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions, for instance, the integration of functions with singularities in their domains. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
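
    A greatly reduced sketch of the core calculation in Python, under strong assumptions (emission points sampled uniformly along the line source, a bare inverse-square kernel with no scatter or attenuation, so much simpler than ISODOSE itself); contouring the tallied grid then yields the isodose curves:

        import numpy as np

        rng = np.random.default_rng(0)
        z_src = rng.uniform(-1.0, 1.0, 2000)      # points on a 2 cm line source

        # Tally an inverse-square point kernel on an (r, z) grid around the
        # source (hypothetical: no scatter, no attenuation, arbitrary units).
        r = np.linspace(0.1, 3.0, 60)
        z = np.linspace(-3.0, 3.0, 120)
        R, Z = np.meshgrid(r, z)
        dose = np.zeros_like(R)
        for zs in z_src:
            dose += 1.0 / (R**2 + (Z - zs)**2)
        dose /= z_src.size
        # Isodose curves: e.g. matplotlib.pyplot.contour(R, Z, dose)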

  15. Efficiency of the delta-tracking technique for Monte Carlo calculations applied to neutron-transport simulations of the advanced Candu reactor design

    International Nuclear Information System (INIS)

    Arsenault, Benoit; Le Tellier, Romain; Hebert, Alain

    2008-01-01

    The paper presents the results of a first implementation of a Monte Carlo module in DRAGON Version 4 based on the delta-tracking technique. The Monte Carlo module uses the geometry and the self-shielded multigroup cross-sections calculated with a deterministic model. The module has been tested with three different configurations of an ACR™-type lattice. The paper also discusses the impact of this approach on the efficiency of the Monte Carlo module. (authors)
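
    Delta-tracking (Woodcock tracking) samples flight distances with a majorant cross section valid everywhere and accepts a collision as real with probability sigma_t(x)/sigma_maj, so no ray tracing against material boundaries is needed. A minimal 1-D sketch in Python (illustrative values, not the DRAGON implementation):

        import numpy as np

        rng = np.random.default_rng(0)

        def sigma_t(x):
            """Total cross section of a two-region 1-D slab (made-up values)."""
            return 0.5 if x < 2.0 else 1.5

        SIGMA_MAJ = 1.5                # majorant: max of sigma_t over the slab

        def next_collision(x0):
            """Fly with the majorant; accept a collision as real with
            probability sigma_t(x)/SIGMA_MAJ, else it is virtual and the
            flight continues -- no boundary ray tracing needed."""
            x = x0
            while True:
                x += rng.exponential(1.0 / SIGMA_MAJ)
                if x >= 4.0:
                    return None        # leaked out of the slab
                if rng.random() < sigma_t(x) / SIGMA_MAJ:
                    return x           # real collision site

        sites = [next_collision(0.0) for _ in range(5)]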

  16. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    Science.gov (United States)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as those for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  17. Vectorization and multitasking with a Monte-Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-04-01

    This paper summarizes two improvements of a Monte Carlo code achieved through vectorization and multitasking techniques. After a short presentation of the physical problem to be solved and a description of the main difficulties in producing efficient code, this paper introduces the vectorization principles employed and briefly describes how the vectorized algorithm works. Next, measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared. The second part of this paper is devoted to the multitasking technique. Starting from the standard multitasking tools available with FORTRAN on the CRAY X-MP/4, a multitasked algorithm and its measured speed-ups are presented. In conclusion, we show that vector and parallel computers offer a great opportunity for such Monte Carlo algorithms.

  18. Monte Carlo wave packet approach to dissociative multiple ionization in diatomic molecules

    DEFF Research Database (Denmark)

    Leth, Henriette Astrup; Madsen, Lars Bojer; Mølmer, Klaus

    2010-01-01

    A detailed description of the Monte Carlo wave packet technique applied to dissociative multiple ionization of diatomic molecules in short intense laser pulses is presented. The Monte Carlo wave packet technique relies on the Born-Oppenheimer separation of electronic and nuclear dynamics and provides a consistent theoretical framework for treating simultaneously both ionization and dissociation. By simulating the detection of continuum electrons and collapsing the system onto either the neutral, singly ionized or doubly ionized states in every time step, the nuclear dynamics can be solved. The computational effort is restricted, and the model is applicable to any molecular system where electronic Born-Oppenheimer curves, dipole moment functions, and ionization rates as a function of nuclear coordinates can be determined.

  19. Application of Macro Response Monte Carlo method for electron spectrum simulation

    International Nuclear Information System (INIS)

    Perles, L.A.; Almeida, A. de

    2007-01-01

    In past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as phase-space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the final state of the primary electron, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed agreement better than 6% in the spectral peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.

  20. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    International Nuclear Information System (INIS)

    Tagziria, H.

    2000-02-01

    Modelling a physical system can be carried out either stochastically or deterministically. An example of the former is the Monte Carlo technique, in which statistically approximate methods are applied to exact models. No transport equation is solved: individual particles are simulated and some specific aspect (tally) of their average behaviour is recorded. The average behaviour of the physical system is then inferred using the central limit theorem. In contrast, deterministic codes use mathematically exact methods applied to approximate models to solve the transport equation for the average particle behaviour. The physical system is subdivided into boxes in phase space, and particles are followed from one box to the next. The smaller the boxes, the better the approximations become. Although the Monte Carlo method has been used for centuries, its more recent manifestation really emerged from the Manhattan project of World War II. Its invention is thought to be mainly due to Metropolis, Ulam (through his interest in poker), Fermi, von Neumann and Richtmyer. Over the last 20 years or so, the Monte Carlo technique has become a powerful tool in radiation transport. This is due to users taking full advantage of richer cross-section data, more powerful computers and Monte Carlo techniques for radiation transport, with high-quality physics and better-known source spectra. This method is a common-sense approach to radiation transport, and its success and popularity are quite often also due to necessity, because measurements are not always possible or affordable. In the Monte Carlo method, which is inherently realistic because nature is statistical, more detailed physics is made possible by the isolation of events, while rather elaborate geometries can be modelled. Provided that the physics is correct, a simulation is exactly analogous to an experimenter counting particles. In contrast to the deterministic approach, however, a disadvantage of the
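
    The central-limit step mentioned above is, concretely, a tally mean with a standard error estimated from the per-history scores; a minimal sketch (toy scores):

        import numpy as np

        rng = np.random.default_rng(0)
        scores = rng.exponential(0.7, size=100000)     # toy per-history scores
        mean = scores.mean()
        stderr = scores.std(ddof=1) / np.sqrt(scores.size)
        print(f"tally = {mean:.4f} +/- {stderr:.4f}")  # CLT: ~68% coverage band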

  1. Patient dose in image guided radiotherapy: Monte Carlo study of the CBCT dose contribution

    OpenAIRE

    Leotta, Salvatore; Amato, Ernesto; Settineri, Nicola; Basile, Emilia; Italiano, Antonio; Auditore, Lucrezia; Santacaterina, Anna; Pergolizzi, Stefano

    2018-01-01

    Image Guided RadioTherapy (IGRT) is a technique whose use is growing thanks to the well-recognized gain in accuracy of dose delivery. However, multiple Cone Beam Computed Tomography (CBCT) scans add dose to patients, and this contribution has to be assessed and minimized. The aim of our work was to evaluate, through Monte Carlo simulations, organ doses in IGRT due to CBCT and therapeutic MV irradiation in the head-neck, thorax and pelvis regions. We developed a Monte Carlo simulation in GAMOS ...

  2. Test of some current ideas in quark confinement physics by Monte Carlo computations for finite lattices

    International Nuclear Information System (INIS)

    Mack, G.; Pietarinen, E.

    1980-06-01

    We present some new results of Monte Carlo computations for pure SU(2) Yang-Mills theory on a finite lattice. They support the consistency of asymptotic freedom with quark confinement, the validity of a block-cell picture, and ideas based on a vortex-condensation picture of quark confinement. (orig.)

  3. Monte Carlo simulations on a 9-node PC cluster

    International Nuclear Information System (INIS)

    Gouriou, J.

    2001-01-01

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method that reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time in the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC cluster, under the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, the last two adapted for running under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem to be simulated. (orig.)
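
    The essence of such a run is statistically independent random streams on each node whose partial tallies are combined at the end. A sketch with Python's multiprocessing standing in for the PVM layer the paper used (the tally itself is a toy):

        import numpy as np
        from multiprocessing import Pool

        def replica(seed, n=10**6):
            """One worker: an independent random stream, one partial tally."""
            rng = np.random.default_rng(seed)
            x, y = rng.random(n), rng.random(n)
            return 4.0 * (x * x + y * y < 1.0).mean()    # toy tally: pi

        if __name__ == "__main__":
            seeds = np.random.SeedSequence(42).spawn(9)  # one stream per node
            with Pool(9) as pool:
                parts = pool.map(replica, seeds)
            print(np.mean(parts))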

  4. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  5. Monte-Carlo computation of turbulent premixed methane/air ignition

    Science.gov (United States)

    Carmen, Christina Lieselotte

    The present work describes the results obtained by a time-dependent numerical technique that simulates the early flame development of a spark-ignited, premixed, lean, gaseous methane/air mixture, with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is utilized to simulate the phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions, with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, show excellent agreement with results obtained by other experimental and numerical researchers.

  6. New-generation Monte Carlo shell model for the K computer era

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio

    2012-01-01

    We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method, incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)

  7. A Monte Carlo program for generating hadronic final states

    International Nuclear Information System (INIS)

    Angelini, L.; Pellicoro, M.; Nitti, L.; Preparata, G.; Valenti, G.

    1991-01-01

    FIRST is a computer program to generate final states from high-energy hadronic interactions using the Monte Carlo technique. It is based on a theoretical model in which the high degree of universality in such interactions is related to the existence of highly excited quark-antiquark bound states, called fire-strings. The program handles the decay of both fire-strings and unstable particles produced in the intermediate states. (orig.)

  8. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
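
    The discrete-sampling scheme developed in the paper is not reproduced here, but Walker's alias method illustrates the kind of branch-free, constant-time-per-sample table lookup that suits vector hardware:

        import numpy as np

        def build_alias(p):
            """Walker's alias tables: after O(n) setup, each draw needs one
            uniform index and one comparison -- no data-dependent loops."""
            n = len(p)
            prob = np.asarray(p, dtype=float) * n
            alias = np.zeros(n, dtype=int)
            small = [i for i in range(n) if prob[i] < 1.0]
            large = [i for i in range(n) if prob[i] >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l
                prob[l] -= 1.0 - prob[s]
                (small if prob[l] < 1.0 else large).append(l)
            return prob, alias

        def sample(prob, alias, rng, size):
            i = rng.integers(len(prob), size=size)
            return np.where(rng.random(size) < prob[i], i, alias[i])

        rng = np.random.default_rng(0)
        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        draws = sample(prob, alias, rng, 10**6)   # empirical freqs match p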

  9. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance the sampling of particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path-length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walks. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)

  10. Development of ray tracing visualization program by Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Otani, Takayuki [Japan Atomic Energy Research Inst., Tokyo (Japan); Hasegawa, Yukihiro

    1997-09-01

    The ray tracing algorithm is a powerful method for synthesizing three-dimensional computer graphics. In conventional ray tracing algorithms, a view point is used as the starting point of ray tracing, from which the rays are tracked up to the light sources through the center points of the pixels on the view screen to calculate the pixel intensities. This approach, however, makes it difficult to define the configuration of the light source as well as to strictly simulate the reflections of the rays. To resolve these problems, we have developed a new ray tracing method which traces rays from a light source, not from a view point, with use of the Monte Carlo method, which is widely applied in nuclear fields. Moreover, we adopted variance reduction techniques in the program, with use of the specialized machine (Monte-4) for particle transport Monte Carlo, so that the computational time could be successfully reduced. (author)

  11. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  12. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, the characteristic function, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distributions, the Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
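
    Two of the sampling techniques listed, in minimal Python form (standard textbook constructions, not the author's notation):

        import numpy as np

        rng = np.random.default_rng(0)

        # Inversion: for p(x) = exp(-x) on x >= 0 the inverse CDF is
        # -log(1 - u), so a uniform u maps directly to an exponential x.
        u = rng.random(100000)
        x_inv = -np.log(1.0 - u)

        # Rejection: sample f(x) = 2x on [0, 1] under a flat envelope g = 2;
        # accept a uniform proposal y with probability f(y)/2 = y.
        y = rng.random(100000)
        x_rej = y[rng.random(100000) < y]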

  13. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)

    2015-09-15

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  14. Early tumour detection: a transillumination, time-resolved technique

    International Nuclear Information System (INIS)

    Behin-Ain, S.; Van Doorn, T.; Patterson, J.

    2000-01-01

    Research into transillumination techniques for the detection of tumours in soft tissue has been ongoing for over 70 years. The resolution and contrast, however, remain severely limited by scatter. Single-photon detection techniques, with ideally infinite extinction coefficients, have been proposed to accumulate sub-hertz transmitted-photon frequencies in the early part of a transmitted pulse. Computer-based simulations have been undertaken to examine the theoretical performance requirements of the detector and the resultant image qualities that may be expected with this imaging technique. This paper reports on the computational techniques required for implementing these simulations efficiently. Controlled Monte Carlo (CMC) and Convolution of Layers (CL) techniques were employed to constrain the photons to those having a greater chance of detection and hence enhance the detection statistics. Extrapolation techniques are proposed to reconstruct the early part of the temporal profile. Computational methods were implemented to evaluate path integrals, which are otherwise overly complex to evaluate. CMC and CL reduce the computational time by more than 10 orders of magnitude by tracking only those photons more likely to reach the detector. In the case of an optically thick medium with a high scattering coefficient, extrapolation techniques are used to reconstruct the early part of the temporal profile. Analytical solutions were found to be too involved even for the simplest geometries; however, CL and the implemented computational techniques make path integrals a useful analytical tool to complement full Monte Carlo techniques. Results have shown that these methods collectively enable the detection of small inhomogeneities within soft tissues. Reduced computation times and full reconstruction of the temporal profile of photons transmitted through an optically thick medium enable fast simulations of single-photon detectors to be achieved with the above described

  15. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    Science.gov (United States)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technique that combines physics, mathematics and chemistry. The molecular dynamics method is a computer-simulation experimental method and a powerful tool for studying condensed-matter systems. The technique not only yields the trajectories of the atoms but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze not only the microstructure, the motion of the particles and the macroscopic relationships between them and the material, but also, more conveniently, the relationship between the interactions and the macroscopic properties. The Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of micro-molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet method, the leap-frog method and the velocity Verlet method. At the same time, the method and principle of the Monte Carlo simulation are introduced. Finally, the similarities and differences between the Monte Carlo simulation and the molecular dynamics simulation are discussed.
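
    For concreteness, one velocity Verlet step and a bare Metropolis step, side by side (textbook forms of the two methods compared in the paper, on a toy harmonic potential):

        import numpy as np

        def velocity_verlet(x, v, force, dt, mass=1.0):
            """One velocity Verlet step: advance the position with the current
            force, then finish the velocity with the new force."""
            a = force(x) / mass
            x_new = x + v * dt + 0.5 * a * dt**2
            v_new = v + 0.5 * (a + force(x_new) / mass) * dt
            return x_new, v_new

        # MD side: harmonic oscillator F = -x; energy drift stays O(dt^2).
        x, v = 1.0, 0.0
        for _ in range(1000):
            x, v = velocity_verlet(x, v, lambda q: -q, dt=0.01)

        # MC side: Metropolis sampling of the same potential at beta = 1,
        # accepting a proposed move with probability min(1, exp(-beta * dE)).
        rng = np.random.default_rng(0)
        q, beta = 0.0, 1.0
        for _ in range(1000):
            q_new = q + rng.normal(0.0, 0.5)
            if rng.random() < np.exp(-beta * 0.5 * (q_new**2 - q**2)):
                q = q_new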

  16. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given.

  17. Computation of Ground-State Properties in Molecular Systems: Back-Propagation with Auxiliary-Field Quantum Monte Carlo.

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2017-11-14

    We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.

  18. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was

  19. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  20. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
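
    A generic sketch of the telescoping estimator in Python, on a toy problem (an ODE with a random initial condition, Euler steps halving per level) rather than the paper's PDE/SMC setting; the per-level sample counts are placeholders:

        import numpy as np

        rng = np.random.default_rng(0)

        def pair(l, n):
            """Coupled fine/coarse samples on level l for a toy problem:
            dX/dt = -X with X(0) = 1 + 0.1*W, Euler-integrated over [0, 1]
            with 2**(l+1) (fine) and 2**l (coarse) steps; the same W drives
            both members, so the correction P_l - P_{l-1} has low variance."""
            w = rng.normal(size=n)
            x0 = 1.0 + 0.1 * w

            def euler(steps):
                x = x0.copy()
                h = 1.0 / steps
                for _ in range(steps):
                    x = x - h * x
                return x

            fine = euler(2 ** (l + 1))
            coarse = euler(2 ** l) if l > 0 else 0.0
            return fine - coarse

        # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], spending
        # fewer samples on the expensive fine levels (counts are placeholders).
        n_per_level = [100000, 40000, 10000, 4000, 1000, 250]
        estimate = sum(pair(l, n_per_level[l]).mean() for l in range(6))
        print(estimate)   # -> approx exp(-1), the exact E[X(1)]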

  1. Non-periodic pseudo-random numbers used in Monte Carlo calculations

    Science.gov (United States)

    Barberis, Gaston E.

    2007-09-01

    The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.

  2. Non-periodic pseudo-random numbers used in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Barberis, Gaston E.

    2007-01-01

    The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and Information Technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, show that they lack periodicity up to 10^13 numbers, and show that they are not correlated.
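
    A minimal sketch of the idea, assuming the fully chaotic logistic map x → 4x(1−x); the seed, the transient skip and the uniformizing transform are illustrative choices, not necessarily the author's exact recipe.

        import math

        def logistic_stream(seed=0.123456789, n=5, skip=100):
            # Pseudo-random numbers from the fully chaotic logistic map.
            x = seed
            for _ in range(skip):                # discard start-up transients
                x = 4.0 * x * (1.0 - x)
            for _ in range(n):
                x = 4.0 * x * (1.0 - x)
                # The invariant density of the r = 4 map is arcsine-shaped;
                # this transform maps it to a uniform density on (0, 1).
                yield (2.0 / math.pi) * math.asin(math.sqrt(x))

        print(list(logistic_stream()))

    Restarting from the same seed reproduces the stream exactly, which is the deterministic repeatability the abstract emphasizes.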

  3. Concealed nuclear material identification via combined fast-neutron/γ-ray computed tomography (FNGCT): a Monte Carlo study

    Science.gov (United States)

    Licata, M.; Joyce, M. J.

    2018-02-01

    The potential of a combined and simultaneous fast-neutron/γ-ray computed tomography technique is described using Monte Carlo simulations. The technique is applied on the basis of a hypothetical tomography system comprising an isotopic radiation source (americium-beryllium) for the production of both fast neutrons and γ rays, and a number (13) of organic scintillation detectors for their detection. Via a combination of γ-ray and fast-neutron tomography, the potential to discern nuclear materials, such as compounds comprising plutonium and uranium, from substances that are used widely for neutron moderation and shielding is demonstrated. This discrimination is achieved on the basis of the difference in the attenuation characteristics of these substances. Discrimination of a variety of nuclear material compounds from shielding/moderating substances (the latter comprising lead or polyethylene, for example) is shown to be challenging when using either γ-ray or neutron tomography in isolation of one another. Much-improved contrast is obtained for a combination of these tomographic modalities. This method has potential applications for in-situ, non-destructive assessments in nuclear security, safeguards, waste management and related requirements in the nuclear industry.

  4. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

    Purpose. We propose a new method for efficient particle transport in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speed-up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gains. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.

  5. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  6. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. A test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
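
    The split/aggregate pattern can be sketched without Hadoop or GATE; the following hypothetical Python example uses multiprocessing to play the role of the Map and Reduce stages, with a toy tally in place of a GATE sub-macro.

        import multiprocessing as mp
        import random

        def run_sub_simulation(args):
            # Map step: one self-contained batch of histories with its own seed.
            seed, n_histories = args
            rng = random.Random(seed)
            return sum(rng.expovariate(1.0) for _ in range(n_histories))  # toy tally

        if __name__ == "__main__":
            total_histories, n_jobs = 100_000, 8
            jobs = [(seed, total_histories // n_jobs) for seed in range(n_jobs)]
            with mp.Pool(n_jobs) as pool:
                partial = pool.map(run_sub_simulation, jobs)   # Map
            print("aggregated tally:", sum(partial))           # Reduce

    Distinct seeds per sub-job keep the partial results statistically independent, mirroring the self-contained sub-macros described above.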

  7. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to evaluate the dose to the human body accurately, it is necessary to construct a more realistic computational phantom. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female called Rad-HUMAN was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  8. Gating Techniques for Rao-Blackwellized Monte Carlo Data Association Filter

    Directory of Open Access Journals (Sweden)

    Yazhao Wang

    2014-01-01

    Full Text Available This paper studies the Rao-Blackwellized Monte Carlo data association (RBMCDA) filter for multiple target tracking. The elliptical gating strategies are redesigned and incorporated into the framework of the RBMCDA filter. The obvious benefit is a reduction in time cost, because the data association procedure can be carried out with fewer validated measurements. In addition, the overlapped parts of neighboring validation regions are divided into several separate subregions according to the possible origins of the validated measurements. In these subregions, the measurement uncertainties can be taken into account more reasonably than with a simple elliptical gate. This helps achieve a higher tracking ability of the RBMCDA algorithm through a better approximation of the association prior. Simulation results are provided to show the effectiveness of the proposed gating techniques.
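
    A minimal sketch of a single elliptical gate test, assuming a two-dimensional measurement and a chi-square gate threshold; the covariance and threshold values below are illustrative.

        import numpy as np

        def in_elliptical_gate(z, z_pred, S, gamma=9.21):
            # Accept measurement z when the squared Mahalanobis distance of the
            # innovation v = z - z_pred (innovation covariance S) is below the
            # threshold; gamma = 9.21 is the chi-square 99% point for 2 d.o.f.
            v = z - z_pred
            return v @ np.linalg.solve(S, v) <= gamma

        S = np.array([[2.0, 0.3], [0.3, 1.0]])
        print(in_elliptical_gate(np.array([1.0, 0.5]), np.zeros(2), S))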

  9. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used to increase the relative efficiency.
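
    The second VRT (following only a fraction of the generated optical photons) can be sketched as below; the keep fraction and the toy detection probability are illustrative assumptions, and the weight scaling is what keeps the signal mean unchanged.

        import random

        def tally_with_thinning(n_photons, keep_fraction=0.1, rng=random.Random(1)):
            # Track only keep_fraction of the photons; survivors carry weight
            # 1/keep_fraction, so the expected tally matches the analog run.
            tally = 0.0
            for _ in range(n_photons):
                if rng.random() < keep_fraction:
                    weight = 1.0 / keep_fraction
                    detected = rng.random() < 0.3   # toy detection probability
                    tally += weight * detected
            return tally

        print(tally_with_thinning(100_000))   # ~30000 expected, from ~10% of the work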

  10. Monte Carlo study of the multiquark systems

    International Nuclear Information System (INIS)

    Kerbikov, B.O.; Polikarpov, M.I.; Zamolodchikov, A.B.

    1986-01-01

    Random walks have been used to calculate the energies of the ground states in systems of N = 3, 6, 9, 12 quarks. Multiquark states with N > 3 are unstable with respect to spontaneous dissociation into color-singlet hadrons. A modified Green's function Monte Carlo algorithm, which proved to be simpler and much more accurate than conventional few-body methods, has been employed. In contrast to other techniques, the same equations are used for any number of particles, while the computer time increases only linearly with the number of particles.

  11. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    International Nuclear Information System (INIS)

    Coulot, J

    2003-01-01

    Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, as improving computer performance allows one to obtain improved results in a dramatically reduced time. The goal of this book is to make, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. It is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitations of each technique involved in dosimetry (for instance activity quantitation). Nevertheless, there are some minor remarks to make.

  12. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  13. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm for implementing Wielandt's method, one of the acceleration techniques for deterministic source iteration methods, in Monte Carlo criticality calculations is developed, and the algorithm has been successfully implemented in the MCNP code. In this algorithm, some of the fission neutrons emitted during the random walk process are tracked within the current cycle, and thus the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. The computing time spent on one cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually results in an increase of the total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
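
    The effect of the Wielandt shift is easiest to see in a deterministic matrix analogue of the source iteration; the 2x2 operator and the shift below are illustrative assumptions, not the Monte Carlo implementation described in the paper.

        import numpy as np

        def wielandt_iteration(A, shift, iters=20):
            # Iterating with (shift*I - A)^-1 amplifies the mode nearest the
            # shift; a shift just above the dominant eigenvalue shrinks the
            # effective dominance ratio, so the source converges in few cycles.
            x = np.ones(A.shape[0])
            M = np.linalg.inv(shift * np.eye(A.shape[0]) - A)
            for _ in range(iters):
                x = M @ x
                x /= np.linalg.norm(x)
            return x @ A @ x, x      # Rayleigh-quotient eigenvalue estimate

        A = np.array([[0.9, 0.2], [0.1, 0.8]])       # eigenvalues 1.0 and 0.7
        print(wielandt_iteration(A, shift=1.05)[0])  # ~1.0 after a few iterations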

  14. Monte Carlo radiation transport: A revolution in science

    International Nuclear Information System (INIS)

    Hendricks, J.

    1993-01-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science

  15. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    KAUST Repository

    Kadoura, Ahmad Salim; Sun, Shuyu; Salama, Amgad

    2014-01-01

    A thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up …

  16. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malign cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis for radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows isodose curves to be determined around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs, in general, are very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed a quite detailed drawing of the isodose curves around linear sources. (author)

  17. Computational techniques in gamma-ray skyshine analysis

    International Nuclear Information System (INIS)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs

  18. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns in any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily changes in the stock price. Moreover, the optimal investment weights were determined for each of the selected stocks using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1,259,432 Rials at the 95% confidence level on a future day.
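
    A minimal sketch of the Monte Carlo VaR computation, assuming a multivariate normal model for daily returns; the portfolio weights and return moments below are made-up illustrations, not the study's data.

        import numpy as np

        rng = np.random.default_rng(42)

        def monte_carlo_var(weights, mu, cov, n_sims=100_000, level=0.95):
            # Simulate daily portfolio returns and read VaR off the loss quantile.
            scenarios = rng.multivariate_normal(mu, cov, n_sims)
            portfolio_returns = scenarios @ weights
            return -np.quantile(portfolio_returns, 1.0 - level)

        w = np.array([0.6, 0.4])                        # hypothetical two-asset portfolio
        mu = np.array([0.0005, 0.0003])                 # assumed mean daily returns
        cov = np.array([[4e-4, 1e-4], [1e-4, 2.5e-4]])  # assumed daily covariance
        print("95% one-day VaR (fraction of value):", monte_carlo_var(w, mu, cov))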

  19. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-d chord-based transport techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)

  20. On a New Variance Reduction Technique: Neural Network Biasing-a Study of Two Test Cases with the Monte Carlo Code Tripoli4

    International Nuclear Information System (INIS)

    Dumonteil, E.

    2009-01-01

    Various variance-reduction techniques are used in Monte Carlo particle transport. Most of them rely either on a hypothesis made by the user (parameters of the exponential biasing, mesh and weight bounds for weight windows, etc.) or on a previous calculation of the system with, for example, a deterministic solver. This paper deals with a new acceleration technique, namely, auto-adaptive neural network biasing. Indeed, instead of using any a priori knowledge of the system, it is possible, at a given point in a simulation, to use the Monte Carlo histories previously simulated to train a neural network, which, in turn, should be able to provide an estimation of the adjoint flux, which is then used for biasing the simulation. We describe this method, detail its implementation in the Monte Carlo code Tripoli4, and discuss its results on two test cases. (author)

  1. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid; Dosimetrie en radiotherapie et curietherapie par simulation Monte-Carlo GATE sur grille informatique

    Energy Technology Data Exchange (ETDEWEB)

    Thiam, Ch O

    2007-10-15

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (TPS), which needs to be precise and fast. The GATE platform for Monte-Carlo simulation, based on GEANT4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel a validation of the GATE platform for the modelling of electron and photon low-energy sources and the optimized use of grid infrastructures to reduce simulation computing times. GATE was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposit during electron transport in GEANT4. In order to validate GATE for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine 125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (AAPM). They show a good agreement between GATE, the reference studies and the AAPM recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in tumour volumes requires long computing times. In order to reduce them, we exploited the EGEE grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary to compute a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the GENIUS portal was developed to make easily available all the methods to submit and manage GATE simulations on the grid.

  2. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    Science.gov (United States)

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows the execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows the computational time of MC simulations to be reduced, achieving a simulation speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.

  3. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA-funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful ...

  4. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single-scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results for the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple-scattering processes. An excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the backscattered luminance estimated by the model differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and the experimental measurements is smaller than the experimental uncertainty

  5. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  6. Application of variance reduction techniques of Monte-Carlo method to deep penetration shielding problems

    International Nuclear Information System (INIS)

    Rawat, K.K.; Subbaiah, K.V.

    1996-01-01

    The general-purpose Monte Carlo code MCNP is being widely employed for solving deep-penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. The application of the geometry splitting and implicit capture methods is examined for deep-penetration problems of neutron, gamma and coupled neutron-gamma transport in thick shielding materials. The typical problems chosen are: i) a point isotropic monoenergetic 1 MeV gamma-ray source in a nearly infinite water medium, ii) a 252Cf spontaneous fission source at the centre of 140 cm thick water and concrete and iii) 14 MeV fast neutrons incident on the axis of a 100 cm thick concrete disk. (author). 7 refs., 5 figs
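
    Implicit capture is compact enough to sketch directly; the cross sections, slab thickness and isotropic-scattering model below are illustrative toy values, not the MCNP cases of the abstract.

        import random

        def slab_transmission(sigma_t, sigma_a, thickness, n, rng=random.Random(7)):
            # 1-D slab transmission with implicit capture: at each collision the
            # particle survives with its weight multiplied by the scattering
            # probability, instead of being killed outright by absorption.
            score = 0.0
            for _ in range(n):
                x, mu, w = 0.0, 1.0, 1.0
                while w > 1e-6:
                    x += mu * rng.expovariate(sigma_t)   # flight to next collision
                    if x >= thickness:
                        score += w                       # transmitted
                        break
                    if x < 0.0:
                        break                            # leaked out the front face
                    w *= 1.0 - sigma_a / sigma_t         # survival weighting
                    mu = rng.uniform(-1.0, 1.0)          # isotropic re-direction
            return score / n

        print(slab_transmission(sigma_t=1.0, sigma_a=0.5, thickness=2.0, n=20_000))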

  7. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1974-01-01

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  8. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathematical …

  9. RNA folding kinetics using Monte Carlo and Gillespie algorithms.

    Science.gov (United States)

    Clote, Peter; Bayegan, Amir H

    2018-04-01

    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
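
    A minimal Gillespie step on a toy folding landscape; the three-state energies, the RT value and the Metropolis-capped rates are illustrative assumptions rather than the Turner-model kinetics studied in the paper.

        import math, random

        rng = random.Random(0)

        energy = {"A": 0.0, "B": -1.0, "C": -2.5}             # toy states (kcal/mol)
        neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
        RT = 0.6

        def gillespie_step(state, t):
            # Rates k = min(1, exp(-dE/RT)); waiting time ~ Exp(total rate);
            # the jump target is chosen with probability proportional to its rate.
            rates = [min(1.0, math.exp(-(energy[y] - energy[state]) / RT))
                     for y in neighbors[state]]
            total = sum(rates)
            t += rng.expovariate(total)
            r, acc = rng.random() * total, 0.0
            for y, k in zip(neighbors[state], rates):
                acc += k
                if r <= acc:
                    return y, t
            return neighbors[state][-1], t

        state, t = "A", 0.0
        for _ in range(5):
            state, t = gillespie_step(state, t)
            print(f"t={t:.3f}  state={state}")

    A Monte Carlo (Metropolis-style) simulation would instead advance time by a fixed amount per attempted move, which is the per-step expected-time discrepancy the abstract quantifies.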

  10. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    Science.gov (United States)

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.

  11. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.

  12. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  13. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  14. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)

    2009-01-15

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (ANN - artificial neural networks, CBR - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific and industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  15. Development of phased mission analysis program with Monte Carlo method. Improvement of the variance reduction technique with biasing towards top event

    International Nuclear Information System (INIS)

    Yang Jinan; Mihara, Takatsugu

    1998-12-01

    This report presents a variance reduction technique to estimate the reliability and availability of highly complex systems over a phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique based on a concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition of components from operating states to failed states towards the closest cut set. Therefore a component failure can drive the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition, and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced into the PHAMMON code, and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that the addition of this technique incorporating the distance concept is an effective means of further reducing the variance. (author)

  16. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ∼375 kW, including a fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the …

  17. Monte Carlo calculations of electron transport on microcomputers

    International Nuclear Information System (INIS)

    Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.

    1990-01-01

    In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC to reduce the cost of Monte Carlo calculations on microcomputers. The Eltran2 program was then transferred to an IBM-compatible computer; Turbo BASIC and Microsoft Quick BASIC have been used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speed of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh-SE or Macintosh-Plus computer; higher values correspond proportionally to faster running times. Since Eltran2 is a one-dimensional program, it calculates the energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program called Eltran3 to compute more accurately the case with a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice as long as that of Eltran2 for a similar case

  18. Geometric allocation approaches in Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    Todo, S; Suwa, H

    2013-01-01

    The Markov chain Monte Carlo method is a versatile tool in statistical physics for evaluating multi-dimensional integrals numerically. For the method to work effectively, we must consider the following key issues: the choice of ensemble, the selection of candidate states, the optimization of the transition kernel, and the algorithm for choosing a configuration according to the transition probabilities. We show that unconventional approaches based on the geometric allocation of probabilities or weights can improve the dynamics and scaling of the Monte Carlo simulation in several aspects. In particular, the approach using an irreversible kernel can reduce or sometimes completely eliminate the rejection of trial moves in the Markov chain. We also discuss how the space-time interchange technique together with Walker's method of aliases can reduce the computational time, especially for the case where the number of candidates is large, such as models with long-range interactions
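
    Walker's method of aliases, mentioned above, reduces sampling from an n-outcome discrete distribution to O(1) work per draw after O(n) setup; a standard sketch (Vose's variant) follows, with an arbitrary example distribution.

        import random

        def build_alias(probs):
            # O(n) construction of Walker's probability/alias tables.
            n = len(probs)
            scaled = [p * n for p in probs]
            small = [i for i, s in enumerate(scaled) if s < 1.0]
            large = [i for i, s in enumerate(scaled) if s >= 1.0]
            prob, alias = [0.0] * n, [0] * n
            while small and large:
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:
                prob[i] = 1.0
            return prob, alias

        def alias_sample(prob, alias, rng=random.Random(3)):
            # O(1) draw: pick a column, then either keep it or take its alias.
            i = rng.randrange(len(prob))
            return i if rng.random() < prob[i] else alias[i]

        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        counts = [0] * 4
        for _ in range(100_000):
            counts[alias_sample(prob, alias)] += 1
        print(counts)   # roughly proportional to 1:2:3:4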

  19. Computational efficiency using the CYBER-205 computer for the PACER Monte Carlo Program

    International Nuclear Information System (INIS)

    Candelore, N.R.; Maher, C.M.; Gast, R.C.

    1985-09-01

    The use of the large memory of the CYBER-205 and its vector data handling logic produced speedups over scalar code ranging from a factor of 7 for unit cell calculations with relatively few compositions to a factor of 5 for problems having more detailed geometry and materials. By vectorizing the neutron tracking in PACER (the collision analysis remained in scalar code), an asymptotic value of 200 neutrons/cpu-second was achieved for a batch size of 10,000 neutrons. The complete vectorization of the Monte Carlo method as performed by Brown resulted in even higher speedups in neutron processing rates over the use of scalar code. Large speedups in neutron processing rates are beneficial not only to achieve more accurate results for the neutronics calculations which are routinely done using Monte Carlo, but also to extend the use of the Monte Carlo method to applications that were previously considered impractical because of large running times

  20. Calculation Aspects of the European Rebalanced Basket Option using Monte Carlo Methods: Valuation

    Directory of Open Access Journals (Sweden)

    CJ van der Merwe

    2012-06-01

    Full Text Available Extra premiums can be charged to a client to guarantee a minimum payout of a contract on a portfolio that gets rebalanced on a regular basis back to fixed proportions. The valuation of this premium can be recast as the pricing of a European put option on the underlying rebalanced portfolio. This article finds the most efficient estimators for the value of this path-dependent multi-asset put option using different Monte Carlo methods. With the help of a refined method, the computing time of the valuation decreased significantly. Furthermore, variance reduction techniques and quasi-Monte Carlo methods delivered more accurate and faster-converging estimates as well.
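
    A hedged sketch of one such estimator: pricing the put on a continuously rebalanced two-asset portfolio with antithetic variates as the variance reduction technique. All market parameters are invented for illustration, and the model (lognormal assets, fixed weights) is an assumption, not the article's exact setup.

        import numpy as np

        rng = np.random.default_rng(5)

        def rebalanced_basket_put(S0, w, r, sigma, T, K, n_steps=50, n_paths=50_000):
            # Each step the rebalanced portfolio grows by the weighted sum of the
            # per-asset growth factors; Z and -Z give an antithetic pair of paths.
            dt = T / n_steps
            value = np.full(n_paths, float(np.dot(w, S0)))
            value_anti = value.copy()
            for _ in range(n_steps):
                Z = rng.standard_normal((n_paths, len(S0)))
                for V, z in ((value, Z), (value_anti, -Z)):
                    growth = np.exp((r - 0.5 * sigma**2) * dt
                                    + sigma * np.sqrt(dt) * z)
                    V *= growth @ w
            payoff = 0.5 * (np.maximum(K - value, 0.0)
                            + np.maximum(K - value_anti, 0.0))
            return np.exp(-r * T) * payoff.mean()

        print(rebalanced_basket_put(S0=np.array([100.0, 100.0]),
                                    w=np.array([0.5, 0.5]), r=0.03,
                                    sigma=np.array([0.2, 0.3]), T=1.0, K=95.0))

    Averaging each path with its mirrored counterpart cancels much of the odd-order noise in the Gaussian draws, which is the mechanism behind the faster-converging estimates noted above.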

  1. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^(-3)) when using a single-level version of the adaptive algorithm to O((TOL^(-1) log(TOL))^2).
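
    The key computational object in this family of methods is the coupled level-difference estimator, sketched below for forward Euler applied to a geometric Brownian motion. The SDE, its coefficients and the sample allocation are illustrative, and the time steps here are uniform rather than the adaptive, path-dependent steps the paper constructs.

        import numpy as np

        rng = np.random.default_rng(11)

        def level_difference(level, n_paths, T=1.0, x0=1.0, a=0.05, b=0.2):
            # E[P_l - P_(l-1)] for dX = aX dt + bX dW with forward Euler; the
            # fine and coarse paths share Brownian increments, which keeps the
            # variance of the difference small.
            n_fine = 2**level
            dt = T / n_fine
            dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_fine))
            x_f = np.full(n_paths, x0)
            for k in range(n_fine):
                x_f = x_f + a * x_f * dt + b * x_f * dW[:, k]
            if level == 0:
                return x_f.mean()
            x_c = np.full(n_paths, x0)
            dW_c = dW[:, 0::2] + dW[:, 1::2]   # coarse increments: fine pairs summed
            for k in range(n_fine // 2):
                x_c = x_c + a * x_c * (2 * dt) + b * x_c * dW_c[:, k]
            return (x_f - x_c).mean()

        estimate = sum(level_difference(l, n_paths=20_000 // 2**l) for l in range(6))
        print(estimate)   # approximates E[X_T] = exp(a*T), about 1.0513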

  2. Creating and using a type of free-form geometry in Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Wessol, D.E.; Wheeler, F.J.

    1993-01-01

    While the reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, the computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicists used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientists relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientists to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists would only allow for efficient searches through analytical surfaces or objects. Over time, there appears to have been some merging of the variance reduction strategies between the two disciplines. This is an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model, permitting efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and with a model employing implicit (analytical) geometric representation.

  3. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
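
    The "direct" approach the abstract criticizes is easy to demonstrate on a toy branching process: follow the population in time and fit the logarithmic derivative. In this schematic model the growth constant is alpha = rate*(nu_bar - 1); as the abstract notes, the fitted slope becomes very noisy as nu_bar approaches 1 (near critical). All physics and parameters here are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        rate, nu_bar = 1.0, 1.1            # collision rate (1/s), mean secondaries
        alpha_true = rate * (nu_bar - 1.0) # analytic growth constant of the toy model

        dt, steps, n = 0.01, 400, 20_000
        pop = np.empty(steps)
        for i in range(steps):
            # each neutron collides in dt with probability rate*dt; a collision
            # replaces it with a Poisson(nu_bar) number of secondaries (toy physics)
            collided = rng.random(n) < rate * dt
            n = int((~collided).sum() + rng.poisson(nu_bar, int(collided.sum())).sum())
            pop[i] = n
        t = dt * np.arange(1, steps + 1)
        alpha_fit = np.polyfit(t, np.log(pop), 1)[0]   # logarithmic-derivative fit
        print(alpha_fit, "vs", alpha_true)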

  4. Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System

    Science.gov (United States)

    Karim, Julia Abdul

    2008-05-01

    The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers of the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to simulate problems accurately. The calculation is based on the continuous-energy method. The code adopts accurate physics models, geometry description and variance-reduction techniques, and can achieve computational speeds several times higher than the conventional scalar method on a vector supercomputer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.

  5. Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System

    International Nuclear Information System (INIS)

    Karim, Julia Abdul

    2008-01-01

    The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers of the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to simulate problems accurately. The calculation is based on the continuous-energy method. The code adopts accurate physics models, geometry description and variance-reduction techniques, and can achieve computational speeds several times higher than the conventional scalar method on a vector supercomputer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.

  6. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    International Nuclear Information System (INIS)

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-01-01

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry

  7. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in analyzing a large shielding system that includes radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still unsatisfactory. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)

  8. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    International Nuclear Information System (INIS)

    Krongkietlearts, K; Tangboonduangjit, P; Paisangittisakul, N

    2016-01-01

    In order to improve quality of life for cancer patients, radiation techniques are constantly evolving. In particular, the two modern techniques of intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) are quite promising. They comprise many small beams (beamlets) of various intensities to deliver the intended radiation dose to the tumor with minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm² field size at 100 cm SSD. The values obtained from the calculations and measurements are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector have been compared with those of the SFD and the Monte Carlo simulation, and the results demonstrate percentage differences of less than 2%. (paper)

  9. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    The Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One of the possibilities to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of the Higgs boson physics and the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and Kurchatov Institute Data Processing Center, including Tier...

  10. Monte Carlo applications at Hanford Engineering Development Laboratory

    International Nuclear Information System (INIS)

    Carter, L.L.; Morford, R.J.; Wilcox, A.D.

    1980-03-01

    Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport, including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques

  11. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  12. New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics

    International Nuclear Information System (INIS)

    Akhmatskaya, Elena; Reich, Sebastian

    2011-01-01

    We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system’s size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (Metropolis-corrected dissipative particle dynamics (DPD) method), and the generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high-performance computers, and comparisons with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
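
    A minimal generalized hybrid Monte Carlo sketch with the partial momentum update mentioned above, on a one-dimensional Gaussian target. This omits the modified (shadow) Hamiltonians and importance reweighting that distinguish GSHMC proper; the mixing angle phi, step size and trajectory length below are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Target: standard normal, U(q) = q^2/2, a stand-in for a molecular potential.
        U = lambda q: 0.5 * q * q
        gradU = lambda q: q

        def leapfrog(q, p, eps, L):
            p = p - 0.5 * eps * gradU(q)
            for _ in range(L - 1):
                q = q + eps * p
                p = p - eps * gradU(q)
            q = q + eps * p
            p = p - 0.5 * eps * gradU(q)
            return q, p

        phi, eps, L = 0.3, 0.2, 5      # small phi retains more dynamical information
        q, p = 0.0, rng.normal()
        samples = []
        for _ in range(20_000):
            # partial momentum refreshment: preserves the N(0,1) momentum marginal
            p = np.cos(phi) * p + np.sin(phi) * rng.normal()
            q_new, p_new = leapfrog(q, p, eps, L)
            dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
            if rng.random() < np.exp(-dH):
                q, p = q_new, p_new
            else:
                p = -p                 # momentum flip on rejection keeps detailed balance
            samples.append(q)
        print(np.mean(samples), np.var(samples))   # ~ 0, 1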

  13. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for processor quantity instead of quality, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card in computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  14. Parallel MCNP Monte Carlo transport calculations with MPI

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1996-01-01

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented a message-passing version of the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected.
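
    The history-parallel structure that makes Monte Carlo codes amenable to message passing can be mimicked in a few lines: independent random streams per worker, an embarrassingly parallel batch of histories each, and a single reduction at the end. The sketch below uses Python process-level parallelism rather than PVM/MPI, and estimates pi instead of a transport tally; it is an analogy to the scheme described above, not MCNP's implementation.

        import numpy as np
        from multiprocessing import Pool

        def run_batch(args):
            seed_seq, n = args
            rng = np.random.default_rng(seed_seq)    # independent stream per worker
            x, y = rng.random(n), rng.random(n)
            return int((x * x + y * y < 1.0).sum())  # this worker's tally

        if __name__ == "__main__":
            n_workers, n_per = 8, 1_000_000
            seeds = np.random.SeedSequence(42).spawn(n_workers)
            with Pool(n_workers) as pool:            # message passing reduced to a map
                hits = pool.map(run_batch, [(s, n_per) for s in seeds])
            print("pi estimate:", 4.0 * sum(hits) / (n_workers * n_per))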

  15. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometries. However, because Monte Carlo simulates each neutron history one by one, it takes a very long computing time if enough neutrons are used for high precision. Accordingly, methods that reduce the computing time are required. Monte Carlo codes are well suited to parallel calculation, since each neutron history is simulated independently. The parallelization of Monte Carlo codes, however, has traditionally been done with multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is investigated

  16. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met

  17. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  18. A Monte Carlo Green's function method for three-dimensional neutron transport

    International Nuclear Information System (INIS)

    Gamino, R.G.; Brown, F.B.; Mendelson, M.R.

    1992-01-01

    This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex yet treated exactly. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.
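
    Schematically, once the kernels are tallied, every new response evaluation is only a weighted sum, R_d = sum_i G[d,i]*S[i]. The sketch below uses random numbers in place of Monte Carlo-generated kernels to show why hundreds of source distributions cost almost nothing once G is in hand; sizes and values are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        n_src, n_det = 50, 3
        # G[d, i]: response of detector d per source neutron born in region i;
        # in the real code these kernels are tallied once by a 3-D Monte Carlo run
        G = rng.random((n_det, n_src)) * 1e-6   # placeholder kernel values

        # many candidate fission-source distributions (e.g. different core states);
        # each response is just a kernel-weighted sum, with no new transport runs
        for k in range(4):
            S = rng.random(n_src)
            S /= S.sum()                        # normalized source distribution
            print("state", k, "detector responses:", G @ S)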

  19. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method

    International Nuclear Information System (INIS)

    Pudjijanto, M.S.; Akhmad, Y.R.

    1998-01-01

    A computer program for gamma-ray shielding calculations using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood. The program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photons through an infinite planar shield of various thicknesses. A photon is followed until it escapes from the shield or its energy falls below the cutoff energy. The pair-production process is treated as pure absorption, i.e., the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, buildup factor, and photon spectra. The calculated buildup factors for slab lead and water media with a 6 MeV parallel-beam gamma source are in agreement with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media.
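
    A grossly simplified version of such a slab photon game, enough to show how a buildup-like factor (total over uncollided transmission) emerges: mono-energetic photons, isotropic scattering, constant cross sections, no energy loss or pair production. The scattering probability and slab thickness below are invented, not MONTERAY parameters.

        import numpy as np

        rng = np.random.default_rng(7)

        mu_t, p_scatter, thickness = 1.0, 0.6, 3.0  # total xs (1/mfp), scatter prob., slab (mfp)
        n = 50_000
        transmitted, uncollided = 0, 0
        for _ in range(n):
            x, dirx, collided = 0.0, 1.0, False
            while True:
                x += dirx * rng.exponential(1.0 / mu_t)     # flight to next collision
                if x >= thickness:
                    transmitted += 1
                    uncollided += (not collided)
                    break
                if x < 0.0 or rng.random() > p_scatter:
                    break                                   # leaked backwards, or absorbed
                dirx, collided = rng.uniform(-1.0, 1.0), True  # isotropic scatter
        print("buildup-like factor:", transmitted / uncollided)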

  20. Monte Carlo simulations of adult and pediatric computed tomography exams: Validation studies of organ doses with physical phantoms

    International Nuclear Information System (INIS)

    Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.

    2013-01-01

    Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT

  1. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are ''conventional'' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency

  2. Monte Carlo sampling strategies for lattice gauge calculations

    International Nuclear Information System (INIS)

    Guralnik, G.; Zemach, C.; Warnock, T.

    1985-01-01

    We have sought to optimize the elements of the Monte Carlo processes for thermalizing and decorrelating sequences of lattice gauge configurations and, for this purpose, to develop computational and theoretical diagnostics to compare alternative techniques. These have been applied to speed up the generation of random matrices, to compare heat-bath and Metropolis stepping methods, and to study autocorrelations of sequences in terms of the classical moment problem. The efficient use of statistically correlated lattice data is an optimization problem depending on the relation between the computer time to generate lattice sequences of sufficiently small correlation and the time to analyze them. We can solve this problem with the aid of a representation of autocorrelation data for various step lags as moments of positive definite distributions, using methods known for the moment problem to put bounds on statistical variances, in place of estimating the variances by too-lengthy computer runs
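
    The autocorrelation diagnostics mentioned above usually reduce to estimating an integrated autocorrelation time tau_int. A common windowed estimator (not the moment-problem bounds of this record) is sketched below and checked on an AR(1) chain whose exact value tau_int = (1+rho)/(1-rho) is known; the window constant c is a conventional choice.

        import numpy as np

        def tau_int(x, c=6.0):
            """Integrated autocorrelation time with a self-consistent window."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            f = np.fft.rfft(x, 2 * n)                  # FFT-based autocorrelation
            acf = np.fft.irfft(f * np.conjugate(f))[:n]
            acf /= acf[0]
            tau = 1.0
            for W in range(1, n):
                tau = 1.0 + 2.0 * acf[1:W + 1].sum()
                if W >= c * tau:                       # stop once the window covers c*tau
                    return tau
            return tau

        # AR(1) test chain: exact tau_int = (1 + 0.9) / (1 - 0.9) = 19
        rng = np.random.default_rng(3)
        rho, n = 0.9, 100_000
        x = np.empty(n); x[0] = 0.0
        for i in range(1, n):
            x[i] = rho * x[i - 1] + rng.normal()
        print(tau_int(x))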

  3. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods

    International Nuclear Information System (INIS)

    Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.

    2008-01-01

    The techniques for data processing, combined with the development of faster and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational models of exposure, RXD/EGS4 and CDO/EGS4, were used. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code for simulating thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

  4. Computational geometry for reactor applications

    International Nuclear Information System (INIS)

    Brown, F.B.; Bischoff, F.G.

    1988-01-01

    Monte Carlo codes for simulating particle transport involve three basic computational sections: a geometry package for locating particles and computing distances to regional boundaries, a physics package for analyzing interactions between particles and problem materials, and an editing package for determining event statistics and overall results. This paper describes the computational geometry methods in RACER, a vectorized Monte Carlo code used for reactor physics analysis, so that comparisons may be made with techniques used in other codes. The principal applications for RACER are eigenvalue calculations and power distributions associated with reactor core physics analysis. Successive batches of neutrons are run until convergence and acceptable confidence intervals are obtained, with typical problems involving >10⁶ histories. As such, the development of computational geometry methods has emphasized two basic needs: a flexible but compact geometric representation that permits accurate modeling of reactor core details and efficient geometric computation to permit very large numbers of histories to be run. The current geometric capabilities meet these needs effectively, supporting a variety of very large and demanding applications

  5. Progress on RMC: a Monte Carlo neutron transport code for reactor analysis

    International Nuclear Information System (INIS)

    Wang, Kan; Li, Zeguang; She, Ding; Liu, Yuxuan; Xu, Qi; Shen, Huayun; Yu, Ganglin

    2011-01-01

    This paper presents a new 3-D Monte Carlo neutron transport code named RMC (Reactor Monte Carlo code), specifically intended for reactor physics analysis. This code is being developed by the Department of Engineering Physics at Tsinghua University, written in C++ and Fortran 90, with the latest version being RMC 2.5.0. The RMC code uses the delta-tracking method to simulate neutron transport, the advantages of which include fast simulation in complex geometries and relatively simple handling of complicated geometrical objects. Some other techniques, such as a computational-expense-oriented method and a hash-table method, have been developed and implemented in RMC to speed up the calculation. To meet the requirements of reactor analysis, the RMC code provides calculational functions including criticality calculation, burnup calculation and kinetics simulation. In this paper, comparison calculations of criticality problems, burnup problems and transient problems are carried out using the RMC code and other Monte Carlo codes, and the results show that RMC performs quite well on these kinds of problems. Based on MPI, RMC succeeds in parallel computation and achieves a high speed-up. This code is still under intensive development and further work directions are mentioned at the end of this paper. (author)
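
    The delta-tracking idea used by RMC can be shown in one dimension: sample flight lengths from a majorant cross section and accept collisions with probability sigma_t(x)/sigma_maj, so no distance-to-boundary computation is ever needed. The cross sections and geometry below are invented, and absorption-only physics keeps the check against the analytic answer exp(-optical depth) trivial.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigma_t(x):                      # hypothetical two-region total xs (1/cm)
            return 0.5 if x < 2.0 else 2.0

        sigma_maj, slab = 2.0, 4.0           # majorant >= sigma_t everywhere; slab (cm)
        n, transmitted = 100_000, 0
        for _ in range(n):
            x = 0.0
            while True:
                x += rng.exponential(1.0 / sigma_maj)      # flight from the majorant
                if x >= slab:
                    transmitted += 1
                    break
                if rng.random() < sigma_t(x) / sigma_maj:  # real (not virtual) collision
                    break                                  # absorb, for simplicity
        print(transmitted / n, "vs analytic", np.exp(-(0.5 * 2.0 + 2.0 * 2.0)))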

  6. Computed Radiography: An Innovative Inspection Technique

    International Nuclear Information System (INIS)

    Klein, William A.; Councill, Donald L.

    2002-01-01

    Florida Power and Light Company's (FPL) Nuclear Division combined two diverse technologies to create an innovative inspection technique, Computed Radiography, that improves personnel safety and unit reliability while reducing inspection costs. This technique was pioneered in the medical field and applied in the Nuclear Division initially to detect piping degradation due to flow-accelerated corrosion. Component degradation can be detected by this additional technique. This approach permits FPL to reduce inspection costs, perform on-line examinations (no generation curtailment), and maintain or improve both personnel safety and unit reliability. Computed Radiography is a very versatile tool capable of other uses: - improving the external corrosion program by permitting inspections underneath insulation, and - diagnosing system and component problems, such as valve positions, without the need to shut down or disassemble the component. (authors)

  7. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  8. Monte Carlo Finite Volume Element Methods for the Convection-Diffusion Equation with a Random Diffusion Coefficient

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2014-01-01

    Full Text Available The paper presents a framework for the construction of a Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a parameter in high dimensions. Then we generate independent identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally the Monte Carlo (MC) method is used to compute the corresponding sample averages. The statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
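
    A sketch of the first two ingredients, truncated Karhunen-Loève sampling of the random coefficient and plain Monte Carlo averaging. The finite volume element solve is replaced here by a stand-in functional Q; the covariance model, correlation length and mode count are illustrative assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)

        m, M = 64, 8                         # grid points, retained KL modes
        s = np.linspace(0.0, 1.0, m)
        C = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.3)  # exponential covariance
        lam, phi = np.linalg.eigh(C)
        lam, phi = lam[::-1][:M], phi[:, ::-1][:, :M]       # keep the M largest modes

        def sample_field():
            xi = rng.normal(size=M)          # iid standard normal KL coordinates
            return phi @ (np.sqrt(lam) * xi)

        Q = lambda a: np.exp(a).mean()       # stand-in for the per-sample FVEM solve
        print(np.mean([Q(sample_field()) for _ in range(2000)]))  # MC sample average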

  9. Pediatric personalized CT-dosimetry Monte Carlo simulations, using computational phantoms

    International Nuclear Information System (INIS)

    Papadimitroulas, P; Kagadis, G C; Ploussi, A; Kordolaimi, S; Papamichail, D; Karavasilis, E; Syrgiamiotis, V; Loudos, G

    2015-01-01

    For the last 40 years, Monte Carlo (MC) simulations have served as a “gold standard” tool for a wide range of applications in the field of medical physics, and they tend to be essential in daily clinical practice. Regarding diagnostic imaging applications, such as computed tomography (CT), the assessment of deposited energy is of high interest, so as to better analyze the risks and the benefits of the procedure. In the last few years a big effort has been made towards personalized dosimetry, especially in pediatric applications. In the present study the GATE toolkit was used and computational pediatric phantoms were modeled for the assessment of CT examination dosimetry. The pediatric models used come from the XCAT and IT'IS series. The X-ray spectrum of a Brightspeed CT scanner was simulated and validated with experimental data. Specifically, a DCT-10 ionization chamber was irradiated twice using 120 kVp with 100 mAs and 200 mAs, for 1 s in one central axial slice (thickness = 10 mm). The absorbed dose was measured in air, resulting in differences lower than 4% between the experimental and simulated data. The simulations used ∼10¹⁰ primaries in order to achieve low statistical uncertainties. Dose maps were also saved for quantification of the absorbed dose in several critical organs of children during CT acquisition. (paper)

  10. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on the prospects for general Monte Carlo software. The content covers the cluster sampling method, the zero-variance technique, self-improved methods, and vectorized Monte Carlo methods.

  11. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multigroup Monte Carlo particle transport code MORSE was modified for high-performance computing on the Monte Carlo machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high-performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance through vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  12. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    Full Text Available The Multicomb variance reduction technique has been introduced into the direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.
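
    The Multicomb technique itself is a specific resampling scheme; the generic population-control idea it belongs to (splitting heavy statistical weights, Russian roulette for light ones, preserving the mean) can be sketched as follows. The weight-window bounds here are arbitrary, and this is not the Multicomb algorithm itself.

        import numpy as np

        rng = np.random.default_rng(0)

        def split_roulette(weights, w_lo=0.25, w_hi=2.0):
            """Keep statistical weights inside [w_lo, w_hi] without biasing the mean."""
            out = []
            for w in weights:
                if w > w_hi:                   # split heavy particles exactly
                    k = int(np.ceil(w / w_hi))
                    out.extend([w / k] * k)
                elif w < w_lo:                 # Russian roulette for light ones
                    if rng.random() < w / w_lo:
                        out.append(w_lo)       # survivor carries boosted weight
                else:
                    out.append(w)
            return out

        w_in = rng.exponential(1.0, size=100_000)
        w_out = split_roulette(w_in)
        print(sum(w_in), sum(w_out))           # total weight preserved in expectation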

  13. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computation are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad-field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed.
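
    The pencil-beam idea is a superposition: a broad field is the sum of narrow Gaussian kernels whose lateral spread grows with depth, which is why lateral electron diffusion and surface irregularities are handled naturally. The depth dependence of sigma below is a made-up stand-in for a Fermi-Eyges-type calculation, and the geometry is purely illustrative.

        import numpy as np

        def broad_profile(x, z, half_width=3.0, spacing=0.05):
            sigma = 0.1 + 0.08 * z             # toy lateral spread vs depth (cm)
            centers = np.arange(-half_width, half_width + spacing, spacing)
            g = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
            # sum of unit-fluence Gaussian pencil kernels, one per beamlet
            return g.sum(axis=1) * spacing / (np.sqrt(2.0 * np.pi) * sigma)

        x = np.linspace(-5.0, 5.0, 201)
        for z in (0.5, 2.0, 4.0):
            p = broad_profile(x, z)
            print(z, round(p[100], 3), round(p[160], 3))  # flat centre; softer penumbra with depth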

  14. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  15. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the studied area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by the geometric leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  16. Quantum Monte Carlo for vibrating molecules

    International Nuclear Information System (INIS)

    Brown, W.R.; Lawrence Berkeley National Lab., CA

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different trial wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks in order to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  17. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
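
    The Monte Carlo estimation of first-order variance-based sensitivity indices, against which the approximations above are compared, can be sketched with a Saltelli/Jansen-type estimator on a toy model. The function f and the sample sizes are illustrative, not the MAPK cascade of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):                              # toy response, stands in for the network model
            return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                # A with column i taken from B
            S1 = (var - 0.5 * np.mean((fB - f(ABi)) ** 2)) / var  # Jansen estimator
            print(f"first-order index S{i + 1} ~ {S1:.3f}")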

  18. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating keff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as βeff, leff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: Convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  19. MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2

    International Nuclear Information System (INIS)

    Briesmeister, J.F.

    1986-09-01

    This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs

  20. Estimation of computed tomography dose index in cone beam computed tomography: MOSFET measurements and Monte Carlo simulations.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald

    2010-05-01

    To address the lack of accurate dose estimation methods in cone beam computed tomography (CBCT), we performed point dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode, respectively. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, a MC model for Varian CBCT has been established, and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
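
    For reference, the weighted CTDI combines the center and periphery 100-mm pencil-chamber readings in the standard 1/3-2/3 proportion; the readings in the example call below are hypothetical, not the paper's measurements.

        def ctdi_w(ctdi100_center, ctdi100_periphery):
            """Weighted CTDI from the standard 100-mm chamber measurements (same units in/out)."""
            return ctdi100_center / 3.0 + 2.0 * ctdi100_periphery / 3.0

        # e.g. hypothetical head-phantom readings of 8.1 cGy (center) and 8.5 cGy (periphery)
        print(ctdi_w(8.1, 8.5))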

  1. Monte Carlo approaches for uncertainty quantification of criticality for system dimensions

    International Nuclear Information System (INIS)

    Kiedrowski, B.C.; Brown, F.B.

    2013-01-01

    One of the current challenges in nuclear engineering computations is performing uncertainty analysis for either calculations or experimental measurements. This paper focuses specifically on estimating the uncertainties arising from geometric tolerances. Two techniques for uncertainty quantification are studied. The first is the forward propagation technique, which can be thought of as a 'brute force' approach: uncertain system parameters are randomly sampled, the calculation is run, and uncertainties are found from the empirically obtained distribution of results. This approach need make no approximations in principle, but is very computationally expensive. The other approach investigated is the adjoint-based approach: system sensitivities are computed via a single Monte Carlo calculation and are used with a covariance matrix to provide a linear estimate of the uncertainty. Demonstration calculations are performed with the MCNP6 code for both techniques. The two techniques are tested on two cases: the first is a solid, bare cylinder of Pu metal, while the second is a can of plutonium nitrate solution. The results show that the forward and adjoint approaches agree in cases where the responses are not non-linearly correlated. In other cases, the uncertainties in the effective multiplication k disagree for reasons not yet known
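
    The two techniques compared in this record can be contrasted on a toy response: brute-force sampling of an uncertain parameter vector versus the linear "sandwich" estimate built from sensitivities and a covariance matrix. The response k(x) and covariance below are invented; as in the paper, the two answers agree when the response is nearly linear over the tolerance range.

        import numpy as np

        rng = np.random.default_rng(0)

        # toy response; x[0], x[1] play the role of uncertain dimensions
        k = lambda x: 1.0 + 0.05 * x[0] - 0.02 * x[1] + 0.01 * x[0] * x[1]
        x0 = np.array([1.0, 2.0])                 # nominal dimensions
        cov = np.diag([0.04, 0.09])               # tolerance covariance (invented)

        # 1) forward ("brute force") propagation: sample, evaluate, take the spread
        xs = rng.multivariate_normal(x0, cov, 100_000)
        sig_forward = np.std(k(xs.T))

        # 2) sensitivity route: sandwich rule sigma^2 = S^T C S (linear estimate)
        h = 1e-6
        S = np.array([(k(x0 + h * np.eye(2)[i]) - k(x0)) / h for i in range(2)])
        sig_sandwich = float(np.sqrt(S @ cov @ S))
        print(sig_forward, sig_sandwich)          # close, since k is nearly linear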

  2. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N., artificial neural networks; C.B.R., case-based reasoning; or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  3. Accuracy assessment of a new Monte Carlo based burnup computer code

    International Nuclear Information System (INIS)

    El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.

    2012-01-01

    Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using a VVER-1000 LEU benchmark assembly. ► Differences from the benchmark values were found to be within ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at the lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows straightforward and accurate calculations to be performed without having to use the calculated group fluxes for transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.
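
    The transmutation step at the heart of any burnup code is the solution of the Bateman equations dN/dt = A·N over a depletion interval; a toy three-nuclide chain solved with a matrix exponential (assuming SciPy is available) is sketched below. The cross sections, flux and decay constants are illustrative one-group values, not BUCAL1 or benchmark data.

        import numpy as np
        from scipy.linalg import expm

        phi = 1e14                                   # flux, n/cm^2/s (illustrative)
        sig = np.array([5.0, 50.0, 10.0]) * 1e-24    # capture cross sections, cm^2
        lam = np.array([0.0, 1e-5, 0.0])             # decay constants, 1/s
        A = np.diag(-(sig * phi + lam))              # removal on the diagonal
        A[1, 0] = sig[0] * phi                       # nuclide 0 captures into 1
        A[2, 1] = sig[1] * phi + lam[1]              # 1 transmutes/decays into 2 (toy chain)
        N0 = np.array([1.0e22, 0.0, 0.0])            # initial number densities, 1/cm^3
        N = expm(A * 30 * 86400.0) @ N0              # one 30-day depletion step
        print(N)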

  4. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The Monte Carlo integration method is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo integrals, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the generators are reasonably good, while the experimental results follow the expected statistical distribution laws. Further, some applications of Monte Carlo methods in physics are given. The physical problems are chosen so that the models have exact or approximate solutions available, against which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered
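
    The Metropolis method referred to above fits in a dozen lines; the sketch below samples a standard normal weight function and checks the first two moments. The step size and sample count are arbitrary choices, and the weight function is an illustrative stand-in for the ones used in the paper's experiments.

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis(w, x0, step, n):
            """Minimal Metropolis sampler for an unnormalized weight function w."""
            x, out = x0, np.empty(n)
            for i in range(n):
                x_new = x + rng.uniform(-step, step)     # symmetric proposal
                if rng.random() < w(x_new) / w(x):       # accept with min(1, ratio)
                    x = x_new
                out[i] = x
            return out

        samples = metropolis(lambda x: np.exp(-0.5 * x * x), 0.0, 1.0, 50_000)
        print(samples.mean(), samples.var())             # ~ 0, 1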

  5. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    Science.gov (United States)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
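
    For context, the sketch below shows the conventional rank-1 Sherman-Morrison update that the abstract's rank-k delayed scheme generalizes: replacing one row of a Slater-like matrix, computing the determinant ratio, and updating the inverse without refactorizing. The matrix size and the random "orbitals" are illustrative only.

```python
# Rank-1 Sherman-Morrison update for replacing one row of a matrix,
# as used for single-electron moves; sizes and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))              # stand-in Slater matrix
Minv = np.linalg.inv(M)

k = 2                                        # row (electron) being moved
r = rng.standard_normal(n)                   # proposed new row (orbital values)

# Acceptance-ratio ingredient: det(M')/det(M) = r . Minv[:, k]
R = r @ Minv[:, k]

# O(n^2) inverse update instead of an O(n^3) refactorization
u = Minv[:, k].copy()
v = (r - M[k, :]) @ Minv
Minv_updated = Minv - np.outer(u, v) / R

M_updated = M.copy()
M_updated[k, :] = r
assert np.allclose(Minv_updated, np.linalg.inv(M_updated))
assert np.isclose(R, np.linalg.det(M_updated) / np.linalg.det(M))
```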

  6. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations.

  7. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  8. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or 'tool of last resort' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

  9. Microwave transport in EBT distribution manifolds using Monte Carlo ray-tracing techniques

    International Nuclear Information System (INIS)

    Lillie, R.A.; White, T.L.; Gabriel, T.A.; Alsmiller, R.G. Jr.

    1983-01-01

    Ray-tracing Monte Carlo calculations have been carried out using an existing Monte Carlo radiation transport code to obtain estimates of the microwave power exiting the torus coupling links in EBT microwave manifolds. The microwave power loss and polarization at surface reflections were accounted for by treating the microwaves as plane waves reflecting off plane surfaces. Agreement on the order of 10% was obtained between the measured and calculated output power distributions for an existing EBT-S toroidal manifold. A cost-effective iterative procedure utilizing the Monte Carlo history data was implemented to predict design changes which could produce increased manifold efficiency and improved output power uniformity.

  10. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new concepts in process computer techniques has advanced in great steps. These steps fall into three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, e.g. colour display equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, a step forward has also been made in incorporating the process computer into the structure of the overall power plant control technology. (orig./LH)

  11. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed
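
    The accuracy/efficiency problem described above can be made concrete in a few lines: the relative error of a crude Monte Carlo failure-probability estimate scales as 1/sqrt(N·p), so it blows up as p gets small. The Bernoulli stand-in for a limit-state evaluation is our simplification.

```python
# Relative error of crude Monte Carlo grows as p shrinks: ~ 1/sqrt(N*p).
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
for p_true in (1e-1, 1e-3, 1e-5):
    hits = rng.random(N) < p_true    # Bernoulli stand-in for a limit-state check
    rel_err = np.sqrt((1.0 - p_true) / (N * p_true))
    print(f"p={p_true:.0e}  estimate={hits.mean():.2e}  "
          f"expected rel. error={rel_err:.1%}")
```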

  12. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
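
    A minimal sketch of the MLMC idea described above, with discretization levels standing in for RVE sizes or coarse-grid meshes: many cheap coarse samples plus a few expensive correction terms, telescoping to the fine-level expectation. The toy "solver" and the sample counts are our choices.

```python
# Minimal multilevel Monte Carlo estimator on a toy problem.
import numpy as np

rng = np.random.default_rng(0)

def solver(h, u):
    # Toy "solver": the discretization error shrinks linearly with h
    return np.sin(u) + h * u

h_levels = [0.5, 0.25, 0.125, 0.0625]
n_samples = [200_000, 50_000, 12_500, 3_125]   # fewer samples on finer (costlier) levels

estimate = 0.0
for level, (h, n) in enumerate(zip(h_levels, n_samples)):
    u = rng.uniform(0.0, 1.0, n)
    if level == 0:
        estimate += solver(h, u).mean()        # coarse baseline
    else:
        # correction term: the same random inputs on two adjacent levels
        estimate += (solver(h, u) - solver(h_levels[level - 1], u)).mean()

# The telescoping sum targets the finest level, i.e. E[sin U] plus a bias ~ h_min/2
print(f"MLMC estimate: {estimate:.4f}   E[sin U] = {1 - np.cos(1.0):.4f}")
```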

  13. The biological shield of a high-intensity spallation source: a monte Carlo design study

    International Nuclear Information System (INIS)

    Koprivnikar, I.; Schachinger, E.

    2004-01-01

    The design of high-intensity spallation sources requires the best possible estimates for the biological shield. The applicability of three-dimensional Monte Carlo simulation to the design of the biological shield of a spallation source is discussed. In order to achieve reasonable computing times along with acceptable accuracy, biasing techniques must be employed, and it was the main purpose of this work to develop a strategy for effective Monte Carlo simulation in shielding design. The most prominent MC computer codes, namely MCNPX and FLUKA99, have been applied to the same model spallation source (the European Spallation Source) and, on the basis of the derived strategies, the design and characteristics of the target station shield are discussed. It is also the purpose of the paper to demonstrate the application of the developed strategy to the design of beam lines and their shielding, using the target-moderator-reflector complex of the ESS as the primary particle source. (author)

  14. Training Software in Artificial-Intelligence Computing Techniques

    Science.gov (United States)

    Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene

    2005-01-01

    The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial provides step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, a user can interact with a robot and an obstacle course to verify how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. For the genetic-algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. For the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer having a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer having less memory, but some functions may not be executed properly.

  15. State of the art of Monte Carlo techniques for reliable activated waste evaluations

    International Nuclear Information System (INIS)

    Culioli, Matthieu; Chapoutier, Nicolas; Barbier, Samuel; Janski, Sylvain

    2016-01-01

    This paper presents the calculation scheme used in many studies to assess the activity inventories of French shutdown reactors (including Pressurized Water Reactors, Heavy Water Reactors, Sodium-cooled Fast Reactors and Natural Uranium Gas-cooled reactors or UNGG). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods and depletion calculations coupled on 3D meshes. All these techniques offer efficient and reliable evaluations of large-scale models with a high level of detail, reducing the risks of underestimation or excessive conservatism. (authors)

  16. Computer animation algorithms and techniques

    CERN Document Server

    Parent, Rick

    2012-01-01

    Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. ...

  17. A Monte Carlo-based treatment-planning tool for ion beam therapy

    CERN Document Server

    Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairan, A

    2013-01-01

    Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...

  18. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    Science.gov (United States)

    Smith, L. M.; Hochstedler, R. D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

  19. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    Smith, L.M.; Hochstedler, R.D.

    1997-01-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code)
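
    The linear-to-binary search replacement described in this record can be illustrated in a few lines (this is not the ITS source, and the energy grid below is made up):

```python
# Locating the energy-grid interval for a cross-section lookup:
# O(n) linear scan vs O(log n) binary search on a sorted grid.
import bisect

energy_grid = [1e-5, 1e-3, 0.1, 1.0, 5.0, 20.0]   # MeV, sorted (hypothetical)

def find_interval_linear(e):
    i = 0
    while i < len(energy_grid) - 2 and energy_grid[i + 1] <= e:
        i += 1
    return i                                      # O(n) comparisons

def find_interval_binary(e):
    return max(0, bisect.bisect_right(energy_grid, e) - 1)   # O(log n)

for e in (0.05, 2.0, 19.0):
    assert find_interval_linear(e) == find_interval_binary(e)
```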

  20. Computational intelligence techniques in health care

    CERN Document Server

    Zhou, Wengang; Satheesh, P

    2016-01-01

    This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.

  1. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z [Reading Hospital, West Reading, PA (United States); Gao, M [ProCure Treatment Centers, Warrenville, IL (United States)

    2014-06-01

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm^2, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.

  2. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Wang, Z; Gao, M

    2014-01-01

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm^2, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers
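
    The job layout in the abstract (ten million histories in 500k-event jobs on a 40-node cluster) amounts to a simple partitioning step, sketched below. The seed scheme and the round-robin node assignment are our assumptions, not details from the paper.

```python
# Partitioning 10M histories into 500k-event jobs on a 40-node cluster
# (numbers from the abstract); seeds and round-robin mapping are assumed.
total_events, events_per_job, n_nodes = 10_000_000, 500_000, 40

n_jobs = total_events // events_per_job              # -> 20 independent jobs
jobs = [{"job": j, "seed": 1_000 + j, "events": events_per_job, "node": j % n_nodes}
        for j in range(n_jobs)]

for job in jobs[:3]:
    print(job)
print(f"{n_jobs} independent jobs; results are merged and uncertainties combined afterwards")
```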

  3. Development of a Fully-Automated Monte Carlo Burnup Code Monteburns

    International Nuclear Information System (INIS)

    Poston, D.I.; Trellue, H.R.

    1999-01-01

    Several computer codes have been developed to perform nuclear burnup calculations over the past few decades. In addition, because of advances in computer technology, it has recently become more desirable to use Monte Carlo techniques for such problems. Monte Carlo techniques generally offer two distinct advantages over discrete ordinates methods: (1) the use of continuous-energy cross sections and (2) the ability to model detailed, complex, three-dimensional (3-D) geometries. These advantages allow more accurate burnup results to be obtained, provided that the user possesses the required computing power (which is required for discrete ordinates methods as well). Several linkage codes have been written that combine a Monte Carlo N-particle transport code (such as MCNP(TM)) with a radioactive decay and burnup code. This paper describes one such code that was written at Los Alamos National Laboratory: monteburns. Monteburns links MCNP with the isotope generation and depletion code ORIGEN2. The basis for the development of monteburns was the need for a fully automated code that could perform accurate burnup (and other) calculations for any 3-D system (accelerator-driven or a full reactor core). Before the initial development of monteburns, a list of desired attributes was made: (1) the code should be fully automated (that is, after the input is set up, no further user interaction is required); (2) the code should allow for the irradiation of several materials concurrently (each material is evaluated collectively in MCNP and burned separately in ORIGEN2); (3) the code should allow the transfer of materials (shuffling) between regions in MCNP; (4) the code should allow any materials to be added or removed before, during, or after each step in an automated fashion; (5) the code should not require the user to provide input for ORIGEN2 and should have minimal MCNP input file requirements (other than a working MCNP deck); and (6) the code should be relatively easy to use.

  4. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  5. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions

    International Nuclear Information System (INIS)

    Blatman, Geraud; Sudret, Bruno

    2010-01-01

    Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove hardly applicable to computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross-validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.

  6. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Blatman, Geraud, E-mail: geraud.blatman@edf.f [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); EDF, R and D Division - Site des Renardieres, F-77818 Moret-sur-Loing (France); Sudret, Bruno, E-mail: sudret@phimeca.co [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); Phimeca Engineering, Centre d'Affaires du Zenith, 34 rue de Sarlieve, F-63800 Cournon d'Auvergne (France)

    2010-11-15

    Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove hardly applicable to computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross-validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.
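
    For reference, the brute-force Monte Carlo baseline that the sparse PC approach replaces can be written compactly as a pick-freeze estimator of first-order Sobol' indices. The Ishigami test function and the sample sizes below are our choices, not the paper's.

```python
# Pick-freeze Monte Carlo estimator of first-order Sobol' indices
# on the Ishigami function (a standard sensitivity-analysis test case).
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

N, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
var = np.concatenate([fA, fB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "pick" input i, "freeze" the others
    S_i = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{i+1} ~ {S_i:.3f}")    # analytic values: 0.314, 0.442, 0.000
```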

  7. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimation using scikit-learn's KDTree, modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files backed up every iteration, user-defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
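
    A bare-bones likelihood-free rejection sketch conveys the core ABC idea (astroABC itself implements the far more efficient SMC variant with adaptive tolerances); the toy simulator, prior and tolerance below are ours:

```python
# Likelihood-free ABC rejection in miniature: keep parameters whose
# forward-model simulation lands close to the observed summary statistic.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(mu, n=50):
    """Simulator: summary statistic of data generated under parameter mu."""
    return rng.normal(mu, 1.0, n).mean()

observed, tolerance = 0.8, 0.05
draws = rng.uniform(-5.0, 5.0, 50_000)       # samples from a flat prior
accepted = [mu for mu in draws if abs(forward_model(mu) - observed) < tolerance]

print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} accepted draws")
```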

  8. Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries

    Directory of Open Access Journals (Sweden)

    Cameron Adler

    2018-04-01

    Cross-sectional imaging has become a critical aspect of the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging modality of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve contrast bolus, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that diagnose the injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, future directions of computed tomography, including a few techniques under development, are also discussed.

  9. SU-G-IeP2-13: Toward Heavy Ion Computed Tomography with Carbon Ions: A Monte Carlo Study

    Energy Technology Data Exchange (ETDEWEB)

    Shrestha, D; Qin, N; Zhang, Y; Jia, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: In the present Monte Carlo study, we investigated the use of carbon ions for computed tomography (CT) with a relatively low imaging dose. This will enable us to avoid any conversion of X-ray CT numbers to relative stopping power (or relative electron density) values and the associated uncertainties in carbon dose calculation. Methods: In the first stage, we studied the propagation of carbon nuclei through a water phantom using Geant4, specifically to understand their lateral displacement inside the phantom. In the second stage, we used our GPU-based Monte Carlo code, which has been cross-validated against Geant4, to create the 2D map of the water equivalent path length (WEPL) inside a human-head-size phantom by acquiring 240 projections, each 1.5° apart. Subsequently, the 3D relative electron density map of the phantom was reconstructed from the 2D WEPL map using the Algebraic Reconstruction Technique (ART) coupled with total variation (TV) minimization. Results: A high-quality image of the relative electron density inside the phantom was reconstructed by ART-TV. The mean relative error in the reconstructed image for a low-contrast object (PMMA) was about 1.74%. The delivered dose per scan at the center of the phantom was about 0.1 Gy. Conclusion: We have been able to obtain a 3D map of the electron density using a human-head-size phantom while keeping the delivered dose relatively low. Using the GPU capabilities of our simulation engine, we believe that real-time CT with carbon ions could become a reality in the future.
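
    The basic ART (Kaczmarz) update at the heart of the reconstruction step can be sketched as follows, without the TV-minimization coupling used in the study; the random system matrix stands in for ray-path lengths:

```python
# Basic ART (Kaczmarz) iteration on a synthetic consistent system;
# A plays the role of ray-path lengths, b of measured WEPL values.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_pixels = 300, 100
A = rng.random((n_rays, n_pixels))
x_true = rng.random(n_pixels)                # "electron density" map
b = A @ x_true

x = np.zeros(n_pixels)
lam = 0.5                                    # relaxation factor
for sweep in range(50):
    for i in range(n_rays):
        a = A[i]
        x += lam * (b[i] - a @ x) / (a @ a) * a   # project onto ray i's hyperplane

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```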

  10. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo method. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows: Section (1) introduces basic Monte Carlo and its classification in the respective fields; Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation; Section (3) presents the statistical uncertainty of Monte Carlo estimates; Section (4) describes in brief the importance of variance reduction techniques when sampling particles such as photons or neutrons in the process of radiation transport

  11. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The author shows how Monte Carlo techniques may be compared with other methods of solution of the same physical problem

  12. Early years of Computational Statistical Mechanics

    Science.gov (United States)

    Mareschal, Michel

    2018-05-01

    Evidence that a model of hard spheres exhibits a first-order solid-fluid phase transition was provided in the late fifties by two new numerical techniques known as Monte Carlo and Molecular Dynamics. This result can be considered as the starting point of computational statistical mechanics: at the time, it was a confirmation of a counter-intuitive (and controversial) theoretical prediction by J. Kirkwood. It necessitated an intensive collaboration between the Los Alamos team, with Bill Wood developing the Monte Carlo approach, and the Livermore group, where Berni Alder was inventing Molecular Dynamics. This article tells how it happened.

  13. Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.

    2006-01-01

    Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition, the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition, and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations

  14. Hybrid computer optimization of systems with random parameters

    Science.gov (United States)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  15. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of Monte Carlo code development is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  16. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
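
    The trade-off described above can be reproduced numerically with an AR(1) chain as a stand-in for correlated MCMC output: thinning by a moderate sampling interval shrinks the sample set drastically while the variance of the ensemble average barely moves. The parameters below are illustrative, not from the paper.

```python
# Variance of the ensemble average vs sampling interval, on an AR(1)
# surrogate for correlated MCMC samples; variance estimated by batch means.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.99, 1_000_000
eps = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
chain = np.empty(n)
chain[0] = 0.0
for t in range(1, n):                 # strongly correlated samples
    chain[t] = rho * chain[t - 1] + eps[t]

for interval in (1, 10, 100):
    kept = chain[::interval]
    n_batches = 50
    batches = kept[: len(kept) // n_batches * n_batches].reshape(n_batches, -1).mean(axis=1)
    var_of_mean = batches.var(ddof=1) / n_batches
    print(f"interval={interval:3d}  samples kept={len(kept):7d}  var(mean)~{var_of_mean:.2e}")
```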

  17. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  18. Computational statistics handbook with Matlab

    CERN Document Server

    Martinez, Wendy L

    2007-01-01

    Prefaces; Introduction (What Is Computational Statistics?, An Overview of the Book); Probability Concepts (Introduction, Probability, Conditional Probability and Independence, Expectation, Common Distributions); Sampling Concepts (Introduction, Sampling Terminology and Concepts, Sampling Distributions, Parameter Estimation, Empirical Distribution Function); Generating Random Variables (Introduction, General Techniques for Generating Random Variables, Generating Continuous Random Variables, Generating Discrete Random Variables); Exploratory Data Analysis (Introduction, Exploring Univariate Data, Exploring Bivariate and Trivariate Data, Exploring Multidimensional Data); Finding Structure (Introduction, Projecting Data, Principal Component Analysis, Projection Pursuit EDA, Independent Component Analysis, Grand Tour, Nonlinear Dimensionality Reduction); Monte Carlo Methods for Inferential Statistics (Introduction, Classical Inferential Statistics, Monte Carlo Methods for Inferential Statist...)

  19. Multi-Index Monte Carlo and stochastic collocation methods for random PDEs

    KAUST Repository

    Nobile, Fabio; Haji Ali, Abdul Lateef; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this talk we consider the problem of computing statistics of the solution of a partial differential equation with random data, where the random coefficient is parametrized by means of a finite or countable sequence of terms in a suitable expansion. We describe and analyze a Multi-Index Monte Carlo (MIMC) method and a Multi-Index Stochastic Collocation (MISC) method. The former is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2). In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally effective. We finally show the effectiveness of MIMC and MISC with some computational tests, including tests with an infinite countable number of random parameters.

  20. Multi-Index Monte Carlo and stochastic collocation methods for random PDEs

    KAUST Repository

    Nobile, Fabio

    2016-01-09

    In this talk we consider the problem of computing statistics of the solution of a partial differential equation with random data, where the random coefficient is parametrized by means of a finite or countable sequence of terms in a suitable expansion. We describe and analyze a Multi-Index Monte Carlo (MIMC) method and a Multi-Index Stochastic Collocation (MISC) method. The former is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2). In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally effective. We finally show the effectiveness of MIMC and MISC with some computational tests, including tests with an infinite countable number of random parameters.

  1. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    Science.gov (United States)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described. The simulation program and the techniques it uses are also described. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
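
    The random-censoring model described above is easy to reproduce: Weibull failure times are cut off by independent uniform censoring times, and only the minimum plus a failure/censoring flag is observed. The parameter values below are arbitrary.

```python
# One simulated sample under a uniform random-censoring model of
# Weibull failure times; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def censored_weibull_sample(n, shape, scale, censor_max):
    failures = scale * rng.weibull(shape, n)          # true failure times
    censor = rng.uniform(0.0, censor_max, n)          # random censoring times
    observed = np.minimum(failures, censor)
    failed = failures <= censor                       # True -> actual failure observed
    return observed, failed

t, failed = censored_weibull_sample(20, shape=1.5, scale=100.0, censor_max=150.0)
print(f"{failed.sum()} failures, {(~failed).sum()} censored observations")
```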

  2. Monte Carlo simulation and Gaussian broadening techniques for full energy peak of characteristic X-ray in EDXRF

    International Nuclear Information System (INIS)

    Li Zhe; Liu Min; Shi Rui; Wu Xuemei; Tuo Xianguo

    2012-01-01

    Background: The non-standard analysis (NSA) technique is one of the most important development directions of energy dispersive X-ray fluorescence (EDXRF). Purpose: This NSA technique is mainly based on Monte Carlo (MC) simulation and full-energy-peak broadening, which were studied preliminarily in this paper. Methods: An MC model was established for a Si-PIN based EDXRF setup, and flux spectra were obtained for an iron ore sample. Finally, the flux spectra were broadened using Gaussian broadening parameters calculated by a new method proposed in this paper, and the broadened spectra were compared with measured energy spectra. Results: The MC method can be used to simulate EDXRF measurements and can automatically correct the matrix effects among elements. Peak intensities can be obtained accurately by using the proposed Gaussian broadening technique. Conclusions: This study provides a key technique for EDXRF to achieve advanced NSA technology. (authors)
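
    A minimal sketch of full-energy-peak Gaussian broadening: each simulated line is smeared with an energy-dependent width. The FWHM model and the Fe K-line energies are assumptions for illustration, not the paper's calibration.

```python
# Energy-dependent Gaussian broadening of a simulated line spectrum.
import numpy as np

def broaden(grid, counts, a=0.10, b=0.02):
    """Smear lines on `grid` (keV); FWHM(E) = a + b*sqrt(E) is an assumed detector model."""
    out = np.zeros_like(counts, dtype=float)
    for e0, c in zip(grid[counts > 0], counts[counts > 0]):
        sigma = (a + b * np.sqrt(e0)) / 2.355      # FWHM -> standard deviation
        g = np.exp(-0.5 * ((grid - e0) / sigma) ** 2)
        out += c * g / g.sum()                     # preserve each peak's total counts
    return out

grid = np.linspace(1.0, 20.0, 2000)                # keV
counts = np.zeros_like(grid)
counts[np.searchsorted(grid, [6.40, 7.06])] = 1000.0   # approx. Fe K-alpha / K-beta
spectrum = broaden(grid, counts)
print("total counts preserved:", spectrum.sum())
```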

  3. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  4. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    Science.gov (United States)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.

  5. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S

  6. Monte Carlo computations for lattice gauge theories with finite gauge groups

    International Nuclear Information System (INIS)

    Rabbi, G.

    1980-01-01

    Recourse to Monte Carlo simulations for obtaining numerical information about lattice gauge field theories is suggested by the fact that, after a Wick rotation of time to imaginary time, the weighted sum over all configurations used to define quantum expectation values becomes formally identical to a statistical sum of a four-dimensional system. Results obtained in a variety of Monte Carlo investigations are described

  7. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  8. Operator support system using computational intelligence techniques

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2015-01-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system displays the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  9. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system displays the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  10. Magnetism of iron and nickel from rotationally invariant Hirsch-Fye quantum Monte Carlo calculations

    Science.gov (United States)

    Belozerov, A. S.; Leonov, I.; Anisimov, V. I.

    2013-03-01

    We present a rotationally invariant Hirsch-Fye quantum Monte Carlo algorithm in which the spin rotational invariance of Hund's exchange is approximated by averaging over all possible directions of the spin quantization axis. We employ this technique to perform benchmark calculations for the two- and three-band Hubbard models on the infinite-dimensional Bethe lattice. Our results agree quantitatively well with those obtained using the continuous-time quantum Monte Carlo method with rotationally invariant Coulomb interaction. The proposed approach is employed to compute the electronic and magnetic properties of paramagnetic α iron and nickel. The obtained Curie temperatures agree well with experiment. Our results indicate that the magnetic transition temperature is significantly overestimated by using the density-density type of Coulomb interaction.

  11. Estimation of whole-body radiation exposure from brachytherapy for oral cancer using a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Ozaki, Y.; Watanabe, H.; Kaida, A.; Miura, M.; Nakagawa, K.; Toda, K.; Yoshimura, R.; Sumi, Y.; Kurabayashi, T.

    2017-01-01

    Early stage oral cancer can be cured with oral brachytherapy, but whole-body radiation exposure from this treatment has not been previously studied. Recently, the International Commission on Radiological Protection (ICRP) recommended the use of ICRP phantoms to estimate radiation exposure from external and internal radiation sources. In this study, we used a Monte Carlo simulation with ICRP phantoms to estimate whole-body exposure from oral brachytherapy. We used the Particle and Heavy Ion Transport code System (PHITS) to model oral brachytherapy with {sup 192}Ir hairpins and {sup 198}Au grains and to perform a Monte Carlo simulation on the ICRP adult reference computational phantoms. To confirm the simulations, we also computed local dose distributions from these small sources and compared them with the results from the Oncentra manual Low Dose Rate Treatment Planning (mLDR) software used in day-to-day clinical practice. We successfully obtained data on absorbed dose for each organ in males and females. Sex-averaged equivalent doses were 0.547 and 0.710 Sv with {sup 192}Ir hairpins and {sup 198}Au grains, respectively. Simulation with PHITS was reliable when compared with an alternative computational technique using mLDR software. We conclude that the absorbed dose for each organ and whole-body exposure from oral brachytherapy can be estimated with Monte Carlo simulation using PHITS on ICRP reference phantoms. Effective doses for patients with oral cancer were obtained.

  12. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper works toward an understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increasing the number of individual Monte Carlo histories; 2) increasing the number of time steps; 3) running additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
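
    To illustrate the batch idea in the last sentence, the following Python sketch runs several independent toy depletion histories (a single nuclide burned with a noisy one-group reaction rate standing in for the Monte Carlo transport uncertainty) and estimates the overall statistical error from the spread between batches. The depletion model and all numbers are illustrative assumptions, not the authors' code.

    import numpy as np

    def toy_depletion(rng, n_steps=10, dt=1.0, n0=1.0, rate0=0.05, noise=0.02):
        # One depletion history: dN/dt = -rate * N, with a noisy reaction
        # rate per step mimicking Monte Carlo transport uncertainty.
        n = n0
        for _ in range(n_steps):
            rate = rate0 * (1.0 + noise * rng.standard_normal())
            n *= np.exp(-rate * dt)      # exact single-step solution
        return n

    batches = 25
    seeds = np.random.SeedSequence(42).spawn(batches)   # independent streams
    finals = np.array([toy_depletion(np.random.default_rng(s)) for s in seeds])

    mean = finals.mean()
    stderr = finals.std(ddof=1) / np.sqrt(batches)   # overall statistical error
    print(f"end-of-life density: {mean:.5f} +/- {stderr:.5f}")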

  13. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
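
    As a concrete illustration of the importance-sampling point above, this hedged Python sketch integrates f(x) = exp(-10x) on [0, 1] with plain uniform sampling and with samples drawn from a density proportional to f. Here the proposal matches f exactly, so the weighted estimator has essentially zero variance; in realistic problems the variance is merely reduced. The integrand and sample size are illustrative.

    import numpy as np

    # Monte Carlo integration of f(x) = exp(-10 x) on [0, 1], plain vs.
    # importance sampling; exact value is (1 - e^-10) / 10.
    rng = np.random.default_rng(0)
    n = 100_000
    f = lambda x: np.exp(-10 * x)

    # Plain sampling: x ~ U(0, 1).
    plain = f(rng.random(n))

    # Importance sampling: draw x from p(x) = 10 e^{-10x} / (1 - e^{-10}),
    # which matches the shape of f, and weight each sample by f(x) / p(x).
    u = rng.random(n)
    x = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0   # inverse-CDF draw
    weighted = f(x) / (10.0 * np.exp(-10.0 * x) / (1.0 - np.exp(-10.0)))

    exact = (1.0 - np.exp(-10.0)) / 10.0
    for name, s in (("plain", plain), ("importance", weighted)):
        print(f"{name:10s} estimate={s.mean():.6f} (exact {exact:.6f}) "
              f"std-error={s.std(ddof=1) / np.sqrt(n):.2e}")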

  14. Applications of Monte Carlo technique in the detection of explosives, narcotics and fissile material using neutron sources

    International Nuclear Information System (INIS)

    Sinha, Amar; Kashyap, Yogesh; Roy, Tushar; Agrawal, Ashish; Sarkar, P.S.; Shukla, Mayank

    2009-01-01

    The problem of illicit trafficking of explosives, narcotics or fissile materials represents a real challenge to civil security. Neutron based detection systems are being actively explored worldwide as a confirmatory tool for the detection of explosives either hidden inside a vehicle or a cargo container or buried in soil. The development of such a system and its experimental testing is a tedious process, and each experimental condition needs to be simulated theoretically. Monte Carlo based methods are used to find an optimized design for such detection systems; in order to design them, the source and detector system must be optimized for each specific application. The present paper deals with such optimization studies using the Monte Carlo technique for a tagged-neutron based system for the detection of explosives and narcotics hidden in cargo, and for landmine detection using backscattered neutrons. We also discuss some simulation studies on the detection of fissile material and on photo-neutron source design for cargo scanning applications. (author)

  15. Monte Carlo calculations of energy and angular distributions of transmitted and backscattered neutrons of 15 MeV incident energy

    International Nuclear Information System (INIS)

    Gaber, M.; Faied, A.

    1994-01-01

    The Monte Carlo technique was used to generate both energy and angular distributions of transmitted and backscattered neutrons incident on infinite graphite slabs of thicknesses ranging from 1 to 90 cm. Point isotropic and parallel beams of 15 MeV neutrons were used. A computer program was developed to simulate collisions by fast neutrons. (author)

  16. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.

  17. Light reflection models for computer graphics.

    Science.gov (United States)

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.

  18. An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC

    Directory of Open Access Journals (Sweden)

    Sonobe Tomohiro

    2016-03-01

    This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs are those whose size exceeds the physical memory. In such an environment, memory management is a difficult task for simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
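
    A minimal in-memory sketch of the underlying Monte Carlo PageRank estimator (without the paper's subgraph partitioning and lazy-walk machinery, which are its actual contributions) might look as follows in Python. The graph, damping factor and walk counts are illustrative, and dangling nodes simply restart the walk, a common simplification.

    import random
    from collections import Counter

    # Monte Carlo PageRank: start several walks at every node; each walk
    # continues with probability d (the damping factor) to a random
    # out-neighbour, and visit frequencies along the walks approximate
    # the PageRank vector.
    graph = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # toy adjacency lists
    d, walks_per_node = 0.85, 2000
    visits = Counter()

    for start in graph:
        for _ in range(walks_per_node):
            node = start
            visits[node] += 1                # the start counts as a visit
            while random.random() < d:       # continue with probability d
                out = graph[node]
                node = random.choice(out) if out else start
                visits[node] += 1

    total = sum(visits.values())
    for node in sorted(graph):
        print(f"node {node}: PageRank ~ {visits[node] / total:.3f}")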

  19. Numerical Computational Technique for Scattering from Underwater Objects

    OpenAIRE

    T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar

    2013-01-01

    This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scatter has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...

  20. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  1. Effective use of Monte Carlo methods for simulating photon transport with special reference to slab penetration problems in X-ray diagnostics

    International Nuclear Information System (INIS)

    Carlsson, G.A.

    1981-01-01

    An analysis of Monte Carlo methods has been made in connection with a particular problem concerning the transport of low energy photons (30-140 keV) through layers of water with thicknesses between 5 and 20 cm. While not claiming to be a complete exposition of available Monte Carlo techniques, the methodological analyses are not restricted to this particular problem. The report describes in a general manner a number of methods which can be used to obtain results of greater precision in a fixed computing time. (Auth.)

  2. Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.

    Science.gov (United States)

    Kim, Jihan; Smit, Berend

    2012-07-10

    Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations, with CO2 gas molecules inside host zeolite structures used as a test system. The reciprocal space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time in the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize a two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves to low-energy regions within the zeolite materials to accelerate convergence. Finally, we make use of graphics processing unit (GPU) hardware to conduct multiple MC simulations in parallel by judiciously mapping the GPU threads to the available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
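
    The energy-grid trick mentioned above can be sketched as follows: the (expensive) gas-host potential is tabulated once on a regular grid over the unit cell, and Monte Carlo trial moves then use cheap trilinear interpolation instead of a full framework-atom sum. The smooth placeholder potential and grid size below are assumptions for illustration; the paper's Ewald bookkeeping and GPU mapping are not reproduced.

    import numpy as np

    def host_potential(p):
        # Hypothetical smooth stand-in for the gas-host interaction energy.
        return (np.sin(2 * np.pi * p[..., 0]) * np.cos(2 * np.pi * p[..., 1])
                + p[..., 2] ** 2)

    n = 32  # grid points per axis (fractional coordinates in [0, 1])
    axes = [np.linspace(0.0, 1.0, n)] * 3
    grid = host_potential(np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1))

    def interpolate(p):
        # Trilinear interpolation of the tabulated potential at point p.
        f = np.clip(p, 0.0, 1.0) * (n - 1)
        i = np.minimum(f.astype(int), n - 2)
        t = f - i
        v = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((t[0] if dx else 1 - t[0])
                         * (t[1] if dy else 1 - t[1])
                         * (t[2] if dz else 1 - t[2]))
                    v += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
        return v

    p = np.array([0.37, 0.62, 0.11])
    print(interpolate(p), host_potential(p))  # interpolated vs direct value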

  3. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

    A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point source-point detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)

  4. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M; Felici, R [Electronic Data System, Rome (Italy); Surridge, M [University of Southampton (United Kingdom). Parallel Application Centre

    1995-12-01

    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of: a graphics workstation, a linear accelerator, water, PMMA and anthropomorphic phantoms, for validation purposes; ionometric, film and thermo-luminescent techniques, for dosimetry; and a treatment planning system, for comparison. Benchmarking results suggest that short computing times can be obtained with the parallel version of EGS4 that was developed. Parallelism was obtained by assigning simulation incident photons to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, with good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
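
    The parallelization strategy described (assigning incident photons to separate processors, each with its own random stream) can be sketched with today's tools as follows. numpy's SeedSequence.spawn is used as a modern stand-in for the parallel random number generator the authors had to develop, and the "transport" per photon is a toy placeholder.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def simulate_batch(seed, n_photons):
        # Toy "transport" of one photon batch: score a random energy
        # deposit per photon using this worker's independent stream.
        rng = np.random.default_rng(seed)
        return float(np.sum(rng.exponential(scale=1.0, size=n_photons)))

    if __name__ == "__main__":
        n_workers, photons_per_worker = 4, 250_000
        # Independent, non-overlapping child streams for each worker.
        seeds = np.random.SeedSequence(2024).spawn(n_workers)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            deposits = list(pool.map(simulate_batch, seeds,
                                     [photons_per_worker] * n_workers))
        print("total energy deposited:", sum(deposits))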

  5. Development of computer-aided auto-ranging technique for a computed radiography system

    International Nuclear Information System (INIS)

    Ishida, M.; Shimura, K.; Nakajima, N.; Kato, H.

    1988-01-01

    For a computed radiography system, the authors developed a computer-aided auto-ranging technique in which the clinically useful image data are automatically mapped to the available display range. The preread image data are inspected to determine the location of collimation. A histogram of the pixels inside the collimation is evaluated for characteristic values such as maxima and minima, and the optimal density and contrast are then derived for the display image. The effect of the auto-ranging technique was investigated at several hospitals in Japan. The average rate of films lost due to undesirable density or contrast was about 0.5%.

  6. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled with appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
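
    For readers unfamiliar with the risk measure, the following hedged Python sketch estimates the expected cost and the Conditional Value-at-Risk of a strategy from Monte Carlo cost samples; the lognormal cost distribution and the 95% level are illustrative stand-ins for the simulated execution costs of the paper.

    import numpy as np

    # Estimate expected cost and Conditional Value-at-Risk (CVaR) from
    # Monte Carlo samples of a strategy's execution cost.
    rng = np.random.default_rng(7)
    costs = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

    alpha = 0.95
    var = np.quantile(costs, alpha)        # Value-at-Risk at level alpha
    cvar = costs[costs >= var].mean()      # mean cost in the worst 5% tail
    print(f"E[cost]={costs.mean():.3f}  VaR95={var:.3f}  CVaR95={cvar:.3f}")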

  7. Practical techniques for pediatric computed tomography

    International Nuclear Information System (INIS)

    Fitz, C.R.; Harwood-Nash, D.C.; Kirks, D.R.; Kaufman, R.A.; Berger, P.E.; Kuhn, J.P.; Siegel, M.J.

    1983-01-01

    Dr. Donald Kirks has assembled this section on Practical Techniques for Pediatric Computed Tomography. The material is based on a presentation in the Special Interest session at the 25th Annual Meeting of the Society for Pediatric Radiology in New Orleans, Louisiana, USA in 1982. Meticulous attention to detail and technique is required to ensure an optimal CT examination. CT techniques specifically applicable to infants and children have not been disseminated in the radiology literature, and in this respect it may rightly be observed that ''the child is not a small adult''. What follows is a ''cookbook'' prepared by seven participants; it is printed in Pediatric Radiology, in outline form, as a statement of individual preferences for pediatric CT techniques. This outline gives a concise explanation of techniques and permits prompt dissemination of information. (orig.)

  8. Computer Assisted Audit Techniques

    Directory of Open Access Journals (Sweden)

    Eugenia Iancu

    2007-01-01

    From the modern point of view, audit especially takes into account the information systems, representing mainly the examination performed by a professional as regards the manner of developing an activity by means of comparing it to the quality criteria specific to this activity. Having as reference point this very general definition of auditing, it must be emphasized that the best known segment of auditing is the financial audit, which had an evolution parallel to that of accountancy. The present day phase of development of the financial audit has as its main trait the internationalization of the accountant professional. Worldwide there are multinational companies that offer services in the financial auditing, taxing and consultancy domain. The auditors, natural persons and audit companies, take part in the works of the national and international authorities for setting out norms in the accountancy and auditing domain. The computer assisted audit techniques can be classified in several manners according to the approaches used by the auditor. The most well-known techniques are comprised in the following categories: test data techniques, integrated test, parallel simulation, revising the program logics, programs developed upon request, generalized audit software, utility programs and expert systems.

  9. A computer simulation model to compute the radiation transfer of mountainous regions

    Science.gov (United States)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. Using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of the adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.

  10. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
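
    The core idea, replacing an expensive forward model inside a Metropolis sampler with a fast trained surrogate whose modeling error is folded into the likelihood, can be sketched as follows. A polynomial fit stands in for the trained neural network, and the one-parameter forward function, noise levels and proposal width are illustrative assumptions; the paper's full-waveform modeling and traveltime picking are not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_forward(m):
        # Stand-in for the costly physical forward model; smooth and
        # monotone so the toy posterior stays unimodal.
        return m + np.tanh(m)

    # "Train" the fast surrogate on a few expensive runs; a polynomial fit
    # stands in for the trained neural network of the paper.
    m_train = np.linspace(-3.0, 3.0, 15)
    coeff = np.polyfit(m_train, expensive_forward(m_train), deg=5)
    fast_forward = lambda m: np.polyval(coeff, m)

    # Quantify the surrogate's modeling error and fold it into the noise.
    model_err_var = np.var(expensive_forward(m_train) - fast_forward(m_train))
    sigma2 = 0.05 ** 2 + model_err_var

    d_obs = expensive_forward(1.3) + 0.05 * rng.standard_normal()

    def log_like(m):
        return -0.5 * (d_obs - fast_forward(m)) ** 2 / sigma2

    # Metropolis sampling that only ever calls the cheap surrogate.
    m, ll, chain = 0.0, log_like(0.0), []
    for _ in range(20_000):
        m_new = m + 0.3 * rng.standard_normal()
        ll_new = log_like(m_new)
        if np.log(rng.random()) < ll_new - ll:
            m, ll = m_new, ll_new
        chain.append(m)
    print("posterior mean:", np.mean(chain[2_000:]))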

  11. 1984 CERN school of computing

    International Nuclear Information System (INIS)

    1985-01-01

    The eighth CERN School of Computing covered subjects mainly related to computing for elementary-particle physics. These proceedings contain written versions of most of the lectures delivered at the School. Notes on the following topics are included: trigger and data-acquisition plans for the LEP experiments; unfolding methods in high-energy physics experiments; Monte Carlo techniques; relational data bases; data networks and open systems; the Newcastle connection; portable operating systems; expert systems; microprocessors - from basic chips to complete systems; algorithms for parallel computers; trends in supercomputers and computational physics; supercomputing and related national projects in Japan; application of VLSI in high-energy physics, and single-user systems. See hints under the relevant topics. (orig./HSI)

  12. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  13. Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.

    Science.gov (United States)

    Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo

    2015-11-01

    The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique than for the computed one, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The average runtimes of the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when compared with the conventional technique.

  14. Monte Carlo Calculated Effective Dose to Teenage Girls from Computed Tomography Examinations

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    2000-01-01

    Effective doses from CT to paediatric patients are not common in the literature. This article reports some effective doses to teenage girls from CT examinations. The voxel computational model ADELAIDE, representative of a 14-year-old girl, was scaled in size by ±5% to also represent 11-12-year-old and 16-year-old girls. The EGS4 Monte Carlo code was used to calculate the effective dose from chest, abdomen and whole torso CT examinations to the three versions of ADELAIDE using a 120 kV spectrum. For the whole torso CT examination, in order of increasing model size, the effective doses were 9.0, 8.2 and 7.8 mSv per 100 mA.s. Data are presented that allow the estimation of effective dose from CT examinations of the torso for girls between the ages of 11 and 16. (author)

  15. Monte Carlo estimation of the absorbed dose in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Woo; Youn, Han Bean; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    The purpose of this study is to devise an algorithm calculating absorbed dose distributions of patients based on Monte Carlo (MC) methods, including the dose estimations due to primary and secondary (scattered) x-ray photons. Assessment of patient dose in computed tomography (CT) at the population level has become a subject of public attention and concern, and ultimate CT quality assurance and dose optimization have the goal of reducing radiation-induced cancer risks in the examined population. However, the conventional CT dose index (CTDI) concept is not a surrogate of risk; it has rather been designed to measure an average central dose. In addition, the CTDI and the dose-length product have shown shortcomings for helical CT with a wider beam collimation. Simple algorithms to estimate a patient-specific CT dose based on the MCNP output data have been introduced. For numerical chest and head phantoms, the spatial dose distributions were calculated, and the results were reasonable. The estimated dose distribution map can be readily converted into the effective dose. Important items for further study include the validation of the models against experimental measurements and the acceleration of the algorithms.

  16. Advanced computers and Monte Carlo

    International Nuclear Information System (INIS)

    Jordan, T.L.

    1979-01-01

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  17. Application of computer technique in SMCAMS

    International Nuclear Information System (INIS)

    Lu Deming

    2001-01-01

    A series of applications of computer techniques in SMCAMS physics design and magnetic field measurement is described, including digital calculation of the electric and magnetic fields, beam dynamics, calculation of beam injection and extraction, and mapping and shaping of the magnetic field

  18. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described

  19. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures; CARMEN: Un sistema de planificacion Monte Carlo basado en programacion lineal a partir de aberturas directas

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-07-01

    The use of Monte Carlo (MC) has shown an improvement in the accuracy of dose calculation compared to other analytical algorithms installed on commercial treatment planning systems, especially in the case of non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full Monte Carlo simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimates and the required computation times. (Author)

  20. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-01-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer

  1. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    Directory of Open Access Journals (Sweden)

    Wyszkowska Patrycja

    2017-12-01

    The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (or the Gaussian formula) are commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law in the case of the non-linear functions used in basic geodetic computations. The paper presents errors which result from neglecting the higher-order expressions and determines the range of such simplification. The basis of that analysis is the comparison of the results obtained by the law of propagation of variance and the probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from the Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear. The only condition is the accuracy of observations, which cannot be too low. Generally, this is not a problem with present geodetic instruments.

  2. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    Science.gov (United States)

    Wyszkowska, Patrycja

    2017-12-01

    The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (or the Gaussian formula) are commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law in the case of the non-linear functions used in basic geodetic computations. The paper presents errors which result from neglecting the higher-order expressions and determines the range of such simplification. The basis of that analysis is the comparison of the results obtained by the law of propagation of variance and the probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from the Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear. The only condition is the accuracy of observations, which cannot be too low. Generally, this is not a problem with present geodetic instruments.
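
    The comparison at the heart of these two records can be reproduced in miniature: for the distance computed from two measured coordinate pairs, first-order propagation gives sigma_s = sigma*sqrt(2), which a Monte Carlo simulation should confirm whenever the coordinate accuracy is adequate. Coordinates and standard deviations below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    x1, y1, x2, y2 = 100.0, 200.0, 160.0, 280.0   # adjusted coordinates [m]
    sigma = 0.010                                  # coordinate std dev [m]

    dx, dy = x2 - x1, y2 - y1
    s = np.hypot(dx, dy)

    # First-order (Gaussian) propagation: the four partials are +/-dx/s and
    # +/-dy/s, so their squares sum to 2 and sigma_s = sigma * sqrt(2).
    sigma_s_lin = sigma * np.sqrt(2.0)

    # Monte Carlo: perturb all four coordinates and recompute the distance.
    n = 1_000_000
    sim = np.hypot(dx + sigma * (rng.standard_normal(n) - rng.standard_normal(n)),
                   dy + sigma * (rng.standard_normal(n) - rng.standard_normal(n)))
    print(f"linear propagation: {sigma_s_lin * 1e3:.4f} mm")
    print(f"Monte Carlo       : {sim.std(ddof=1) * 1e3:.4f} mm")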

  3. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

    This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.

  4. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of the neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The computation time necessary for Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)

  5. Testing random number generators for Monte Carlo applications

    International Nuclear Information System (INIS)

    Sim, L.H.

    1992-01-01

    Central to any system for modelling radiation transport phenomena using Monte Carlo techniques is the method by which pseudo random numbers are generated. This method is commonly referred to as the Random Number Generator (RNG). It is usually a computer implemented mathematical algorithm which produces a series of numbers uniformly distributed on the interval [0,1]. If this series satisfies certain statistical tests for randomness, then for practical purposes the pseudo random numbers in the series can be considered to be random. Tests of this nature are important not only for new RNGs but also to test the implementation of known RNG algorithms in different computer environments. Six RNGs have been tested using six statistical tests and one visual test. The statistical tests are the moments, frequency (digit and number), serial, gap, and poker tests. The visual test is a simple two dimensional ordered pair display. In addition, the RNGs have been tested in a specific Monte Carlo application. This type of test is often overlooked; however, it is important that in addition to satisfactory performance in statistical tests, the RNG be able to perform effectively in the applications of interest. The RNGs tested here are based on a variety of algorithms, including multiplicative and linear congruential, lagged Fibonacci, and combination arithmetic and lagged Fibonacci. The effect of the Bays-Durham shuffling algorithm on the output of a known bad RNG has also been investigated. 18 refs., 11 tabs., 4 figs.
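
    As a flavour of the frequency test mentioned above, the following sketch bins a sample from a generator under test into equal sub-intervals of [0, 1) and applies a chi-square test against the uniform expectation; the generator, sample size and bin count are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(123)   # generator under test
    n, bins = 1_000_000, 100

    # Frequency test: bin the sample into equal sub-intervals of [0, 1)
    # and test the counts against the uniform expectation.
    counts, _ = np.histogram(rng.random(n), bins=bins, range=(0.0, 1.0))
    chi2, p = stats.chisquare(counts)
    print(f"chi2 = {chi2:.1f}, p-value = {p:.3f}")
    # Over repeated runs a healthy generator yields p-values that are
    # themselves uniform; consistently tiny p-values indicate failure.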

  6. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural (analog) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to determine certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic collisions and for multigroup problems with anisotropic collisions. The results show that for one-group, homogeneous geometry transport problems the method is nearly optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
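
    A minimal sketch of the two weight games named in this abstract: a particle whose statistical weight falls below a window plays Russian roulette, and one above the window is split, both in a way that preserves the expected total weight. The window bounds and survival weight are illustrative conventions, not Tripoli's.

    import random

    W_LOW, W_HIGH, W_SURVIVE = 0.25, 4.0, 1.0   # illustrative window

    def weight_game(weight):
        # Return the list of particle weights that continue the history;
        # both games preserve the expected total weight (unbiasedness).
        if weight < W_LOW:                       # Russian roulette
            if random.random() < weight / W_SURVIVE:
                return [W_SURVIVE]               # survivor, boosted weight
            return []                            # killed
        if weight > W_HIGH:                      # splitting
            n = int(weight // W_SURVIVE)
            return [weight / n] * n              # n copies share the weight
        return [weight]                          # inside the window

    for w in (0.1, 1.0, 7.3):
        print(w, "->", weight_game(w))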

  7. Atomic-level computer simulation

    International Nuclear Information System (INIS)

    Adams, J.B.; Rockett, Angus; Kieffer, John; Xu Wei; Nomura, Miki; Kilian, K.A.; Richards, D.F.; Ramprasad, R.

    1994-01-01

    This paper provides a broad overview of the methods of atomic-level computer simulation. It discusses methods of modelling atomic bonding, and computer simulation methods such as energy minimization, molecular dynamics, Monte Carlo, and lattice Monte Carlo. ((orig.))

  8. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    International Nuclear Information System (INIS)

    Zhang, Guoqing

    2011-01-01

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. The simulation of a large number of CASTORs in an interim storage facility can take weeks or even months. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved markedly. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it has been simulated in the facility to calculate the calibration factor for the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording the neutron induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  9. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guoqing

    2011-12-22

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. The simulation of a large number of CASTORs in an interim storage facility can take weeks or even months. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved markedly. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it has been simulated in the facility to calculate the calibration factor for the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording the neutron induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  10. Application of computer technique in the reconstruction of Chinese ancient buildings

    Science.gov (United States)

    Li, Deren; Yang, Jie; Zhu, Yixuan

    2003-01-01

    This paper offers an introduction to the computer-based assembly and simulation of ancient buildings. Pioneering research was carried out by surveying and mapping investigators who described ancient Chinese timber buildings with 3D frame graphs produced on computers. Users can understand the structural layers and the assembly process of these buildings if the frame graphs are processed further with computer simulation techniques. Such techniques display the raw data on the screen of a computer and manage them interactively by combining technologies from computer graphics and image processing, multimedia technology, artificial intelligence, highly parallel real-time computation, and human behavior science. This paper presents the implementation procedure of the simulation for large-sized wooden buildings, as well as the 3D dynamic assembly of these buildings under the 3DS MAX environment. The results of the computer simulation are also shown in the paper.

  11. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost

  12. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Paixao, L.; Oliveira, B. B.; Nogueira, M. do S. [Centro de Desenvolvimento da Tecnologia Nuclear, Post-graduation in Science and Technology of Radiations, Minerals and Materials, Pte. Antonio Carlos 6.627, Pampulha, 31270-901 Belo Horizonte (Brazil); Viloria, C. [UFMG, Departamento de Engenharia Nuclear, Post-graduation in Nuclear Sciences and Techniques, Pte. Antonio Carlos 6.627, Pampulha, 31270-901 Belo Horizonte (Brazil); Alves de O, M. [UFMG, Department of Anatomy and Imaging, Prof. Alfredo Balena 190, 30130-100 Belo Horizonte (Brazil); Araujo T, M. H., E-mail: lpr@cdtn.br [Dr Maria Helena Araujo Teixeira Clinic, Guajajaras 40, 30180-100 Belo Horizonte (Brazil)

    2014-08-15

    It is widely accepted that the mean glandular dose (D{sub G}) to the glandular tissue is the most useful quantity for characterizing breast cancer risk. Because D{sub G} is difficult to measure directly in the breast, it is estimated with conversion factors that relate the incident air kerma (K{sub i}) to this dose. Generally, the conversion factors vary with the x-ray spectrum half-value layer and the breast composition and thickness. Several authors have calculated such factors through computer simulations by the Monte Carlo (MC) method. Many spectral models for D{sub G} computer simulation purposes are available in the diagnostic range. One of the models available generates unfiltered spectra. In this work, the Monte Carlo EGSnrc code package with the C++ class library (egspp) was employed to derive filtered tungsten x-ray spectra used in digital mammography systems. Filtered spectra for rhodium and aluminium filters were obtained for tube potentials between 26 and 32 kV. The half-value layers of the simulated filtered spectra were compared with those obtained experimentally with a solid state detector Unfors model 8202031-H Xi R/F and Mam Detector Platinum and 8201023-C Xi Base unit Platinum Plus w m As in a Hologic Selenia Dimensions system using a Direct Radiography mode. Calculated half-value layer values showed good agreement with those obtained experimentally. These results show that the filtered tungsten anode x-ray spectra and the EGSnrc MC code can be used for D{sub G} determination in mammography. (Author)

  13. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography

    International Nuclear Information System (INIS)

    Paixao, L.; Oliveira, B. B.; Nogueira, M. do S.; Viloria, C.; Alves de O, M.; Araujo T, M. H.

    2014-08-01

    It is widely accepted that the mean glandular dose (D{sub G}) to the glandular tissue is the most useful quantity for characterizing breast cancer risk. Because D{sub G} is difficult to measure directly in the breast, it is estimated with conversion factors that relate the incident air kerma (K{sub i}) to this dose. Generally, the conversion factors vary with the x-ray spectrum half-value layer and the breast composition and thickness. Several authors have calculated such factors through computer simulations by the Monte Carlo (MC) method. Many spectral models for D{sub G} computer simulation purposes are available in the diagnostic range. One of the models available generates unfiltered spectra. In this work, the Monte Carlo EGSnrc code package with the C++ class library (egspp) was employed to derive filtered tungsten x-ray spectra used in digital mammography systems. Filtered spectra for rhodium and aluminium filters were obtained for tube potentials between 26 and 32 kV. The half-value layers of the simulated filtered spectra were compared with those obtained experimentally with a solid state detector Unfors model 8202031-H Xi R/F and Mam Detector Platinum and 8201023-C Xi Base unit Platinum Plus w m As in a Hologic Selenia Dimensions system using a Direct Radiography mode. Calculated half-value layer values showed good agreement with those obtained experimentally. These results show that the filtered tungsten anode x-ray spectra and the EGSnrc MC code can be used for D{sub G} determination in mammography. (Author)

  14. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures

    International Nuclear Information System (INIS)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-01-01

    The use of Monte Carlo (MC) has shown an improvement in the accuracy of dose calculation compared to other analytical algorithms installed on commercial treatment planning systems, especially in the case of non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full Monte Carlo simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimates and the required computation times. (Author)

  15. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, from the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide which estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in the time required to calculate each history. 1 figure, 1 table
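
    The trade-off described above is commonly summarized by the Monte Carlo figure of merit, FOM = 1/(R**2 * T), where R is the relative error and T the run time; a biasing scheme pays off only if it raises the FOM. A minimal sketch with illustrative numbers (not taken from the paper):

        # Figure of merit FOM = 1 / (R^2 * T): higher is better.
        # Illustrative numbers only -- not from Booth's paper.
        def fom(rel_err, time_s):
            return 1.0 / (rel_err**2 * time_s)

        analog = fom(rel_err=0.05, time_s=100.0)   # analog run
        biased = fom(rel_err=0.02, time_s=250.0)   # split/rouletted run: slower histories, lower variance
        print(analog, biased, biased / analog)     # biasing wins if the ratio exceeds 1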

  16. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.

  17. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions for radionuclides has been performed using correct dose assessment models and knowledge of model parameter value uncertainties. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental iodine 131 transported via the pasture-cow-milk-child pathway. Results indicated that when site-specific information is unavailable, the imprecision inherent in the predictions for this pathway is potentially large. (3 graphs, 25 references, 5 tables)
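
    A hedged sketch of the kind of Monte Carlo parameter-uncertainty propagation the authors describe, for a generic multiplicative transfer-factor pathway. The parameter names and lognormal distributions here are illustrative assumptions, not values from the paper:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000  # Monte Carlo trials

        # Illustrative lognormal parameter uncertainties (hypothetical values):
        deposition = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)     # deposition per unit release
        feed_to_milk = rng.lognormal(mean=np.log(0.01), sigma=0.7, size=n)  # d/L transfer factor
        intake = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)         # L/d milk consumption
        dose_factor = 1.0e-7                                                # Sv/Bq, held fixed here

        # Multiplicative pathway model: dose = product of the transfer steps
        dose = deposition * feed_to_milk * intake * dose_factor

        print("median dose:", np.median(dose))
        print("95th percentile / median:", np.quantile(dose, 0.95) / np.median(dose))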

  18. Progress report of a research program in computational physics

    International Nuclear Information System (INIS)

    Guralnik, G.S.

    1990-01-01

    Task D's research is focused on the understanding of elementary particle physics through the techniques of quantum field theory. We make intensive use of computers to aid our research. During the last year we have made significant progress in understanding the weak interactions through the use of Monte Carlo methods as applied to the equations of quenched lattice QCD. We have launched a program to understand full (not quenched) lattice QCD on relatively large lattices using massively parallel computers. Because of our awareness that Monte Carlo methods might not be able to give a good solution to field theories with the computer power likely to be available to us for the foreseeable future we have launched an entirely different numerical approach to study these problems. This ''Source Galerkin'' method is based on an algebraic approach to the field theoretic equations of motion and is (somewhat) related to variational and finite element techniques applied to a source rather than a coordinate space. The results for relatively simple problems are sensationally good. In particular, fermions can be treated in a way which allows them to retain their status as independent dynamical entities in the theory. 8 refs

  19. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    Science.gov (United States)

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations are determined using the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only for a measurement station that is positioned such that the wind direction at the measurement station is leeward of the plant. Using the inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
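
    Inverse transform sampling, as used above to synthesize emission rates for unmeasured wind directions, draws from an empirical cumulative distribution by inverting it at uniform random numbers. A minimal sketch (the emission values below are placeholders, not data from the study):

        import numpy as np

        rng = np.random.default_rng(0)

        # Empirical emission-rate sample for directions with measurements (placeholder values, g/s)
        measured = np.array([0.8, 1.1, 1.3, 1.7, 2.0, 2.6, 3.4, 4.1])

        # Build the empirical CDF, then invert it at uniform deviates
        sorted_vals = np.sort(measured)
        cdf = np.arange(1, len(sorted_vals) + 1) / len(sorted_vals)

        u = rng.uniform(0.0, 1.0, size=10_000)
        synthetic = np.interp(u, cdf, sorted_vals)  # linear interpolation of the inverse CDF

        # The synthetic set reproduces the measured cumulative frequency distribution
        print(np.quantile(synthetic, [0.25, 0.5, 0.75]))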

  20. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.

  1. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  2. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  3. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    Science.gov (United States)

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
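
    A hedged sketch of the history-partitioning pattern described above, using mpi4py and independent per-rank random streams (numpy's SeedSequence spawning stands in for SPRNG; the toy slab "transport" and all parameters are illustrative):

        # Run with e.g.: mpiexec -n 8 python partition_histories.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        total_photons = 1_000_000
        local_n = total_photons // size + (rank < total_photons % size)

        # Independent, uncorrelated streams per rank (stand-in for SPRNG)
        rng = np.random.default_rng(np.random.SeedSequence(12345).spawn(size)[rank])

        # Toy "transport": count photons surviving an exponential attenuation slab
        mu_t, thickness = 0.15, 10.0               # 1/cm, cm -- illustrative values
        path = rng.exponential(1.0 / mu_t, local_n)
        local_tally = np.sum(path > thickness)

        # Combine tallies on rank 0
        total_tally = comm.reduce(local_tally, op=MPI.SUM, root=0)
        if rank == 0:
            print("transmission =", total_tally / total_photons)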

  4. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and scatter them in pitch angle. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Furthermore, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
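
    A hedged sketch of a standard Monte Carlo pitch-angle scattering step of the Boozer-Kuo-Petravic type, representative of the collisional processes mentioned above (the collision frequency, time step and particle count are illustrative, not from this report):

        import numpy as np

        rng = np.random.default_rng(1)

        def pitch_angle_step(lam, nu, dt, rng):
            """One Monte Carlo pitch-angle scattering step (Boozer & Kuo-Petravic form).
            lam: pitch lambda = v_par / v; nu: deflection frequency; dt: time step."""
            sign = rng.choice([-1.0, 1.0], size=lam.shape)
            lam_new = lam * (1.0 - 2.0 * nu * dt) + sign * np.sqrt((1.0 - lam**2) * 2.0 * nu * dt)
            return np.clip(lam_new, -1.0, 1.0)

        # Beam ions starting field-aligned relax toward an isotropic pitch distribution
        lam = np.full(100_000, 0.99)
        for _ in range(2000):
            lam = pitch_angle_step(lam, nu=1.0e3, dt=1.0e-6, rng=rng)
        print("mean pitch:", lam.mean())  # drifts toward 0 as the distribution isotropizes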

  5. Monte Carlo simulation of radiative processes in electron-positron scattering

    International Nuclear Information System (INIS)

    Kleiss, R.H.P.

    1982-01-01

    The Monte Carlo simulation of scattering processes has turned out to be one of the most successful methods of translating theoretical predictions into experimentally meaningful quantities. It is the purpose of this thesis to describe how this approach can be applied to higher-order QED corrections to several fundamental processes. In chapter II a very brief overview of the currently interesting phenomena in e{sup +}e{sup -} scattering is given. It is argued that accurate information on higher-order QED corrections is very important and that the Monte Carlo approach is one of the most flexible and general methods to obtain this information. In chapter III the author describes various techniques which are useful in this context, and makes a few remarks on the numerical aspects of the proposed method. In the following three chapters he applies this to the processes e{sup +}e{sup -} → μ{sup +}μ{sup -}(γ) and e{sup +}e{sup -} → q anti q(γ). In chapter IV he motivates his choice of these processes in view of their experimental and theoretical relevance. The formulae necessary for a computer simulation of all quantities of interest, up to order α{sup 3}, are given. Chapters V and VI describe how this simulation can be performed using the techniques mentioned in chapter III. In chapter VII it is shown how additional dynamical quantities, namely the polarization of the incoming and outgoing particles, can be incorporated in the treatment, and the relevant formulae for the example processes mentioned above are given. Finally, in chapter VIII the author presents some examples of the comparison between theoretical predictions based on Monte Carlo simulations as outlined here, and the results from actual experiments. (Auth.)

  6. Monte Carlo simulation of tomography techniques using the platform Gate

    International Nuclear Information System (INIS)

    Barbouchi, Asma

    2007-01-01

    Simulations play a key role in functional imaging, with applications ranging from scanner design and scatter correction to protocol optimisation. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation. It is based on Geant4 to generate and track particles, and to model geometry and physics processes. Explicit modelling of time includes detector motion, time of flight, and tracer kinetics. Interfaces to voxellised models and image reconstruction packages improve the integration of GATE in the global modelling cycle. In this work, Monte Carlo simulations are used to understand and optimise the gamma camera's performance. We study the effect of the source-to-collimator distance, the diameter of the holes and the thickness of the collimator on the spatial resolution, energy resolution and efficiency of the gamma camera. We also study the reduction of simulation time and implement a model of the left ventricle in GATE. (Author). 7 refs

  7. Proceedings of the conference on frontiers of Quantum Monte Carlo

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1986-01-01

    This journal of conference proceedings includes papers on topics such as: computers and science; Quantum Monte Carlo; condensed matter physics (with papers including the statistical error of Green's Function Monte Carlo, a study of Trotter-like approximations, simulations of the Hubbard model, and stochastic simulation of fermions); chemistry (including papers on quantum simulations of aqueous systems, fourier path integral methods, and a study of electron solvation in polar solvents using path integral calculations); atomic molecular and nuclear physics; high-energy physics, and advanced computer designs

  8. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    Science.gov (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization

  9. Adaptive multilevel splitting for Monte Carlo particle transport

    Directory of Open Access Journals (Sweden)

    Louvin Henri

    2017-01-01

    In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting (AMS) is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies like pulse-height tallies and other spectra. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.
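
    A hedged, self-contained sketch of the core AMS idea (repeatedly kill the worst-scoring track and branch a survivor at the killed track's level), applied to a toy drifted random walk rather than to particle transport; the drift, threshold and population size are illustrative, and this is not the paper's algorithm verbatim:

        import numpy as np

        rng = np.random.default_rng(7)

        def walk(start, n_steps, rng):
            """Downward-drifting Gaussian random walk; returns the full trajectory."""
            steps = rng.normal(-0.1, 1.0, n_steps)
            return start + np.concatenate(([0.0], np.cumsum(steps)))

        N, n_steps, threshold = 100, 200, 8.0
        trajs = [walk(0.0, n_steps, rng) for _ in range(N)]
        scores = np.array([t.max() for t in trajs])

        n_killed = 0
        while scores.min() < threshold and n_killed < 100_000:
            worst = int(scores.argmin())
            level = scores[worst]
            donor = int(rng.choice([i for i in range(N) if i != worst]))
            cross = int(np.argmax(trajs[donor] > level))   # first point above the killed level
            tail = walk(trajs[donor][cross], n_steps - cross, rng)[1:]
            new = np.concatenate((trajs[donor][:cross + 1], tail))
            trajs[worst], scores[worst] = new, new.max()
            n_killed += 1

        p_rare = (1.0 - 1.0 / N) ** n_killed   # AMS estimator of P(max >= threshold)
        print("killed:", n_killed, " P estimate:", p_rare)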

  10. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
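
    A hedged sketch of a basic Monte Carlo direct-lighting estimator: sample points on an area light and average the contribution weighted by the geometry term over the sample pdf. The scene, the Lambertian BRDF and the omission of an occlusion test are deliberate simplifications, not the dissertation's method:

        import numpy as np

        rng = np.random.default_rng(3)

        def direct_lighting(x, n_x, corner, edge_u, edge_v, emitted, albedo, n_samples, rng):
            """Estimate direct radiance at point x (normal n_x) from a rectangular
            area light by uniform area sampling; occlusion testing is omitted."""
            normal = np.cross(edge_u, edge_v)
            area = np.linalg.norm(normal)
            n_l = normal / area
            u, v = rng.uniform(size=(2, n_samples))
            y = corner + u[:, None] * edge_u + v[:, None] * edge_v   # points on the light
            d = y - x
            r2 = np.einsum('ij,ij->i', d, d)
            w = d / np.sqrt(r2)[:, None]                             # unit directions x -> y
            cos_x = np.clip(w @ n_x, 0.0, None)                      # cosine at the shaded point
            cos_l = np.abs(w @ n_l)                                  # two-sided emitter, for simplicity
            # MC estimator: mean of f * L_e * G / pdf, pdf = 1/area, Lambertian f = albedo/pi
            return (albedo / np.pi) * emitted * area * np.mean(cos_x * cos_l / r2)

        radiance = direct_lighting(x=np.zeros(3), n_x=np.array([0.0, 0.0, 1.0]),
                                   corner=np.array([-0.5, -0.5, 2.0]),
                                   edge_u=np.array([1.0, 0.0, 0.0]),
                                   edge_v=np.array([0.0, 1.0, 0.0]),
                                   emitted=10.0, albedo=0.7, n_samples=4096, rng=rng)
        print("estimated direct radiance:", radiance)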

  11. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    The computational intelligence techniques are used in problems which cannot be solved by traditional techniques, when there is insufficient data to develop a model of the problem, or when the data contain errors. Computational intelligence, as Bezdek (Bezdek, 1992) called it, aims at modeling biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  12. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. In practice, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule aimed at establishing a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
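
    A hedged sketch of exponential biasing of an exponentially distributed phenomenon: sample from a biased rate and correct each score with the likelihood-ratio weight; the small parameter sweep illustrates why the choice of the biasing parameter matters (all numbers illustrative, not the paper's rule):

        import numpy as np

        rng = np.random.default_rng(11)

        def biased_estimate(lam, lam_b, t_rare, n, rng):
            """Estimate P(T > t_rare) for T ~ Exp(lam) by sampling T ~ Exp(lam_b)
            and weighting with the likelihood ratio f(t)/g(t)."""
            t = rng.exponential(1.0 / lam_b, n)
            w = (lam / lam_b) * np.exp(-(lam - lam_b) * t)
            score = w * (t > t_rare)
            return score.mean(), score.std() / np.sqrt(n)

        lam, t_rare, n = 1.0, 12.0, 100_000          # exact answer: exp(-12) ~ 6.1e-6
        for lam_b in (1.0, 0.5, 1.0 / t_rare):       # analog, mild bias, near-optimal bias
            est, err = biased_estimate(lam, lam_b, t_rare, n, rng)
            print(f"lam_b={lam_b:.3f}  estimate={est:.3e}  std.err={err:.3e}")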

  13. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
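
    As a minimal illustration of the standard Monte Carlo tools such a book assembles, here is a plain estimator of a European call price under geometric Brownian motion. This is textbook material under illustrative parameters, not code from the book:

        import numpy as np

        rng = np.random.default_rng(5)

        # Illustrative market/contract parameters
        s0, strike, r, sigma, maturity = 100.0, 110.0, 0.03, 0.2, 1.0
        n = 1_000_000

        # Exact GBM terminal values: S_T = S_0 exp((r - sigma^2/2) T + sigma sqrt(T) Z)
        z = rng.standard_normal(n)
        s_t = s0 * np.exp((r - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)

        # Discounted payoff average and its standard error
        payoff = np.exp(-r * maturity) * np.maximum(s_t - strike, 0.0)
        print(f"call price ~ {payoff.mean():.4f} +/- {payoff.std() / np.sqrt(n):.4f}")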

  14. Monte Carlo simulation of continuous-space crystal growth

    International Nuclear Information System (INIS)

    Dodson, B.W.; Taylor, P.A.

    1986-01-01

    We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems
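
    A hedged sketch of the continuous-space (off-lattice) Metropolis move that such simulations build on, for a small 2-D Lennard-Jones system; the parameters are illustrative, and periodic boundaries and the growth events themselves are omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(9)

        def lj_energy(pos, eps=1.0, sig=1.0):
            """Total Lennard-Jones energy of a 2-D configuration (no cutoff, no PBC)."""
            d = pos[:, None, :] - pos[None, :, :]
            r2 = np.einsum('ijk,ijk->ij', d, d)
            iu = np.triu_indices(len(pos), k=1)
            inv6 = (sig**2 / r2[iu])**3
            return np.sum(4.0 * eps * (inv6**2 - inv6))

        def metropolis_sweep(pos, beta, step, rng):
            """One sweep of continuous single-particle displacement moves.
            Recomputing the full O(N^2) energy per move keeps the sketch short."""
            for i in range(len(pos)):
                old = pos[i].copy()
                e_old = lj_energy(pos)
                pos[i] = old + rng.uniform(-step, step, 2)   # trial move in continuous space
                de = lj_energy(pos) - e_old
                if de > 0.0 and rng.random() >= np.exp(-beta * de):
                    pos[i] = old                             # reject: restore old position
            return pos

        pos = rng.uniform(0.0, 4.0, (16, 2))                 # 16 atoms in a 4x4 region
        for _ in range(200):
            pos = metropolis_sweep(pos, beta=2.0, step=0.1, rng=rng)
        print("final energy:", lj_energy(pos))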

  15. Development and application of the automated Monte Carlo biasing procedure in SAS4

    International Nuclear Information System (INIS)

    Tang, J.S.; Broadhead, B.L.

    1993-01-01

    An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. The automated procedure has been used extensively in the investigation of both computational and experimental benchmarks for the NEACRP working group on shielding assessment of transportation packages. The results of these studies indicate that with the automated biasing procedure, Monte Carlo shielding calculations of spent fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost. The systematic biasing approach described in this paper can also be applied to other similar shielding problems

  16. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    Science.gov (United States)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  17. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external {sup 57}Co flood source and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
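
    A hedged sketch of the dual-window (dual-energy-window) scatter correction compared above: the scatter in the photopeak window is estimated as a fixed multiple of the counts in a lower, scatter-dominated window; k ≈ 0.5 is the value commonly quoted for this method, and the pixel data here are synthetic:

        import numpy as np

        rng = np.random.default_rng(13)

        # Synthetic projection images (counts per pixel): photopeak and lower scatter window
        true_primary = rng.poisson(200.0, size=(64, 64)).astype(float)
        scatter_peak = rng.poisson(60.0, size=(64, 64)).astype(float)   # scatter inside photopeak
        photopeak = true_primary + scatter_peak
        lower_window = rng.poisson(120.0, size=(64, 64)).astype(float)  # scatter-dominated window

        k = 0.5  # dual-window scaling factor commonly used with this method
        corrected = np.clip(photopeak - k * lower_window, 0.0, None)

        print("mean counts -- photopeak:", photopeak.mean(),
              " corrected:", corrected.mean(), " true primary:", true_primary.mean())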

  18. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

    This paper summarizes two improvements of a real production code by using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in neutron transport problems, the authors briefly describe the work done in order to obtain a vector code. Vectorization principles are presented and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first one is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say a minimum size for a task. With the second example they propose a method that is very X-MP oriented in order to obtain the best speedup factor on such a computer. In conclusion they prove that Monte Carlo algorithms are very well suited to future vector and parallel computers

  19. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Becks, Karl-Heinz; Perret-Gallix, Denis

    1994-01-01

    New techniques were highlighted by the ''Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics'' in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at France-Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new techniques in computing, in hardware as well as in software. Software Engineering Techniques could in principle satisfy the needs for forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; Expert Systems seem to offer a way of assisting the experimental crew during data-taking

  20. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method; in the test case, the gain in speed was a factor of about 25 at constant precision. Therefore, this method appears to be suitable for treatment planning applications.

  1. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    Science.gov (United States)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
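
    A hedged sketch of the baseline rank-1 machinery the abstract builds on: the determinant ratio for a single-row replacement and the Sherman-Morrison inverse update (the delayed variant batches K such accepted rows and applies them en bloc with a matrix-matrix product; dimensions here are illustrative):

        import numpy as np

        rng = np.random.default_rng(17)

        n = 64
        a = rng.standard_normal((n, n))      # stand-in for a Slater matrix
        a_inv = np.linalg.inv(a)

        def row_replace_ratio(a_inv, new_row, k):
            """det(A') / det(A) when row k of A is replaced by new_row."""
            return new_row @ a_inv[:, k]

        def sherman_morrison_row_update(a_inv, old_row, new_row, k):
            """Update A^{-1} after replacing row k: a rank-1 correction, O(n^2)."""
            r = row_replace_ratio(a_inv, new_row, k)
            u = new_row - old_row
            return a_inv - np.outer(a_inv[:, k], u @ a_inv) / r

        k = 3
        new_row = rng.standard_normal(n)
        ratio = row_replace_ratio(a_inv, new_row, k)   # used in the Metropolis accept/reject
        a_new = a.copy(); a_new[k] = new_row
        print("ratio check:", np.isclose(ratio, np.linalg.det(a_new) / np.linalg.det(a)))

        a_inv_new = sherman_morrison_row_update(a_inv, a[k], new_row, k)
        print("inverse check:", np.allclose(a_inv_new, np.linalg.inv(a_new)))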

  2. Monte Carlo analysis of a control technique for a tunable white lighting system

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen

    2017-01-01

    A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show...

  3. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)

    2017-04-15

    General-purpose radiation transport Monte Carlo codes have been used for estimating the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.) [German: For several decades, general-purpose Monte Carlo codes for simulating radiation transport have been used to evaluate the absorbed dose distribution in percutaneous radiotherapy with photons and electrons. The results obtained in this way are usually more accurate than those achievable with the non-stochastic methods of conventional treatment planning systems. Because of the associated effort and the long duration of the computations, Monte Carlo simulations of dose distributions in conventional radiotherapy were in the past used essentially in research. In the effort to ... Monte Carlo]

  4. Direct aperture optimization for IMRT using Monte Carlo generated beamlets

    International Nuclear Information System (INIS)

    Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl

    2006-01-01

    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency of treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5x5.0 mm{sup 2} beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is ∼33% compared to fluence-based optimization methods

  5. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2013-12-01

    The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well-behaved problems, that is the amount of computational work to solve the problem, is O(TOL{sup -3}). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and low-regularity observables is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift-singularity type of SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization with smooth problems. Moreover, the newly developed algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL{sup -3}) to O(TOL{sup -2}(log(TOL)){sup 2}). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL{sup -2}(log(TOL)){sup 2}). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL{sup -2}(log(TOL)){sup 2}) is reached for the multidimensional case and verified numerically.
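
    A hedged sketch of the plain (non-adaptive) MLMC forward Euler estimator that the thesis extends: coupled coarse/fine Euler paths per level, combined through the telescoping sum E[P{sub L}] = E[P{sub 0}] + sum over l of E[P{sub l} - P{sub l-1}]. The GBM test problem is illustrative, and the fixed samples per level stand in for the variance-based allocation of the real method:

        import numpy as np

        rng = np.random.default_rng(21)

        def euler_pair(level, n_paths, t=1.0, s0=1.0, mu=0.05, sig=0.2, m=2, rng=rng):
            """Coupled fine/coarse Euler paths of GBM: fine uses m**level steps,
            coarse uses m**(level-1), driven by the same Brownian increments."""
            nf = m**level
            dt = t / nf
            dw = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
            sf = np.full(n_paths, s0)
            for i in range(nf):                                # fine path
                sf = sf + mu * sf * dt + sig * sf * dw[:, i]
            if level == 0:
                return sf, None
            sc = np.full(n_paths, s0)
            dwc = dw.reshape(n_paths, nf // m, m).sum(axis=2)  # summed increments
            for i in range(nf // m):                           # coarse path, same noise
                sc = sc + mu * sc * (m * dt) + sig * sc * dwc[:, i]
            return sf, sc

        def payoff(s, strike=1.0):
            return np.maximum(s - strike, 0.0)

        levels, n_per_level = 5, 200_000
        estimate = 0.0
        for level in range(levels + 1):
            sf, sc = euler_pair(level, n_per_level)
            corr = payoff(sf) if level == 0 else payoff(sf) - payoff(sc)
            estimate += corr.mean()                            # telescoping sum over levels
        print("MLMC estimate of E[(S_T - K)^+]:", estimate)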

  6. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) simulation techniques. In this article, an effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC simulation. In addition, MC simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
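
    A hedged sketch of the analytical-versus-simulation comparison described: steady-state availability of a single two-state (up/down) unit from the Markov balance equations, checked against a Monte Carlo simulation of alternating failure/repair cycles; series availability is the product of the unit availabilities. The rates are illustrative, and the paper's three-state deterioration model is simplified to two states here:

        import numpy as np

        rng = np.random.default_rng(23)

        lam, mu = 0.01, 0.1          # failure and repair rates (per hour), illustrative

        # Markov model: steady-state availability of one unit
        a_markov = mu / (lam + mu)

        # Monte Carlo: alternate exponential up/down durations and accumulate uptime
        def mc_availability(lam, mu, n_cycles, rng):
            up = rng.exponential(1.0 / lam, n_cycles)
            down = rng.exponential(1.0 / mu, n_cycles)
            return up.sum() / (up.sum() + down.sum())

        a_mc = mc_availability(lam, mu, 1_000_000, rng)

        # Two units in series: the system is up only when both units are up
        print("unit availability  Markov:", a_markov, " MC:", a_mc)
        print("series system availability:", a_markov**2)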

  7. Practical Application of Monte Carlo Code in RTP

    International Nuclear Information System (INIS)

    Mohamad Hairie Rabir; Julia Abdul Karim; Muhammad Rawi Mohamed Zin; Na'im Syauqi Hamzah; Mark Dennis Anak Usang; Abi Muttaqin Jalal Bayar; Muhammad Khairul Ariff Mustafa

    2015-01-01

    Monte Carlo neutron transport codes are widely used in various reactor physics applications in RTP and other related nuclear and radiation research in Nuklear Malaysia. The main advantage of the method is the capability to model geometry and interaction physics without major approximations. The disadvantage is that the modelling of complicated systems is very computing-intensive, which restricts the applications to some extent. The importance of Monte Carlo calculations is likely to increase in the future, along with developments in computing capacity and parallel calculation. This paper presents several calculation activities, their achievements and challenges in using the MCNP code for neutronics analysis, nuclide inventory and source term calculation, and shielding and dose evaluation. (author)

  8. Overview and applications of the Monte Carlo radiation transport kit at LLNL

    International Nuclear Information System (INIS)

    Sale, K. E.

    1999-01-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions

  9. On the representation of electron multiple elastic-scattering distributions for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kawrakow, I.; Bielajew, A.F.

    1998-01-01

    A new representation of elastic electron-nucleus (Coulomb) multiple-scattering distributions is developed. Using the screened Rutherford cross section with the Moliere screening parameter as an example, a simple analytic angular transformation of the Goudsmit-Saunderson multiple-scattering distribution accounts for most of the structure of the angular distribution leaving a residual 3-parameter (path-length, transformed angle and screening parameter) function that is reasonably slowly varying and suitable for rapid, accurate interpolation in a computer-intensive algorithm. The residual function is calculated numerically for a wide range of Moliere screening parameters and path-lengths suitable for use in a general-purpose condensed-history Monte Carlo code. Additionally, techniques are developed that allow the distributions to be scaled to account for energy loss. This new representation allows ''on-the-fly'' sampling of Goudsmit-Saunderson angular distributions in a screened Rutherford approximation suitable for class II condensed-history Monte Carlo codes. (orig.)

  10. Development of a external exposure computational model for studying of input dose in skin for radiographs of thorax and vertebral column

    International Nuclear Information System (INIS)

    Muniz, Bianca C.; Menezes, Claudio J.M.; Vieira, Jose W.

    2014-01-01

    Dosimetric measurements cannot always be performed directly in the human body. Therefore, these assessments can be performed using anthropomorphic models (phantoms) in computational exposure models (MCE) based on Monte Carlo techniques for virtual simulations. These processing techniques, coupled with ever more powerful and affordable computers, make the Monte Carlo method one of the most widely used tools in the radiation transport area. In this work, the Monte Carlo EGS4 program was used to develop a computational model of external exposure to study the entrance skin dose for chest and spinal column radiography, aiming to optimize these practices by reducing doses to patients, the professionals involved and the general public. The results obtained experimentally with the Radcal electrometer, model 9015, associated with the ionization chamber for radiology model 10X5-6, showed that the proposed computational model can be used in quality assurance programs in radiodiagnostics, evaluating the entrance skin dose when varying parameters of the radiation beam such as kilovoltage peak (kVp), current-time product (mAs), total filtration and source-surface distance (DFS), optimizing radiodiagnostic practices and meeting current regulations

  11. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  12. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important and is currently the focus of many research efforts. Besides its scientific interest to the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally, some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSPs) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two-pole caustic and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
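
    A hedged sketch of the Markov Chain Monte Carlo machinery used for such parameter-space exploration: a random-walk Metropolis sampler over model parameters. The Gaussian toy likelihood stands in for the simulation-versus-detection comparison; everything here is illustrative, not the survey's actual model:

        import numpy as np

        rng = np.random.default_rng(29)

        def log_likelihood(theta):
            """Toy stand-in for the population-synthesis fit: a correlated 2-D Gaussian."""
            mu = np.array([1.0, -0.5])
            cov_inv = np.linalg.inv(np.array([[0.5, 0.2], [0.2, 0.3]]))
            d = theta - mu
            return -0.5 * d @ cov_inv @ d

        def metropolis(log_like, theta0, step, n_samples, rng):
            chain = np.empty((n_samples, len(theta0)))
            theta = np.asarray(theta0, float)
            ll = log_like(theta)
            for i in range(n_samples):
                prop = theta + rng.normal(0.0, step, size=theta.shape)  # random-walk proposal
                ll_prop = log_like(prop)
                if np.log(rng.random()) < ll_prop - ll:                 # Metropolis acceptance
                    theta, ll = prop, ll_prop
                chain[i] = theta
            return chain

        chain = metropolis(log_likelihood, theta0=[0.0, 0.0], step=0.5, n_samples=50_000, rng=rng)
        burn = chain[5_000:]                          # discard burn-in
        print("posterior means:", burn.mean(axis=0))  # should approach (1.0, -0.5)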

  14. Utilization of a photon transport code to investigate radiation therapy treatment planning quantities and techniques

    International Nuclear Information System (INIS)

    Palta, J.R.

    1981-01-01

    A versatile computer program, MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy group representation of the transport equation provides a concise approach to utilizing Monte Carlo numerical techniques for multiple radiation therapy treatment planning problems. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated and the correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy group transport theory predict depth dose distributions that are in good agreement with available experimental data. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources

  15. Angular biasing in implicit Monte-Carlo

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1994-01-01

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low-weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even lower-weight photons at the polar regions of the capsule, where small mass zones are most sensitive to statistical noise
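
    A hedged sketch of the weight-corrected angular biasing idea: emit more photons into the cone subtending the capsule by sampling a biased angular pdf, multiplying each photon's weight by the ratio of the analog to the biased pdf so that tallies stay unbiased. The geometry and the particular biased pdf are illustrative choices, not the paper's scheme:

        import numpy as np

        rng = np.random.default_rng(31)

        # Isotropic source: pdf of mu = cos(theta) is 1/2 on [-1, 1].
        # Bias: with probability p_cone, sample uniformly inside the capsule cone
        # (mu in [mu_min, 1]); otherwise sample uniformly outside it.
        mu_min, p_cone = 0.99, 0.5        # small cone, heavily oversampled
        n = 1_000_000

        in_cone = rng.random(n) < p_cone
        mu = np.where(in_cone,
                      rng.uniform(mu_min, 1.0, n),
                      rng.uniform(-1.0, mu_min, n))

        # Biased pdf value at each sampled mu, then weight = analog_pdf / biased_pdf
        pdf_biased = np.where(mu >= mu_min,
                              p_cone / (1.0 - mu_min),
                              (1.0 - p_cone) / (mu_min + 1.0))
        weight = 0.5 / pdf_biased

        # Tally: fraction of emission entering the cone (analog answer: (1 - mu_min)/2)
        tally = np.sum(weight[mu >= mu_min]) / n
        print("biased estimate:", tally, " analog value:", (1.0 - mu_min) / 2.0)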

  16. New coding technique for computer generated holograms.

    Science.gov (United States)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  17. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials

  18. [Development of computer aided forming techniques in manufacturing scaffolds for bone tissue engineering].

    Science.gov (United States)

    Wei, Xuelei; Dong, Fuhui

    2011-12-01

    To review recent advances in the research and application of computer aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer aided forming techniques for bone scaffold construction using various scaffold materials, based on computer aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selected laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.

  19. Numerical computation of discrete differential scattering cross sections for Monte Carlo charged particle transport

    International Nuclear Information System (INIS)

    Walsh, Jonathan A.; Palmer, Todd S.; Urbatsch, Todd J.

    2015-01-01

    Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
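    A minimal sketch of deriving discrete angular data from a continuous scattering law. Note the paper builds Gauss-Radau abscissas from numerically computed cross section moments; this toy version instead simply bin-integrates a model cross section, and the screened-Rutherford shape and screening parameter eta are illustrative assumptions.

```python
import numpy as np

eta = 1e-3                                  # hypothetical screening parameter

def dsigma_dmu(mu):                         # mu = cos(scattering angle)
    return 1.0 / (1.0 + eta - mu) ** 2      # screened-Rutherford-like shape

edges = np.linspace(-1.0, 1.0, 33)          # 32 angular bins
centres = 0.5 * (edges[:-1] + edges[1:])

# Midpoint-rule integral of dsigma/dmu over each bin, normalized into a
# discrete probability table.
sub = np.linspace(edges[:-1], edges[1:], 200)       # 200 sub-points per bin
probs = dsigma_dmu(sub).mean(axis=0) * np.diff(edges)
probs /= probs.sum()

# Sampling a discrete scattering angle is then a table lookup.
rng = np.random.default_rng(1)
picks = rng.choice(len(probs), size=5, p=probs)
print("sampled mu values:", centres[picks])
```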

  20. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    DEFF Research Database (Denmark)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto

    2013-01-01

    Abstract Purpose. Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability of predicting the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from...

  1. Belle monte-carlo production on the Amazon EC2 cloud

    International Nuclear Information System (INIS)

    Sevior, Martin; Fifield, Tom; Katayama, Nobuhiko

    2010-01-01

    The Belle II experiment, which aims to increase the luminosity of the KEKB collider by a factor of 50, will search for physics beyond the Standard Model through precision measurements and the investigation of rare processes in flavour physics. The expected data rate is comparable to a current-era LHC experiment, with commensurate computing needs. Incorporating commercial cloud computing, such as that provided by the Amazon Elastic Compute Cloud (EC2), into the Belle II computing model may provide a lower Total Cost of Ownership for the Belle II computing solution. To investigate this possibility, we have created a system to conduct the complete Belle Monte Carlo simulation chain on EC2 to benchmark the cost and performance of the service. This paper describes how this was achieved, in addition to the drawbacks and costs of large-scale Monte Carlo production on EC2.

  2. Belle monte-carlo production on the Amazon EC2 cloud

    Energy Technology Data Exchange (ETDEWEB)

    Sevior, Martin; Fifield, Tom [University of Melbourne, School of Physics, Victoria 3010 Australia (Australia); Katayama, Nobuhiko, E-mail: msevior@gmail.co, E-mail: fifieldt@unimelb.edu.a, E-mail: nobu.katayama@kek.j [High Energy Accelerator Research Organisation (KEK), Tsukuba (Japan)

    2010-04-01

    The Belle II experiment, which aims to increase the luminosity of the KEKB collider by a factor of 50, will search for physics beyond the Standard Model through precision measurements and the investigation of rare processes in flavour physics. The expected data rate is comparable to a current-era LHC experiment, with commensurate computing needs. Incorporating commercial cloud computing, such as that provided by the Amazon Elastic Compute Cloud (EC2), into the Belle II computing model may provide a lower Total Cost of Ownership for the Belle II computing solution. To investigate this possibility, we have created a system to conduct the complete Belle Monte Carlo simulation chain on EC2 to benchmark the cost and performance of the service. This paper describes how this was achieved, in addition to the drawbacks and costs of large-scale Monte Carlo production on EC2.

  3. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    International Nuclear Information System (INIS)

    Churmakov, D Y; Meglinski, I V; Piletsky, S A; Greenhalgh, D A

    2003-01-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that arises from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an 'effective' depth

  4. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Churmakov, D Y [School of Engineering, Cranfield University, Cranfield, MK43 0AL (United Kingdom); Meglinski, I V [School of Engineering, Cranfield University, Cranfield, MK43 0AL (United Kingdom); Piletsky, S A [Institute of BioScience and Technology, Cranfield University, Silsoe, MK45 4DT (United Kingdom); Greenhalgh, D A [School of Engineering, Cranfield University, Cranfield, MK43 0AL (United Kingdom)

    2003-07-21

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that arises from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an 'effective' depth.

  5. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    Science.gov (United States)

    Y Churmakov, D.; Meglinski, I. V.; Piletsky, S. A.; Greenhalgh, D. A.

    2003-07-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that arises from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an `effective' depth.

  6. MOCARS: a Monte Carlo code for determining the distribution and simulation limits

    International Nuclear Information System (INIS)

    Matthews, S.D.

    1977-07-01

    MOCARS is a computer program designed for the INEL CDC 76-173 operating system to determine the distribution and simulation limits for a function by Monte Carlo techniques. The code randomly samples data from any of the 12 user-specified distributions and then either evaluates the cut set system unavailability or a user-specified function with the sample data. After the data are ordered, the values at various quantiles and the associated confidence bounds are calculated for output. Also available for output on microfilm are the frequency and cumulative distribution histograms from the sample data. 29 figures, 4 tables
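    The core loop MOCARS implements can be sketched in a few lines; this is a minimal stand-in assuming illustrative lognormal/normal input distributions and a toy user function (the real code offers 12 distributions and cut-set unavailability evaluation).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

x = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # illustrative input distribution
y = rng.normal(loc=2.0, scale=0.3, size=n)       # illustrative input distribution

g = np.sort(x * y)          # evaluate the user function, then order the data

# Report values at various quantiles, as MOCARS does for its output tables.
for q in (0.05, 0.50, 0.95):
    print(f"{q:>4.0%} quantile: {g[int(q * (n - 1))]:.4f}")
```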

  7. Monte Carlo Simulation of a Linear Accelerator and Electron Beam Parameters Used in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Bahreyni Toossi

    2009-06-01

    Introduction: In recent decades, several Monte Carlo codes have been introduced for research and medical applications. These methods provide both accurate and detailed calculation of particle transport from linear accelerators. The main drawback of Monte Carlo techniques is the extremely long computing time that is required in order to obtain a dose distribution with good statistical accuracy. Material and Methods: In this study, the MCNP-4C Monte Carlo code was used to simulate the electron beams generated by a Neptun 10 PC linear accelerator. The depth dose curves, parameters related to depth dose, and beam profiles were calculated for 6, 8 and 10 MeV electron beams with different field sizes, and these data were compared with the corresponding measured values. The actual dosimetry was performed by employing a Welhofer-Scanditronix dose scanning system, semiconductor detectors and ionization chambers. Results: The results showed good agreement (better than 2%) between calculated and measured depth doses and lateral dose profiles for all energies in different field sizes. Also, good agreement was achieved between calculated and measured electron beam parameters such as E0, Rq, Rp and R50. Conclusion: The simulated model of the linac developed in this study is capable of computing electron beam data in a water phantom for different field sizes and the resulting data can be used to predict the dose distributions in other complex geometries.

  8. KAMCCO, a reactor physics Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1976-06-01

    KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme), and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined, also ratios of these quantities can be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: Importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first order anisotropy in the center of mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. They are computed by a) linear interpolation, b) from optionally Doppler broadened single level Breit-Wigner resonances or c) from probability tables (in the region of statistically distributed resonances). (orig.)

  9. Monte Carlo simulations in theoretical physics

    International Nuclear Information System (INIS)

    Billoire, A.

    1991-01-01

    After a presentation of the principles of the Monte Carlo method, the method is applied first to the calculation of critical exponents in the three-dimensional Ising model, and secondly to discrete quantum chromodynamics, with computing times given as a function of computer power. 28 refs., 4 tabs.

  10. General purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.

    1983-01-01

    A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general purpose computer, operating on a vector describing the time dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations

  11. [Cardiac computed tomography: new applications of an evolving technique].

    Science.gov (United States)

    Martín, María; Corros, Cecilia; Calvo, Juan; Mesa, Alicia; García-Campos, Ana; Rodríguez, María Luisa; Barreiro, Manuel; Rozado, José; Colunga, Santiago; de la Hera, Jesús M; Morís, César; Luyando, Luis H

    2015-01-01

    During the last years we have witnessed an increasing development of imaging techniques applied in cardiology. Among them, cardiac computed tomography is an emerging and evolving technique. With the current possibility of very low radiation studies, the applications have expanded and go beyond coronary angiography. In the present article we review the technical developments of cardiac computed tomography and its new applications. Copyright © 2014 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.

  12. Applications of the Monte Carlo simulation in dosimetry and medical physics problems; Aplicaciones de la simulacion Monte Carlo en dosimetria y problemas de fisica medica

    Energy Technology Data Exchange (ETDEWEB)

    Rojas C, E. L., E-mail: leticia.rojas@inin.gob.m [ININ, Gerencia de Ciencias Ambientales, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2010-07-01

    At the present time, the use of computers to solve important problems extends to all areas, whether social, economic, engineering, or basic and applied science. With appropriate handling of computer programs and information, calculations and simulations of real models can be carried out in order to study them and to solve theoretical or applied problems. Processes that contain random variables are susceptible to being approached with the Monte Carlo method. This is a numerical method that, thanks to the improvements in computer processors, can now be applied to many more tasks than at the beginning of its practical application (in the early 1950s). In this work, the application of the Monte Carlo method to the simulation of the interaction of radiation with matter is addressed, in order to investigate dosimetric aspects of some problems that exist in the medical physics area. An introduction covering some historical background and some general concepts related to Monte Carlo simulation is also included. (Author)

  13. Monte Carlo dosimetry for synchrotron stereotactic radiotherapy of brain tumours

    International Nuclear Information System (INIS)

    Boudou, Caroline; Balosso, Jacques; Esteve, Francois; Elleaume, Helene

    2005-01-01

    A radiation dose enhancement can be obtained in brain tumours after infusion of an iodinated contrast agent and irradiation with kilovoltage x-rays in tomography mode. The aim of this study was to assess the dosimetric properties of the synchrotron stereotactic radiotherapy (SSR) technique applied to humans, in preparation for clinical trials. We designed an interface for dose computation based on a Monte Carlo code (MCNPX). A patient head was constructed from computed tomography (CT) data and a tumour volume was modelled. Dose distributions were calculated in the SSR configuration for various beam energies and iodine contents in the target volume. From the calculations, it appears that the iodine-filled target (10 mg ml⁻¹) can be efficiently irradiated by a monochromatic beam of energy ranging from 50 to 85 keV. This paper demonstrates the feasibility of stereotactic radiotherapy for treating deep-seated brain tumours with monoenergetic x-rays from a synchrotron

  14. EGS code system: computer programs for the Monte Carlo simulation of electromagnetic cascade showers. Version 3

    International Nuclear Information System (INIS)

    Ford, R.L.; Nelson, W.R.

    1978-06-01

    A code to simulate almost any electron-photon transport problem conceivable is described. The report begins with a lengthy historical introduction and a description of the shower generation process. Then the detailed physics of the shower processes and the methods used to simulate them are presented. Ideas of sampling theory, transport techniques, particle interactions in general, and programming details are discussed. Next, EGS calculations and various experiments and other Monte Carlo results are compared. The remainder of the report consists of user manuals for the EGS, PEGS, and TESTSR codes; options, input specifications, and typical output are included. 38 figures, 12 tables

  15. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
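    The conversion idea can be illustrated with its simplest symmetric-Gaussian variant; this is a minimal sketch with synthetic placeholder error sets f_plus/f_minus, whereas the paper's actual procedure also handles asymmetric uncertainties and positivity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_eig, n_grid = 28, 100                    # illustrative dimensions
f0 = np.ones(n_grid)                       # placeholder central PDF values
f_plus = f0 * (1.0 + 0.02 * rng.random((n_eig, n_grid)))    # +eigenvector sets
f_minus = f0 * (1.0 - 0.02 * rng.random((n_eig, n_grid)))   # -eigenvector sets

def make_replica():
    r = rng.standard_normal(n_eig)         # one Gaussian number per Hessian direction
    return f0 + 0.5 * (f_plus - f_minus).T @ r

replicas = np.array([make_replica() for _ in range(1_000)])

# The replica ensemble reproduces the symmetrized Hessian uncertainty:
print("MC std  :", replicas[:, 0].std(ddof=1))
print("Hessian :", 0.5 * np.linalg.norm((f_plus - f_minus)[:, 0]))
```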

  16. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
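    For readers unfamiliar with the 1953 local move referenced above, here is a minimal hard-disk Metropolis sketch in a periodic box; the disk count, diameter, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
L, sigma, n = 1.0, 0.08, 20          # box side, disk diameter, number of disks

# Start from a crude overlap-free grid.
g = int(np.ceil(np.sqrt(n)))
pos = np.array([((i % g) + 0.5, (i // g) + 0.5) for i in range(n)]) * (L / g)

def overlaps(p, others):
    d = np.abs(others - p)
    d = np.minimum(d, L - d)                 # minimum-image convention
    return np.any(np.hypot(d[:, 0], d[:, 1]) < sigma)

accepted = 0
for step in range(10_000):
    k = rng.integers(n)                      # pick a disk at random
    trial = (pos[k] + rng.uniform(-0.02, 0.02, 2)) % L
    if not overlaps(trial, np.delete(pos, k, axis=0)):
        pos[k] = trial                       # hard disks: accept iff no overlap
        accepted += 1
print("acceptance rate:", accepted / 10_000)
```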

  17. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
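    The quasi-Monte Carlo sampling step can be sketched as follows, with a toy Gaussian surrogate standing in for the sparse-grid interpolant of the groundwater model; the two-parameter setup and prediction function are illustrative only.

```python
import numpy as np
from scipy.stats import qmc

def surrogate_log_posterior(theta):
    # Toy stand-in for the sparse-grid polynomial surrogate.
    return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2, axis=1)

sampler = qmc.Sobol(d=2, scramble=True, seed=0)
theta = sampler.random(2**12)                  # low-discrepancy points in [0, 1)^2
w = np.exp(surrogate_log_posterior(theta))     # surrogate posterior density values

prediction = theta[:, 0] + theta[:, 1]         # illustrative model prediction
mean = np.sum(w * prediction) / np.sum(w)      # posterior-weighted statistic, no MCMC
print("posterior mean prediction:", mean)
```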

  18. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model, which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic

  19. Planar and SPECT Monte Carlo acceleration using a variance reduction technique in I-131 imaging

    International Nuclear Information System (INIS)

    Khosravi, H. R.; Sarkar, S.; Takavar, A.; Saghari, M.; Shahriari, M.

    2007-01-01

    Various variance reduction techniques such as forced detection (FD) have been implemented in Monte Carlo (MC) simulation of nuclear medicine in an effort to decrease the simulation time while keeping accuracy. However, most of these techniques still result in very long MC simulation times for routine use. Materials and Methods: The convolution-based forced detection (CFD) method was implemented as a variance reduction technique into the well-known SIMIND MC photon simulation software. A variety of simulations, including point and extended sources in uniform and non-uniform attenuation media, were performed to compare differences between the FD and CFD versions of SIMIND modeling for the I-131 radionuclide and camera configurations. Experimental measurement of the system response function was compared to FD and CFD simulation data. Results: Different simulations using the CFD method agree very well with experimental measurements as well as with the FD version. CFD simulations of the system response function and larger sources in uniform and non-uniform attenuated phantoms also agree well with the FD version of SIMIND. Conclusion: CFD has been modeled into the SIMIND MC program and validated. With the current implementation of CFD, simulation times were approximately 10-15 times shorter, with similar accuracy and image quality compared with FD MC

  20. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been proposed for predicting software reliability. These reliability models are restricted to particular types of methodologies and a restricted number of parameters, although a number of techniques and methodologies may be used for reliability prediction. There is a need to focus on parameter selection while estimating reliability, since the reliability of a system may increase or decrease depending on the parameters used; thus, there is a need to identify the factors that heavily affect the reliability of the system. At present, reusability is widely used across research areas. Reusability is the basis of Component-Based Systems (CBS); cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available for applying soft computing techniques to medicine-related problems: clinical medicine makes significant use of fuzzy logic and neural network methodologies, basic medical science most frequently uses neural networks and genetic algorithms, and medical scientists have shown strong interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software for making new products, providing quality with a saving of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents the working of soft computing

  1. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster

    International Nuclear Information System (INIS)

    Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)
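    The "simple approach" of running fully independent batches can be sketched as below; local worker processes stand in for cluster nodes, and the toy exponential-attenuation tally stands in for an MCBEND shielding run.

```python
import numpy as np
from multiprocessing import Pool

def run_batch(seed, n=200_000):
    # Toy tally standing in for one shielding run: fraction of exponential
    # path lengths that exceed a shield thickness of 3 mean free paths.
    rng = np.random.default_rng(seed)
    return float(np.mean(rng.exponential(1.0, n) > 3.0))

if __name__ == "__main__":
    seeds = list(range(16))                 # one independent batch per worker/node
    with Pool() as pool:
        tallies = pool.map(run_batch, seeds)
    est = np.mean(tallies)
    err = np.std(tallies, ddof=1) / np.sqrt(len(tallies))
    print(f"transmission = {est:.5f} +/- {err:.5f} (exact exp(-3) = {np.exp(-3):.5f})")
```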

  2. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

    The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most of the standard guides for calculating the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that must be provided to the transport code used, so that sufficiently accurate flux values can be found. The reactor core assembly configuration is based on X-Y geometry; however, the problem considered is solved in R-θ geometry, so an appropriate mapping is needed to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (source distribution calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those found in the literature. The mesh-overlap method used here is based on a random point generation technique, commonly known as the Monte Carlo technique. Although the 'randomness' of this technique implies errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area or some other quantity of interest increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique achieves good general behaviour when a considerably high number of points is used (greater than or equal to one hundred thousand), which ensures calculation errors of the order of 1%. (Author)
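    A minimal sketch of the mesh-overlap mapping, under assumed cell bounds and an assumed R-θ grid (the real code processes every assembly cell of the core): random points scattered in a rectangular cell are tallied by the (r, θ) interval they land in, apportioning the cell's source strength accordingly.

```python
import numpy as np

rng = np.random.default_rng(11)

x0, x1, y0, y1 = 1.0, 2.0, 0.5, 1.5     # bounds of one X-Y source cell (illustrative)
strength = 1.0                          # fixed source strength in that cell
r_edges = np.linspace(0.0, 3.0, 7)      # illustrative R-theta mesh
t_edges = np.linspace(0.0, np.pi / 2, 7)

# Scatter random points in the cell and see which (r, theta) interval each hits.
n = 100_000
x = rng.uniform(x0, x1, n)
y = rng.uniform(y0, y1, n)
counts, _, _ = np.histogram2d(np.hypot(x, y), np.arctan2(y, x),
                              bins=[r_edges, t_edges])

# Apportion the cell's source strength by overlap fraction.
source_rt = strength * counts / n
print("apportioned source sums to ~strength:", source_rt.sum())
```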

  3. Continuum variational and diffusion quantum Monte Carlo calculations

    International Nuclear Information System (INIS)

    Needs, R J; Towler, M D; Drummond, N D; Lopez Rios, P

    2010-01-01

    This topical review describes the methodology of continuum variational and diffusion quantum Monte Carlo calculations. These stochastic methods are based on many-body wavefunctions and are capable of achieving very high accuracy. The algorithms are intrinsically parallel and well suited to implementation on petascale computers, and the computational cost scales as a polynomial in the number of particles. A guide to the systems and topics which have been investigated using these methods is given. The bulk of the article is devoted to an overview of the basic quantum Monte Carlo methods, the forms and optimization of wavefunctions, performing calculations under periodic boundary conditions, using pseudopotentials, excited-state calculations, sources of calculational inaccuracy, and calculating energy differences and forces. (topical review)

  4. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    Science.gov (United States)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
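    The null-space projection that underlies the technique can be sketched with a synthetic Jacobian; this is a generic illustration of the idea, not pyNSMC's actual API.

```python
import numpy as np

rng = np.random.default_rng(5)

n_obs, n_par = 20, 100                     # a high-dimensional inverse problem
J = rng.standard_normal((n_obs, n_par))    # synthetic stand-in for the model Jacobian
p_cal = rng.standard_normal(n_par)         # calibrated parameter vector

# Right singular vectors beyond the numerical rank span the null space of J.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = int(np.sum(s > 1e-10 * s[0]))
V2 = Vt[rank:].T                           # null-space basis, shape (n_par, n_par - rank)

def nsmc_realization():
    p_draw = p_cal + rng.standard_normal(n_par)       # prior-style random draw
    return p_cal + V2 @ (V2.T @ (p_draw - p_cal))     # keep only null-space components

p_new = nsmc_realization()
# The projected realization leaves the simulated observations essentially unchanged:
print("||J (p_new - p_cal)|| =", np.linalg.norm(J @ (p_new - p_cal)))
```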

  5. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
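    The history-based versus event-based distinction can be illustrated on a toy attenuation problem; the slab geometry and coefficients below are illustrative, and real SPECT transport involves scattering and camera modeling far beyond this sketch.

```python
import numpy as np

T, mu = 5.0, 0.2             # slab thickness (cm) and attenuation coefficient (1/cm)
rng = np.random.default_rng(9)

# History-based: one photon at a time in a scalar loop (only 1000 shown; slow).
detected_scalar = sum(rng.exponential(1.0 / mu) > T for _ in range(1_000))

# Event-based: the same free-flight "event" applied to a whole array of
# histories at once, which is what a vector unit can pipeline.
n = 1_000_000
path = rng.exponential(1.0 / mu, n)
detected_vector = np.count_nonzero(path > T)

print("scalar estimate :", detected_scalar / 1_000)
print("vector estimate :", detected_vector / n, "(exact:", np.exp(-mu * T), ")")
```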

  6. A Computer Program to Model Passive Acoustic Antisubmarine Search Using Monte Carlo Simulation Techniques.

    Science.gov (United States)

    1983-09-01

    duplicate a continuous function on a digital computer, and thus the machine representation of the GMA is only a close approximation of the continuous error process. Thus, the manner in which the GMA process is digitally replicated has an effect on the results of the simulation. The parameterization of...

  7. The analysis of gastric function using computational techniques

    International Nuclear Information System (INIS)

    Young, Paul

    2002-01-01

    The work presented in this thesis was carried out at the Magnetic Resonance Centre, Department of Physics and Astronomy, University of Nottingham, between October 1996 and June 2000. This thesis describes the application of computerised techniques to the analysis of gastric function, in relation to Magnetic Resonance Imaging data. The implementation of a computer program enabling the measurement of motility in the lower stomach is described in Chapter 6. This method allowed the dimensional reduction of multi-slice image data sets into a 'Motility Plot', from which the motility parameters - the frequency, velocity and depth of contractions - could be measured. The technique was found to be simple, accurate and involved substantial time savings, when compared to manual analysis. The program was subsequently used in the measurement of motility in three separate studies, described in Chapter 7. In Study 1, four different meal types of varying viscosity and nutrient value were consumed by 12 volunteers. The aim of the study was (i) to assess the feasibility of using the motility program in a volunteer study and (ii) to determine the effects of the meals on motility. The results showed that the parameters were remarkably consistent between the 4 meals. However, for each meal, velocity and percentage occlusion were found to increase as contractions propagated along the antrum. The first clinical application of the motility program was carried out in Study 2. Motility from three patients was measured, after they had been referred to the Magnetic Resonance Centre with gastric problems. The results showed that one of the patients displayed an irregular motility, compared to the results of the volunteer study. This result had not been observed using other investigative techniques. In Study 3, motility was measured in Low Viscosity and High Viscosity liquid/solid meals, with the solid particulate consisting of agar beads of varying breakdown strength. The results showed that

  8. Octree indexing of DICOM images for voxel number reduction and improvement of Monte Carlo simulation computing efficiency

    International Nuclear Information System (INIS)

    Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc

    2006-01-01

    The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches into the hundreds of millions. CT data, however, contain homogeneous regions which could be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose a compression algorithm built on octrees: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates up to 75% are possible without losing the dosimetric accuracy of the simulation
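    A minimal sketch of the octree merging idea, assuming a power-of-two cubic volume and an illustrative homogeneity tolerance: a block whose voxel values span less than the tolerance becomes one large voxel, otherwise it splits into eight octants.

```python
import numpy as np

def octree(block, origin, tol, out):
    # Homogeneous (or single-voxel) blocks become one merged voxel.
    if block.max() - block.min() <= tol or block.shape[0] == 1:
        out.append((origin, block.shape[0], float(block.mean())))
        return
    h = block.shape[0] // 2                    # power-of-two cubes assumed
    for ox in (0, h):
        for oy in (0, h):
            for oz in (0, h):
                octree(block[ox:ox + h, oy:oy + h, oz:oz + h],
                       (origin[0] + ox, origin[1] + oy, origin[2] + oz), tol, out)

rng = np.random.default_rng(2)
ct = np.full((32, 32, 32), 1.0)                                  # homogeneous "water"
ct[10:20, 10:20, 10:20] += rng.normal(0.5, 0.1, (10, 10, 10))    # heterogeneous insert

merged = []
octree(ct, (0, 0, 0), 0.05, merged)
print(f"{ct.size} voxels compressed to {len(merged)} merged voxels")
```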

  9. Monte Carlo simulation of calibration of shadow shield scanning bed whole body monitor using different size BOMAB phantoms

    International Nuclear Information System (INIS)

    Bhati, S.; Patni, H.K.; Singh, I.S.; Garg, S.P.

    2005-01-01

    A shadow shield scanning bed whole body monitor incorporating a NaI(Tl) detector (102 mm dia x 76 mm thick) is employed for the assessment of high-energy photon emitters at BARC. The monitor is calibrated using a Reference BOMAB phantom representative of an average Indian radiation worker. However, to account for the variation in the physique of workers, the system needs to be calibrated with different-size BOMAB phantoms, which is both difficult and expensive. Therefore, a theoretical approach based on Monte Carlo techniques has been employed to calibrate the system with BOMAB phantoms of different sizes for several radionuclides of interest. A computer program developed for this purpose simulates the scanning geometry of the whole body monitor and computes detection efficiencies for the BARC Reference phantom (63 kg/168 cm), the ICRP Reference phantom (70 kg/170 cm) and several of its scaled versions covering a wide range of body builds. The detection efficiencies computed for different photon energies for the BARC Reference phantom were found to be in very good agreement with experimental data, thus validating the Monte Carlo scheme used in the computer code. The results from this study could be used for the assessment of internal contamination due to high-energy photon emitters for radiation workers of different physiques. (author)

  10. Monte Carlo calculations for doses in organs and tissues to oral radiography; Calculo de Monte Carlo para doses em orgaos e tecidos para radiologia oral

    Energy Technology Data Exchange (ETDEWEB)

    Sampaio, E V.M.

    1986-12-31

    Using the MIRD 5 phantom and the Monte Carlo technique, organ doses in patients undergoing external dental examination were calculated, taking into account the different x-ray beam geometries and the various possible positions of the x-ray source with regard to the head of the patient. It was necessary to introduce into the original computer program a new source description specific to dental examinations. To obtain a realistic evaluation of organ doses during dental examination, it was necessary to introduce a new region in the phantom head which characterizes the teeth and salivary glands. The attenuation of the x-ray beam by the lead shield of the radiographic film was also introduced in the calculation. (author).

  11. Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1989-01-01

    COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community. Each code is a little different in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed for the CRAY (in 1986) to be as precise as the current state of physics knowledge. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to accomplish criticality calculations. It will make an excellent tool for future shipping cask studies

  12. Image quality and dose assessment in digital breast tomosynthesis: A Monte Carlo study

    International Nuclear Information System (INIS)

    Baptista, M.; Di Maria, S.; Oliveira, N.; Matela, N.; Janeiro, L.; Almeida, P.; Vaz, P.

    2014-01-01

    Mammography is considered a standard technique for the early detection of breast cancer. However, its sensitivity is limited, essentially due to overlapping breast tissue. This limitation can be partially overcome with a relatively new technique called digital breast tomosynthesis (DBT). For this technique, optimization of acquisition parameters which maximize image quality, whilst complying with the ALARA principle, continues to be an area of considerable research. The aim of this work was to study the quantum energies that optimize image quality at the lowest achievable dose in DBT and to compare these results with those for digital mammography (DM). Monte Carlo simulations were performed using the state-of-the-art computer program MCNPX 2.7.0 in order to generate several 2D cranio-caudal (CC) projections obtained during an acquisition of a standard DBT examination. Moreover, glandular absorbed dose and photon flux calculations were performed for each projection image. A homogeneous breast computational phantom with 50%/50% glandular/adipose tissue composition was used and two compressed breast thicknesses were evaluated: 4 cm and 8 cm. The simulated projection images were afterwards reconstructed with an algebraic reconstruction tool and the signal difference to noise ratio (SDNR) was calculated in order to evaluate the image quality in DBT and DM. Finally, a thorough comparison between the results obtained in terms of SDNR and dose assessment in DBT and DM was performed. - Highlights: • Optimization of the image quality in digital breast tomosynthesis. • Calculation of photon energies that maximize the signal difference to noise ratio. • Projection images and dose calculations through the Monte Carlo (MC) method. • Tumor masses and microcalcifications included in the MC model. • A dose saving of about 30% can be reached if optimal photon energies are used

  13. Evolutionary computation techniques a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego

    2017-01-01

    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature, however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to other popular EC methods.

  14. Computing bubble-points of CO2

    NARCIS (Netherlands)

    Ramdin, M.; Balaji, S.P.; Vicent Luna, J.M.; Torres-Knoop, A; Chen, Q.; Dubbeldam, D.; Calero, S; de Loos, T.W.; Vlugt, T.J.H.

    2016-01-01

    Computing bubble-points of multicomponent mixtures using Monte Carlo simulations is a non-trivial task. A new method is used to compute gas compositions from a known temperature, bubble-point pressure, and liquid composition. Monte Carlo simulations are used to calculate the bubble-points of

  15. Speed-up of ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo simulations by using an auxiliary potential energy surface

    International Nuclear Information System (INIS)

    Nakayama, Akira; Taketsugu, Tetsuya; Shiga, Motoyuki

    2009-01-01

    Efficiency of the ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo methods is enhanced by employing an auxiliary potential energy surface that is used to update the system configuration via a molecular dynamics scheme. As a simple illustration of this method, a dual-level approach is introduced where potential energy gradients are evaluated by computationally less expensive ab initio electronic structure methods. (author)
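    The dual-level idea can be sketched in one dimension: molecular dynamics trajectories are driven by a cheap auxiliary potential, and a Metropolis test on the target potential corrects for the difference. Both potentials below are toy stand-ins for ab initio surfaces, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

V = lambda x: 0.5 * x**2 + 0.1 * x**4      # "expensive" target surface (toy)
V_aux = lambda x: 0.5 * x**2               # cheap auxiliary surface (toy)
dV_aux = lambda x: x                       # gradient of the auxiliary surface

def leapfrog(x, p, dt=0.1, steps=10):
    # Molecular dynamics trajectory on the AUXILIARY surface only.
    p = p - 0.5 * dt * dV_aux(x)
    for _ in range(steps - 1):
        x = x + dt * p
        p = p - dt * dV_aux(x)
    x = x + dt * p
    p = p - 0.5 * dt * dV_aux(x)
    return x, p

x, samples = 0.0, []
for _ in range(5_000):
    p = rng.standard_normal()
    x_new, p_new = leapfrog(x, p)
    # Metropolis test on the TARGET Hamiltonian: the V/V_aux mismatch only
    # lowers the acceptance rate, not the correctness of the sampling.
    dH = (V(x_new) + 0.5 * p_new**2) - (V(x) + 0.5 * p**2)
    if np.log(rng.random()) < -dH:
        x = x_new
    samples.append(x)

print("<x^2> under the target distribution:", np.mean(np.square(samples[500:])))
```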

  16. Computational techniques in tribology and material science at the atomic level

    Science.gov (United States)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.

  17. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

    This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. The Standard Monte Carlo (SMC) simulations may be used for this reason conceptually as an alternative to the popular Peaks-Over-Threshold (POT) method. However......, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy...... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated...

  18. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  19. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
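    The "simple example" from the outline, as a minimal sketch: estimate π by rejection counting, since the fraction of uniform points in the unit square that fall inside the quarter circle approaches π/4.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y = rng.random(n), rng.random(n)
inside = x * x + y * y < 1.0            # rejection test: inside the quarter circle?

pi_hat = 4.0 * inside.mean()
err = 4.0 * inside.std() / np.sqrt(n)   # CLT error bar, shrinking like 1/sqrt(n)
print(f"pi ~ {pi_hat:.5f} +/- {err:.5f}")
```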

  20. Development and application of computational aerothermodynamics flowfield computer codes

    Science.gov (United States)

    Venkatapathy, Ethiraj

    1993-01-01

    Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation provides an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.

  1. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to obtain the initial k_eff and the differential coefficients with respect to the concerned parameter, a polynomial estimate of the k_eff response function is solved to obtain the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite inspiring and the method overcomes the disadvantages of the traditional one. (authors)
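    The final root-solving step described above can be sketched as follows, with illustrative values for k_eff and its derivatives standing in for the output of the single perturbation-capable criticality run.

```python
import numpy as np

# Illustrative outputs of the single criticality run with perturbation tallies:
k0 = 1.02500          # k_eff at the base parameter value p0
dk_dp = -0.00950      # first derivative of k_eff w.r.t. the search parameter
d2k_dp2 = 0.00012     # second derivative
p0 = 0.0

# Quadratic model k(p) ~ k0 + dk*(p - p0) + 0.5*d2k*(p - p0)^2; solve k(p) = 1.
coeffs = [0.5 * d2k_dp2, dk_dp, k0 - 1.0]
roots = np.roots(coeffs)
real = roots[np.isreal(roots)].real
p_crit = p0 + real[np.argmin(np.abs(real))]   # take the root nearest the base state
print(f"estimated critical parameter value: {p_crit:.4f}")
```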

  2. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  3. Comparison of radiographic technique by computer simulation

    International Nuclear Information System (INIS)

    Brochi, M.A.C.; Ghilardi Neto, T.

    1989-01-01

    A computational algorithm for comparing radiographic techniques (kVp, mAs and filters) is developed, based on fixing the parameters that define the image, such as optical density and contrast. As a preliminary test, the results were applied to a chest radiograph. (author) [pt

  4. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintainability of a personal computer (PC) are very good, a PC is a useful simulation tool for researchers. With the improved performance of PC CPUs, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC computing environment. A graphics processing unit (GPU) on a graphics card, formerly used only to compute display data, now has calculation capability superior to a PC's CPU; its performance is a match for a supercomputer of 2000. Despite this great calculation potential, it is not easy to program a simulation code for a GPU, because the calculation must be expressed as a 3D rendering problem through graphics APIs. In 2006, NVIDIA provided a software development kit (SDK) for programming its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architecture of NVIDIA's GPUs and CUDA, and carries out a performance benchmark for Monte Carlo simulation.

  5. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintainability of a personal computer (PC) are very good, a PC is a useful simulation tool for researchers. With the improved performance of PC CPUs, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC computing environment. A graphics processing unit (GPU) on a graphics card, formerly used only to compute display data, now has calculation capability superior to a PC's CPU; its performance is a match for a supercomputer of 2000. Despite this great calculation potential, it is not easy to program a simulation code for a GPU, because the calculation must be expressed as a 3D rendering problem through graphics APIs. In 2006, NVIDIA provided a software development kit (SDK) for programming its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architecture of NVIDIA's GPUs and CUDA, and carries out a performance benchmark for Monte Carlo simulation
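
    The two records above describe a CUDA benchmark; full CUDA code is beyond a short note, but the speedup mechanism (evaluating many independent Monte Carlo samples in lockstep instead of one at a time) can be sketched in Python, with NumPy's array operations standing in for the data-parallel hardware and all sizes illustrative:

        import time
        import numpy as np

        def pi_scalar(n):
            """One sample at a time: the CPU-style loop."""
            rng = np.random.default_rng(0)
            hits = 0
            for _ in range(n):
                x, y = rng.random(), rng.random()
                hits += x * x + y * y <= 1.0
            return 4.0 * hits / n

        def pi_vectorized(n):
            """All samples in one data-parallel batch: the GPU-style formulation."""
            rng = np.random.default_rng(0)
            x, y = rng.random(n), rng.random(n)
            return 4.0 * np.count_nonzero(x * x + y * y <= 1.0) / n

        for f in (pi_scalar, pi_vectorized):
            t0 = time.perf_counter()
            est = f(200_000)
            print(f"{f.__name__}: pi ~ {est:.4f} in {time.perf_counter() - t0:.3f} s")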

  6. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone... and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques...
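
    Of the three AI techniques listed, the rule-based component is the easiest to sketch; the toy forward-chaining engine below uses hypothetical network-maintenance rules (not taken from the UIMS described here) to show the fire-until-stable loop:

        # Toy forward-chaining rule engine: each rule maps a set of required facts
        # to a new fact, and rules fire until no rule adds anything new.
        RULES = [
            ({"line_test_failed", "line_configured"}, "open_trouble_ticket"),
            ({"open_trouble_ticket"}, "notify_technician"),
        ]

        def forward_chain(facts, rules):
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(forward_chain({"line_test_failed", "line_configured"}, RULES))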

  7. Monte Carlos of the new generation: status and progress

    International Nuclear Information System (INIS)

    Frixione, Stefano

    2005-01-01

    Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron

  8. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

    This paper summarizes two improvements of a real production code obtained by vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work done to obtain a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP, using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say, a minimum size for a task; a modern analogue of this experiment is sketched below. With the second example we propose a method that is strongly X-MP oriented, in order to obtain the best speedup factor on such a computer. In conclusion we show that Monte Carlo algorithms are very well suited to future vector and parallel computers. (orig.)
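
    The granularity experiment translates directly to modern hardware; a rough Python analogue (multiprocessing standing in for COS multitasking, with illustrative task sizes) times the same total workload at several task sizes to expose the overhead of tasks that are too small:

        import time
        from multiprocessing import Pool

        def work(chunk):
            """Stand-in for one Monte Carlo task: a fixed amount of arithmetic."""
            s = 0.0
            for i in range(chunk):
                s += (i % 7) * 0.5
            return s

        if __name__ == "__main__":
            total = 1_000_000
            with Pool(4) as pool:
                for chunk in (100, 10_000, 250_000):   # task sizes to compare
                    tasks = [chunk] * (total // chunk)
                    t0 = time.perf_counter()
                    pool.map(work, tasks)
                    print(f"task size {chunk:>7}: {time.perf_counter() - t0:.3f} s")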

  9. Comparison of MCB and MONTEBURNS Monte Carlo burnup codes on a one-pass deep burn

    International Nuclear Information System (INIS)

    Talamo, Alberto; Ji, Wei; Cetnar, Jerzy; Gudowski, Waclaw

    2006-01-01

    Numerical applications of the Monte Carlo method have developed in step with the increase of computer power; nowadays, in the field of nuclear reactor physics, it is possible to perform burnup simulations in a detailed 3D geometry with a continuous energy description by the Monte Carlo method; moreover, the required computing time can be substantially reduced by taking advantage of a computer cluster. In this paper we focus on comparing the results of the two major Monte Carlo burnup codes, MONTEBURNS and MCB, when they share the same MCNP geometry, nuclear data library, core thermal power, and refueling and shuffling schedule. Simulating a total operation time of 2100 effective full power days for the Gas Turbine-Modular Helium Reactor with a one-pass deep burn in-core fuel management schedule, we found that the two Monte Carlo codes produce very similar results both for the criticality value of the core and for the transmutation of the key actinides

  10. Comparison of MCB and MONTEBURNS Monte Carlo burnup codes on a one-pass deep burn

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto [Royal Institute of Technology (KTH), Roslagstullsbacken 21, Stockholm S-10691 (Sweden)]. E-mail: alby@anl.gov; Ji, Wei [University of Michigan, Bonisteel Boulevard 2355, Ann Arbor, MI 48109-2104 (United States); Cetnar, Jerzy [AGH-University of Science and Technology, Al. Mickiewicza 30 Cracow (Poland); Gudowski, Waclaw [Royal Institute of Technology (KTH), Roslagstullsbacken 21, Stockholm S-10691 (Sweden)

    2006-09-15

    Numerical applications of the Monte Carlo method have developed in step with the increase of computer power; nowadays, in the field of nuclear reactor physics, it is possible to perform burnup simulations in a detailed 3D geometry with a continuous energy description by the Monte Carlo method; moreover, the required computing time can be substantially reduced by taking advantage of a computer cluster. In this paper we focus on comparing the results of the two major Monte Carlo burnup codes, MONTEBURNS and MCB, when they share the same MCNP geometry, nuclear data library, core thermal power, and refueling and shuffling schedule. Simulating a total operation time of 2100 effective full power days for the Gas Turbine-Modular Helium Reactor with a one-pass deep burn in-core fuel management schedule, we found that the two Monte Carlo codes produce very similar results both for the criticality value of the core and for the transmutation of the key actinides.

  11. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts of the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples; near 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and other low-order moments of the particle size distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
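
    A minimal serial sketch of the operator splitting described above, with a deterministic growth substep followed by a stochastic coagulation substep (the constant kernel, rates, and particle counts are illustrative, and the MPI parallelization is omitted):

        import random

        GROWTH_RATE = 0.05   # deterministic surface-growth rate (illustrative)
        KERNEL = 1.0e-3      # constant coagulation kernel (illustrative)

        def step(volumes, dt):
            # Deterministic half of the split: surface growth on every particle.
            volumes = [v * (1.0 + GROWTH_RATE * dt) for v in volumes]
            # Stochastic half: constant-kernel coagulation events during dt.
            n = len(volumes)
            events = int(KERNEL * n * (n - 1) / 2 * dt)   # expected event count
            for _ in range(events):
                if len(volumes) < 2:
                    break
                i, j = random.sample(range(len(volumes)), 2)
                volumes[i] += volumes[j]                  # merge particle j into i
                volumes.pop(j)
            return volumes

        pop = [1.0] * 1000
        for _ in range(100):
            pop = step(pop, 0.1)
        print(f"{len(pop)} particles remain, mean volume {sum(pop)/len(pop):.2f}")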

  12. Computer-Assisted Technique for Surgical Tooth Extraction

    Directory of Open Access Journals (Sweden)

    Hosamuddin Hamza

    2016-01-01

    Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankylosis or the presence of a bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC machining of metals (a hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  13. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
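
    As a hedged illustration of why such realizations are fast (every distribution and the selection cut below are invented placeholders, not the authors' model), a Monte Carlo galaxy can be as simple as drawing binary parameters and applying a cut:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000                              # binaries per realization (illustrative)

        # Toy binary parameters: chirp mass [Msun] and orbital period [s].
        chirp_mass = rng.uniform(0.2, 1.2, N)
        period = 10 ** rng.uniform(2.0, 6.0, N)  # log-uniform, 100 s to ~11 days

        # A circular binary radiates at twice its orbital frequency.
        f_gw = 2.0 / period

        # Placeholder detectability cut standing in for a real sensitivity curve.
        detectable = (f_gw > 1e-4) & (chirp_mass > 0.6)
        print(f"{detectable.sum()} of {N} binaries pass the toy selection")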

  14. Computational Intelligence Techniques for New Product Design

    CERN Document Server

    Chan, Kit Yan; Dillon, Tharam S

    2012-01-01

    Applying computational intelligence to product design is a fast-growing and promising research area in computer science and industrial engineering. However, there is currently a lack of books that discuss this research area. This book discusses a wide range of computational intelligence techniques for application to product design. It covers common issues in product design, from identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, relating design attributes to customer satisfaction, integration of marketing aspects into product design, and affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed in order to address different issues in product design. Case studies of product design, in terms of the development of real-world new products, are included in order to illustrate the design procedures, as well as the effectiveness of the com...

  15. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  16. Artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy

    International Nuclear Information System (INIS)

    Martin, E.; Gschwind, R.; Henriet, J.; Sauget, M.; Makovicka, L.

    2010-01-01

    In order to reduce the computing time needed by Monte Carlo codes in the field of irradiation physics, notably in dosimetry, the authors report the use of artificial neural networks in combination with preliminary Monte Carlo calculations. During the learning phase, Monte Carlo calculations are performed in homogeneous media to build up the neural network. Dosimetric calculations in heterogeneous media unknown to the network can then be performed by the trained network. Results of equivalent precision can be obtained in less than one minute on an ordinary PC, whereas several days are needed for a full Monte Carlo calculation
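
    The paper's network architecture is not given in this record; a generic sketch of the workflow (fit a small network to precomputed Monte Carlo dose points, then evaluate it cheaply) could look like the following, using scikit-learn and a synthetic formula as a stand-in for the Monte Carlo learning set:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Stand-in for the Monte Carlo learning set: (depth, off-axis) -> dose.
        # A real study would use doses tallied by a Monte Carlo code instead.
        X = rng.uniform([0.0, -5.0], [20.0, 5.0], size=(5000, 2))  # depth, off-axis [cm]
        dose = np.exp(-0.08 * X[:, 0]) * np.exp(-0.5 * (X[:, 1] / 2.0) ** 2)

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(X, dose)

        # Once trained, new points are evaluated in microseconds, not hours.
        print(net.predict([[5.0, 0.0], [10.0, 1.5]]))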

  17. From Monte Carlo to Quantum Computation

    OpenAIRE

    Heinrich, Stefan

    2001-01-01

    Quantum computing has so far been mainly concerned with discrete problems. Recently, E. Novak and the author studied quantum algorithms for high dimensional integration and dealt with the question of which advantages quantum computing can bring over classical deterministic or randomized methods for this type of problem. In this paper we give a short introduction to the basic ideas of quantum computing and survey recent results on high dimensional integration. We discuss connections to the Monte Carl...

  18. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program is the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated; the working equations can be applied to any plane of interest. The resulting LSF is passed to subroutines which compute the corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with the appropriate geometrical efficiency data to determine the performance index function. The overall program allows one to predict, from the physical parameters of the collimator alone, how well the collimator will reproduce variously sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
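
    The LSF-to-MTF step in this pipeline is a standard Fourier transform; a minimal NumPy sketch, with a Gaussian standing in for the collimator-derived LSF, is:

        import numpy as np

        # Placeholder line spread function: Gaussian, 5 mm FWHM, 0.1 mm sampling.
        dx = 0.01                              # cm per sample
        x = np.arange(-5.0, 5.0, dx)           # cm
        sigma = 0.5 / 2.355                    # FWHM = 2.355 * sigma
        lsf = np.exp(-0.5 * (x / sigma) ** 2)
        lsf /= lsf.sum()                       # normalize to unit area

        # The MTF is the magnitude of the Fourier transform of the normalized LSF.
        mtf = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(len(x), d=dx)  # cycles per cm

        for f, m in zip(freqs[:5], mtf[:5]):
            print(f"{f:5.2f} cycles/cm: MTF = {m:.3f}")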

  19. Dosimetric reconstruction of radiological accidents by numerical simulation, associating an anthropomorphic model with a Monte Carlo computation code

    International Nuclear Information System (INIS)

    Courageot, Estelle

    2010-01-01

    After a description of the context of radiological accidents (definition, history, context, exposure types, associated clinical symptoms of irradiation and contamination, medical treatment, lessons learned) and a presentation of dose assessment in the case of external exposure (clinical, biological and physical dosimetry), this research thesis describes the principles of the numerical reconstruction of a radiological accident, presents some computation codes (the Monte Carlo method and the MCNPX code) and the SESAME tool, and reports an application to an actual case (an accident which occurred in Ecuador in April 2009). The next part reports the developments performed to modify the posture of voxelized phantoms, and the experimental and numerical validations. The last part reports a feasibility study for the reconstruction of radiological accidents occurring in external radiotherapy. This work is based on a Monte Carlo simulation of a linear accelerator, with the aim of identifying the most relevant parameters to be implemented in SESAME in the case of external radiotherapy

  20. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    International Nuclear Information System (INIS)

    Perko, Z.; Gilli, L.; Lathouwers, D.; Kloosterman, J. L.

    2013-01-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative, providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. A comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and proves advantageous both from an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
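
    For readers unfamiliar with the method, a non-intrusive, non-adaptive polynomial chaos fit in one dimension can be sketched as follows (ordinary least squares on probabilists' Hermite polynomials; the model function is a cheap stand-in for a code such as CATHARE):

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander
        from math import factorial

        def model(xi):
            """Stand-in for an expensive code run with uncertain input xi ~ N(0, 1)."""
            return np.exp(0.3 * xi) + 0.1 * xi ** 2

        ORDER = 6
        rng = np.random.default_rng(1)
        xi = rng.standard_normal(200)          # sampled inputs (code runs)
        y = model(xi)

        # Least-squares fit of PC coefficients on the probabilists' Hermite basis.
        V = hermevander(xi, ORDER)             # V[i, n] = He_n(xi_i)
        coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

        # Moments follow from orthogonality: E[He_n He_m] = n! for n = m, else 0.
        mean = coeffs[0]
        var = sum(c ** 2 * factorial(n) for n, c in enumerate(coeffs[1:], start=1))
        print(f"PC mean = {mean:.4f}, PC std = {np.sqrt(var):.4f}")

        # Cross-check against direct Monte Carlo sampling.
        mc = model(rng.standard_normal(200_000))
        print(f"MC mean = {mc.mean():.4f}, MC std = {mc.std():.4f}")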