Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following topics are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of the JMTR, simulation of a pulsed neutron experiment, core analyses of the HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict nuclear reactor physics parameters responsible for reactivity changes which could lead to accidents and losses. In this work, preliminary results for criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core with fuel elements, radial reflectors, and control rods inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly and the core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were: Doppler Temperature Coefficient, Coolant Temperature Coefficient, Coolant Void Coefficient, Power Coefficient, and Control Rod Worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used to perform more complex studies. Criticality coefficient values for the three simulations showed little discrepancy for almost all coefficients investigated; the only exception was the Power Coefficient. The preliminary results presented show that a simple model, such as a fuel assembly, can describe changes in almost all the criticality coefficients, avoiding the need for a complex core simulation. (author)
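The coefficients discussed above all derive from reactivity differences between a perturbed and a reference state. A minimal sketch of how such a coefficient is formed from two Monte Carlo k-effective estimates; the k-eff and temperature values below are made up for illustration, not results from the paper:

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

def temperature_coefficient(k_cold, t_cold, k_hot, t_hot):
    """Finite-difference reactivity coefficient in pcm/K between two
    k-eff values computed at two temperatures (e.g. a Doppler or
    coolant temperature sweep)."""
    return (reactivity_pcm(k_hot) - reactivity_pcm(k_cold)) / (t_hot - t_cold)

# Hypothetical numbers: k drops from 1.00000 to 0.99000 over a 100 K rise,
# giving a negative (stabilizing) coefficient of about -10.1 pcm/K.
alpha = temperature_coefficient(1.00000, 300.0, 0.99000, 400.0)
```

In practice each k-eff carries a Monte Carlo standard deviation, so the statistical uncertainty of the difference must be propagated before a coefficient this small is trusted.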
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section tables used are those generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second composed of bone, the third composed of lung, and the fourth a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at the boundary is considered only depending on the change of material along the photon's path. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
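The Woodcock (delta-tracking) method mentioned above can be sketched in one dimension: flight distances are sampled from a majorant attenuation coefficient, and tentative collisions are accepted or rejected per voxel, so the photon never needs to stop at voxel boundaries. The attenuation coefficients and geometry below are hypothetical, not those of the CUBMC phantoms:

```python
import math
import random

def woodcock_track(mu, voxel_size, start, rng):
    """Sample the site of the next real collision for a photon moving in
    the +x direction through a 1-D voxel array, using Woodcock (delta)
    tracking: flight distances are drawn from the majorant attenuation
    coefficient mu_max, and a tentative collision in voxel i is accepted
    with probability mu[i]/mu_max (rejections are 'virtual' collisions).
    Returns the collision coordinate, or None if the photon escapes."""
    mu_max = max(mu)
    x = start
    while True:
        x += -math.log(rng.random()) / mu_max   # exponential flight at mu_max
        ivox = int(x // voxel_size)
        if not 0 <= ivox < len(mu):
            return None                          # left the phantom
        if rng.random() * mu_max < mu[ivox]:
            return x                             # real collision accepted
```

In a homogeneous medium every tentative collision is accepted and the mean collision depth reduces to the ordinary mean free path 1/mu, which makes a convenient sanity check.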
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency for in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.
Energy Technology Data Exchange (ETDEWEB)
Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular, we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U{sup 235}, U{sup 238}, Hf) for reactivity prediction of slab-core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method has been recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical applications. Recently, many efforts have been made to realize fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVidia CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented, with analogue simulation of photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Energy Technology Data Exchange (ETDEWEB)
White, Morgan C. [Univ. of Florida, Gainesville, FL (United States)
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and the testing involving radiation dose related problems. In particular, the paper discusses the electron transport simulations using the class-II condensed history method. The considered electron energy ranges from a few hundreds of keV to 30 MeV. For photon part, photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then transplanted to the GPU using the CUDA C 5.0 standards. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and later dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x106 electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.
White, M C
2000-01-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron tran...
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using ARCHER-CT, a new Monte Carlo photon transport package we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT{sub CPU}, ARCHER-CT{sub GPU} and ARCHER-CT{sub COP}, to run in parallel on multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). All the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster, respectively, than the parallel ARCHER-CT{sub CPU} running with 12 hyperthreads.
Challenges of Monte Carlo Transport
Energy Technology Data Exchange (ETDEWEB)
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral-particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Parallel computational physics and parallel Monte Carlo are then discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
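The independence of neutral-particle histories noted above is what makes Monte Carlo transport naturally parallel: batches of histories can be farmed out to workers and their tallies simply added. A toy sketch, where the "history" is an invented survival game (not IMC physics) and each batch gets its own seeded RNG stream:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_batch(seed, n_histories):
    """One batch of independent toy particle histories: each particle
    must survive 10 collisions, each with survival probability 0.9.
    A private, seeded RNG stream makes the batch reproducible and
    statistically independent of every other batch."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_histories):
        if all(rng.random() < 0.9 for _ in range(10)):
            survived += 1
    return survived

def parallel_survival(n_batches=8, per_batch=20000):
    """Distribute batches over worker processes; histories never
    communicate, so the per-batch tallies just sum."""
    with ProcessPoolExecutor() as pool:
        tallies = pool.map(run_batch, range(n_batches),
                           [per_batch] * n_batches)
        return sum(tallies) / (n_batches * per_batch)
```

The expected survival fraction is 0.9**10, and the estimate tightens as more batches are added, which is exactly the "add more processors, add more histories" scaling the slides describe.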
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
Energy Technology Data Exchange (ETDEWEB)
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques; they are small at subnuclear densities. We discuss the expansion of the energy of a low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is about half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
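For reference, the Fermi-gas kinetic energy per neutron that the low-density expansion is measured against is the standard textbook result (spin degeneracy 2 for neutron matter):

```latex
% Fermi-gas kinetic energy per neutron, the reference scale for the
% low-density expansion; n is the number density and k_F the Fermi momentum.
\[
  \frac{E_{\mathrm{FG}}}{N} = \frac{3}{5}\,\frac{\hbar^2 k_F^2}{2 m_n},
  \qquad n = \frac{k_F^3}{3\pi^2}.
\]
```

The abstract's statement is that the true leading term of the neutron-gas energy is roughly half this value, a consequence of the large nn scattering length.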
Radiation Transport Calculations and Simulations
Energy Technology Data Exchange (ETDEWEB)
Fasso, Alberto; /SLAC; Ferrari, A.; /CERN
2011-06-30
This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
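The random sampling and Monte Carlo integration described above can be made concrete with the standard free-path example: inverting the flight-length CDF turns uniform variates into exponentially distributed path lengths, and averaging them is a Monte Carlo estimate of the mean free path. The cross section and sample count below are arbitrary illustration values:

```python
import math
import random

def sample_free_path(sigma_t, u):
    """Inverse-CDF sampling: solve u = 1 - exp(-sigma_t * s) for s,
    turning a uniform variate u in [0, 1) into an exponentially
    distributed free-path length."""
    return -math.log(1.0 - u) / sigma_t

def estimate_mean_free_path(sigma_t, n, seed=0):
    """Monte Carlo integration of the mean free path; the sample mean
    converges to the analytic value 1/sigma_t as n grows, with a
    statistical error shrinking like 1/sqrt(n)."""
    rng = random.Random(seed)
    return sum(sample_free_path(sigma_t, rng.random()) for _ in range(n)) / n
```

The same two ingredients, a sampling rule derived from a CDF and an averaged estimator, underlie every tally in a production transport code; biasing techniques only reweight them.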
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-03-01
Conversion coefficients have been calculated for fluence-to-absorbed dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult male and an adult female to (56)Fe(26+) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). The coefficients were calculated using Monte Carlo transport code MCNPX 2.7.A and BodyBuilder 1.3 anthropomorphic phantoms modified to allow calculation of effective dose using tissues and tissue weighting factors from either the 1990 or 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Calculations using ICRP 2007 recommendations result in fluence-to-effective dose conversion coefficients that are almost identical at most energies to those calculated using ICRP 1990 recommendations.
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-03-01
Conversion coefficients have been calculated for fluence to absorbed dose, fluence to effective dose and fluence to gray equivalent, for isotropic exposure to alpha particles in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). The coefficients were calculated using Monte Carlo transport code MCNPX 2.7.A and BodyBuilder 1.3 anthropomorphic phantoms modified to allow calculation of effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Coefficients for effective dose are within 30 % of those calculated using ICRP 1990 recommendations.
Energy Technology Data Exchange (ETDEWEB)
Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B. [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), Service d' Etudes de Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France)
2003-07-01
Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. The code is used in four application fields: radiation protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources, principle. (N.C.)
Energy Technology Data Exchange (ETDEWEB)
Wang, Ping, E-mail: pingwang@xidian.edu.cn [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Hu, Linlin; Shan, Xuefei [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); Yang, Yintang [Key Laboratory of the Ministry of Education for Wide Band-Gap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi’an 710071 (China); Song, Jiuxu; Guo, Lixin [School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Zhang, Zhiyong [School of Information Science and Technology, Northwest University, Xi’an 710127 (China)
2015-01-15
Transient characteristics of wurtzite Zn{sub 1−x}Mg{sub x}O are investigated using a three-valley ensemble Monte Carlo model, verified by the agreement between the simulated low-field mobility and reported experimental results. The electronic structures are obtained by first-principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn{sub 1−x}Mg{sub x}O (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10{sup 7}, 2.133 × 10{sup 7}, 1.889 × 10{sup 7} and 1.295 × 10{sup 7} cm/s, respectively. With increasing Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, a phenomenon of velocity undershoot is observed in Zn{sub 0.889}Mg{sub 0.111}O and Zn{sub 0.833}Mg{sub 0.167}O respectively, while it is not observed for Zn{sub 0.806}Mg{sub 0.194}O or Zn{sub 0.75}Mg{sub 0.25}O even at 3000 kV/cm, which is especially important for high-frequency devices.
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Energy Technology Data Exchange (ETDEWEB)
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two nearly adjacent optical fibers, and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
Monte Carlo radiation transport in external beam radiotherapy
Çeçen, Yiğit
2013-01-01
The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiot...
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-12-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to tritons ((3)H(+)) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Coefficients were calculated using Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and calculation of gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 3%. The greatest difference, 43%, occurred at 30 MeV.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
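The fission matrix idea described above turns slow stochastic source iteration into a small deterministic eigenproblem: once the matrix of region-to-region fission neutron production is tallied, its dominant eigenpair gives k-effective and the converged source shape. A sketch using power iteration on a hypothetical 2-region fission matrix (invented numbers, not from the thesis):

```python
def power_iteration(F, iters=200):
    """Power iteration on a fission matrix F, where F[i][j] is the
    expected number of fission neutrons born in region i per fission
    neutron born in region j. The dominant eigenvalue is k-effective
    and the dominant eigenvector is the stationary fission source."""
    n = len(F)
    source = [1.0 / n] * n            # flat initial source guess
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * source[j] for j in range(n)) for i in range(n)]
        k = sum(new)                  # eigenvalue estimate (source sums to 1)
        source = [s / k for s in new]
    return k, source

# Hypothetical 2-region matrix with dominant eigenvalue exactly 1.0
F = [[0.6, 0.3],
     [0.4, 0.7]]
k_eff, src = power_iteration(F)
```

Because the iteration error shrinks by the ratio of the two largest eigenvalues each step, this converges quickly even for the high-dominance-ratio systems where the raw Monte Carlo source iteration stalls, which is precisely the regime the thesis targets.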
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Energy Technology Data Exchange (ETDEWEB)
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08
MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would be normally distributed since they were produced by a Monte-Carlo routine, but some of the results are not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that, in the reaction-rate MCNP tally, round-off error occurred, producing a set of repeated results with slight variation. Statistical analyses were performed using the {chi}{sup 2} test against a normal distribution for several isotopes and for the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the {chi}{sup 2} statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities less than 100 grams and greater kcode parameters are needed to minimize uncertainty propagation and round-off effects.
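The Scott method cited above fixes the histogram bin width from the sample standard deviation, which is why it is a natural choice for Monte Carlo output that is expected to be near-normal. A small sketch, using an artificial two-valued sample rather than any depletion data:

```python
import math
import statistics

def scott_bin_width(sample):
    """Scott's rule for histogram bin width: h = 3.49 * s * n**(-1/3),
    where s is the sample standard deviation and n the sample size."""
    return 3.49 * statistics.stdev(sample) * len(sample) ** (-1.0 / 3.0)

def scott_bin_count(sample):
    """Number of bins needed to cover the sample range at Scott's width."""
    h = scott_bin_width(sample)
    return max(1, math.ceil((max(sample) - min(sample)) / h))
```

Because the width scales with the spread and shrinks only like n**(-1/3), repeated depletion runs with nearly identical outputs (as in the round-off-limited uranium tallies above) collapse into very few bins, which is exactly when a chi-squared comparison against a normal distribution becomes fragile.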
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2011-01-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to deuterons (²H⁺) in the energy range 10 MeV-1 TeV (0.01-1000 GeV). Coefficients were calculated using the Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of the effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Coefficients for the equivalent and effective dose incorporated a radiation weighting factor of 2. At 15 of 19 energies for which coefficients for the effective dose were calculated, coefficients based on ICRP 1990 and 2007 recommendations differed by <3%. The greatest difference, 47%, occurred at 30 MeV.
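The effective dose underlying such coefficients is the double-weighted sum E = Σ_T w_T Σ_R w_R D_{T,R}. A minimal sketch follows; the tissue list is a small illustrative subset of ICRP-style weighting factors (a complete set sums to 1.0), and only the radiation weighting factor of 2 for deuterons is taken from the abstract:

```python
# Effective dose as E = sum_T w_T * w_R * D_T for a single radiation type.
W_R_DEUTERON = 2.0       # radiation weighting factor from the abstract
TISSUE_WEIGHTS = {       # illustrative subset of tissue weighting factors
    "lung": 0.12,
    "stomach": 0.12,
    "liver": 0.04,
    "thyroid": 0.04,
}

def effective_dose(absorbed_dose_by_tissue):
    """absorbed_dose_by_tissue: absorbed dose in Gy per tissue."""
    return sum(
        TISSUE_WEIGHTS[t] * W_R_DEUTERON * d
        for t, d in absorbed_dose_by_tissue.items()
    )

# Hypothetical per-tissue absorbed doses (Gy) for one fluence level:
e = effective_dose({"lung": 1.0e-3, "stomach": 0.8e-3,
                    "liver": 1.1e-3, "thyroid": 0.9e-3})
```

Dividing such an effective dose by the incident fluence would give the fluence-to-effective-dose conversion coefficient at that energy.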
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-12-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent, for isotropic exposure of an adult male and an adult female to helions ((3)He(2+)) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Calculations were performed using Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms modified to allow calculation of effective dose using tissues and tissue weighting factors from either the 1990 or 2007 recommendations of the International Commission on Radiological Protection (ICRP), and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 2%. The greatest difference, 62%, occurred at 100 MeV.
Confidence and efficiency scaling in variational quantum Monte Carlo calculations
Delyon, F.; Bernu, B.; Holzmann, Markus
2017-02-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
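The Kolmogorov-Smirnov check for the equilibrium hypothesis can be sketched with numpy alone. The "block averages" here are synthetic Gaussian data standing in for an actual Monte Carlo time series, and the 1.36/√n threshold is the usual large-n 95% critical value:

```python
import numpy as np
from math import erf, sqrt

def ks_statistic(samples, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical
    distribution of `samples` and a model `cdf`."""
    x = np.sort(samples)
    n = len(x)
    f = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(7)
# Block averages of an equilibrated chain should look Gaussian;
# compare them against a normal CDF fitted to the data.
blocks = rng.normal(0.0, 1.0, size=200)
mu, sd = blocks.mean(), blocks.std(ddof=1)

def model(x):
    return 0.5 * (1.0 + np.vectorize(erf)((x - mu) / (sd * sqrt(2.0))))

d = ks_statistic(blocks, model)
threshold = 1.36 / np.sqrt(len(blocks))   # ~95% acceptance level, large n
equilibrated = d < threshold
```

A KS distance well above the threshold would flag a non-equilibrated (or otherwise non-Gaussian) sequence of block averages.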
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.
TRIPOLI-3: a neutron/photon Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Nimal, J.C.; Vergnaud, T. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Service d'Etudes de Reacteurs et de Mathematiques Appliquees
2001-07-01
The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma-ray problems in three-dimensional geometries by using the Monte Carlo method. This code is devoted to both shielding and criticality problems. Its most important feature for solving the particle transport equation is the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
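A scalar toy version of such a stochastically relaxed iteration can be sketched as follows. The feedback law, noise model, growth ratio, and iteration count are all invented for illustration; the key ingredients from the abstract are the growing sample size and the shrinking relaxation factor α_n = s_n / Σ s_i:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_power(p, histories):
    """Stand-in for a Monte Carlo solve at thermal-hydraulic state p:
    the true response plus statistical noise ~ 1/sqrt(histories)."""
    true = 2.0 - 0.5 * p          # toy feedback, fixed point p* = 4/3
    return true + rng.normal(0.0, 1.0 / np.sqrt(histories))

p = 1.0                # initial power guess
s = 100                # initial number of histories
total = 0
for it in range(12):
    s = int(s * 1.5)   # grow the sample size each iteration
    total += s
    alpha = s / total  # shrinking relaxation factor
    p = p + alpha * (mc_power(p, s) - p)
```

With this choice the relaxation damps the early, noisy iterates while the growing history count makes later iterates precise, so the iterate settles near the fixed point p* = 4/3.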
Monte-carlo calculations for some problems of quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Novoselov, A. A., E-mail: novoselov@goa.bog.msu.ru; Pavlovsky, O. V.; Ulybyshev, M. V. [Moscow State University (Russian Federation)
2012-09-15
The Monte Carlo technique for the calculation of functional integrals was applied to two one-dimensional quantum-mechanical problems. The energies of the bound states in some potential wells were obtained using this method. Some peculiarities in the calculation of the ground-state kinetic energy were also studied.
Quantum Monte Carlo diagonalization method as a variational calculation
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Yan, Qiang; Shao, Lin
2017-03-01
Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories instead of electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated using full Mott cross sections for high accuracy, and primary-knock-on-atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences in the 3-D distributions of displacements and electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of the PKAs at low electron energies, but they are substantially different for higher-energy electrons due to the shifting of the PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage in applications that use high-flux electron beams to intentionally introduce defects or an electron analysis beam for microstructural characterization of nuclear materials.
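For a back-of-envelope feel of displacement counting from a PKA, the textbook NRT estimate is often used. This is a hedged illustration only, not the cascade model implemented in DEEPER; 40 eV is a commonly quoted displacement threshold for Fe:

```python
def nrt_displacements(t_dam_ev, e_d_ev=40.0):
    """NRT estimate of stable displacements from damage energy T_dam (eV).
    E_d is the displacement threshold energy. Textbook model, used here
    only as a stand-in for a full cascade simulation."""
    if t_dam_ev < e_d_ev:
        return 0.0                          # no stable displacement
    if t_dam_ev < 2.5 * e_d_ev:
        return 1.0                          # single Frenkel pair
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)  # cascade regime

# A 5 keV damage-energy PKA in iron:
n_d = nrt_displacements(5000.0)   # -> 50 displacements
```

Summing such per-PKA counts over the simulated PKA energy spectrum is the conceptual link between the PKA distribution and the total displacement distribution discussed in the abstract.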
Bias in Dynamic Monte Carlo Alpha Calculations
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
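The bias from taking the logarithm of a stochastic quantity is easy to reproduce. This numpy sketch (a generic illustration, unrelated to MCATK itself) estimates the mean of log(sample mean) for unit-exponential draws, where the exact bias is known to behave like -1/(2N):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_of_mean_bias(n, replicas=20000):
    """Average of log(sample mean of n unit-exponential draws).
    The true log-mean is log(1) = 0, so the return value is the bias."""
    means = rng.exponential(1.0, size=(replicas, n)).mean(axis=1)
    return np.log(means).mean()

b_small = log_of_mean_bias(10)    # bias ~ -1/(2*10)  = -0.05
b_large = log_of_mean_bias(100)   # bias ~ -1/(2*100) = -0.005
```

The bias shrinks as 1/N, mirroring the 1/N bias the abstract attributes to taking the logarithm of a stochastic population estimate.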
The macro response Monte Carlo method for electron transport
Energy Technology Data Exchange (ETDEWEB)
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single-scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single-scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th
Discrete angle biasing in Monte Carlo radiation transport
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1988-05-01
An angular biasing procedure is presented for use in Monte Carlo radiation transport with discretized scattering-angle data. As in more general studies, the method is shown to reduce statistical weight fluctuations when it is combined with the exponential transformation. This discrete-data application has a simple analytic form which is problem independent. The results from a sample problem illustrate the variance reduction and efficiency characteristics of the combined biasing procedures, and a large neutron and gamma-ray integral experiment is also calculated. A proposal is given for the possible code generation of the biasing parameter p and the preferential direction Ω₀ used in the combined biasing schemes.
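The core mechanics of biasing a discrete scattering-angle distribution can be shown in a few lines: sample from a biased distribution and repair the estimate with the weight ratio p/q. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Physical discrete scattering-angle probabilities and a biased sampling
# distribution favouring the "important" bin (illustrative numbers).
p_true   = np.array([0.5, 0.3, 0.2])    # physical pdf over angle bins
p_biased = np.array([0.2, 0.3, 0.5])    # preferentially sample bin 2
scores   = np.array([1.0, 2.0, 5.0])    # tally contribution per bin

n = 200000
bins = rng.choice(3, size=n, p=p_biased)
weights = p_true[bins] / p_biased[bins]  # weight correction keeps the game fair
biased_estimate = (weights * scores[bins]).mean()
true_mean = float((p_true * scores).sum())
```

The weighted biased estimate reproduces the physical expectation value while concentrating samples where they matter, which is the mechanism that, combined with the exponential transformation, reduces weight fluctuations.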
Quantum Monte Carlo calculations with chiral effective field theory interactions.
Gezerlis, A; Tews, I; Epelbaum, E; Gandolfi, S; Hebeler, K; Nogga, A; Schwenk, A
2013-07-19
We present the first quantum Monte Carlo (QMC) calculations with chiral effective field theory (EFT) interactions. To achieve this, we remove all sources of nonlocality, which hamper the inclusion in QMC calculations, in nuclear forces to next-to-next-to-leading order. We perform auxiliary-field diffusion Monte Carlo (AFDMC) calculations for the neutron matter energy up to saturation density based on local leading-order, next-to-leading order, and next-to-next-to-leading order nucleon-nucleon interactions. Our results exhibit a systematic order-by-order convergence in chiral EFT and provide nonperturbative benchmarks with theoretical uncertainties. For the softer interactions, perturbative calculations are in excellent agreement with the AFDMC results. This work paves the way for QMC calculations with systematic chiral EFT interactions for nuclei and nuclear matter, for testing the perturbativeness of different orders, and allows for matching to lattice QCD results by varying the pion mass.
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has traditionally been applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics calculations of fixed-source and criticality design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
Research on GPU Acceleration for Monte Carlo Criticality Calculation
Xu, Qi; Yu, Ganglin; Wang, Kan
2014-06-01
The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures due to the independence between particles during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase the GPU thread occupancy. In order to test the sensitivity to MC code complexity, a 1D one-group code and a 3D multi-group general-purpose code are each ported to GPUs, and the acceleration effects are compared. The numerical experiments show a considerable acceleration effect for the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
Multi-microcomputer system for Monte-Carlo calculations
Berg, B; Krasemann, H
1981-01-01
The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high-energy physics experiments, and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory.
Energy Technology Data Exchange (ETDEWEB)
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Monte Carlo calculation of the neutron and gamma sensitivities of self-powered detectors
Energy Technology Data Exchange (ETDEWEB)
Pytel, K.
1981-01-01
A calculational model is presented for predicting the self-powered detector response in various radiation environments. The transport of fast beta particles and electrons is treated by the Monte Carlo technique. A new model of electronic processes within the insulator is introduced. Calculated neutron and gamma sensitivities of five detectors (with Rh, V, Co, Ag and Pt emitters) are compared with reported experimental values. The comparison gives satisfactory agreement for the majority of the examined detectors.
On the Calculation of Reactor Time Constants Using the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Leppaenen, Jaakko [VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT (Finland)
2008-07-01
Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put to practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author)
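One common Monte Carlo route to the effective delayed neutron fraction is the "prompt method": run one eigenvalue calculation with all neutrons and one with prompt neutrons only. The k values below are invented for illustration, and the paper's codes may use different estimators:

```python
# Prompt-method estimate of the effective delayed neutron fraction:
# beta_eff ~ 1 - k_prompt / k_total, where k_prompt comes from an
# eigenvalue run with delayed neutron emission switched off.
k_total = 1.00213    # hypothetical k-effective, all neutrons
k_prompt = 0.99510   # hypothetical k-effective, prompt neutrons only
beta_eff = 1.0 - k_prompt / k_total   # ~ 0.0070
```

Group-wise delayed neutron fractions and decay constants for the six or eight precursor groups are then obtained from similarly tallied production rates per group.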
Energy Technology Data Exchange (ETDEWEB)
Authier, N
1998-12-01
One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine the accessibility for working persons in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in materials (iron reactor vessels, medical therapy, electronics in satellites). The flux and reaction rate estimators used in Monte Carlo codes give average values in volumes or on surfaces of the geometrical description of the system. But in certain configurations, knowledge of pointwise deposited energy and dose estimates is necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation that presents an unbounded variance. The central limit theorem cannot be applied, thus no confidence level may easily be calculated, and the convergence rate is very poor. We propose in this study a new solution for the photon flux-at-a-point estimator. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that our new sampling scheme, specially developed to treat the anisotropy of photon coherent scattering, is necessary for a good and regular behavior of the estimator. These developments, integrated in the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased point estimate on media interfaces. (author)
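The unbounded-variance behaviour of a naive point-flux estimator can be reproduced with a toy model (not TRIPOLI-4's estimator): collision points uniform in a sphere around the detector, scored with the 1/(4πr²) point-kernel. The mean is finite, but the second moment diverges at r → 0, so the sample variance never settles:

```python
import numpy as np

rng = np.random.default_rng(5)

# Collisions uniform in a sphere of radius R around the detector point:
# the pdf of r is 3 r^2 / R^3 and the point-detector score is 1/(4 pi r^2).
# E[score] is finite, but E[score^2] ~ integral dr / r^2 diverges at r -> 0.
R = 1.0
n = 10**6
r = R * rng.random(n) ** (1.0 / 3.0)        # r^3 uniform => volume-uniform
score = 1.0 / (4.0 * np.pi * r**2)

analytic_mean = 3.0 / (4.0 * np.pi * R**2)  # exact E[score]
mc_mean = score.mean()
var_half = score[: n // 2].var()
var_full = score.var()    # dominated by rare near-detector collisions
```

Comparing `var_half` and `var_full` across reruns shows the sample variance being driven by the few closest collisions, which is exactly why no reliable confidence interval can be built for the naive estimator.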
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantificational total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
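A toy sketch of the domain-decomposition idea follows: a 1D random walk stands in for transport, and per-domain queues stand in for the inter-processor particle communication. All geometry and physics numbers are invented; the invariant being checked is simply particle conservation across domain hand-offs:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(9)

# Two spatial domains of a 1D slab [0, 2); a particle leaving its domain
# through the interior boundary is handed to the neighbour's input queue
# (a stand-in for asynchronous inter-processor communication).
BOUNDS = [(0.0, 1.0), (1.0, 2.0)]
queues = [deque(), deque()]
absorbed = leaked = 0

for _ in range(1000):                      # source particles in domain 0
    queues[0].append(0.5)

while queues[0] or queues[1]:
    for dom, (lo, hi) in enumerate(BOUNDS):
        while queues[dom]:
            x = queues[dom].popleft()
            while True:
                if rng.random() < 0.1:     # absorption per flight
                    absorbed += 1
                    break
                x += rng.normal(0.0, 0.3)  # random flight
                if x < 0.0 or x >= 2.0:    # escaped the global geometry
                    leaked += 1
                    break
                other = 1 - dom
                olo, ohi = BOUNDS[other]
                if olo <= x < ohi:         # crossed into the neighbour
                    queues[other].append(x)
                    break
```

Every source particle must end up absorbed or leaked exactly once regardless of how many domain crossings it makes, which is the correctness condition an asynchronous communication scheme has to preserve.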
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]
1997-05-01
AVATAR (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine, is a superset of MCNP that automatically invokes THREEDANT for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
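The weight-window game that such tools parameterize is simple to state: split particles above the window, roulette those below it, preserving expected weight. A minimal sketch, with invented window bounds:

```python
import numpy as np

rng = np.random.default_rng(17)

def apply_weight_window(w, w_low=0.5, w_up=2.0, w_survival=1.0, rng=rng):
    """Split particles above the window, roulette those below it.
    Returns the list of surviving particle weights; the expected total
    weight equals the input weight, so tallies stay unbiased."""
    if w > w_up:                         # split into comparable copies
        n = int(np.ceil(w / w_up))
        return [w / n] * n
    if w < w_low:                        # Russian roulette
        if rng.random() < w / w_survival:
            return [w_survival]
        return []
    return [w]

# Check that total weight is preserved on average:
total_in = total_out = 0.0
for _ in range(200000):
    w = rng.exponential(1.0)
    total_in += w
    total_out += sum(apply_weight_window(w))
ratio = total_out / total_in
```

The automation step AVATAR adds is choosing the window bounds per cell and energy from the deterministic adjoint solution, rather than by hand.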
Xenon instability study of large core Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Bogdanova, E.V. [National Research Nuclear University 'MEPhI', Moscow (Russian Federation); Gorodkov, S.S.
2016-09-15
One of the goals of neutronic calculations of large cores is a self-consistent distribution of equilibrium xenon throughout the core. In deterministic calculations such self-consistency is achieved relatively simply with additional outer iterations over xenon, which can increase the solution run time several times. But in stochastic calculations of large cores such an increase is highly undesirable, since even without these outer iterations the problem demands modeling billions of histories, which for a complicated large core may take about a day on 100 processors. In addition, the unavoidable statistical uncertainty plays the role of a transient perturbation, which excites xenon oscillations. This work studies the rise of such oscillations and a way of overcoming them with a hybrid stochastic/deterministic calculation. It is proposed first to perform a single static Monte Carlo calculation of the given core and to obtain multi-group mesh-cell characteristics for later use in a fast operative code. The latter evaluates a xenon distribution through the core which is in equilibrium for the deterministic solution and substantially close to the equilibrium Monte Carlo solution that would otherwise come at enormous computing cost.
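The equilibrium xenon concentration the iterations are chasing follows from the standard iodine/xenon balance equations. A minimal sketch; the yields, half-lives, and cross section below are commonly quoted thermal U-235 values, but treat them and the flux/cross-section inputs as illustrative:

```python
import numpy as np

# Equilibrium Xe-135 from the standard I-135/Xe-135 balance equations.
GAMMA_I, GAMMA_XE = 0.0639, 0.00237        # fission yields (illustrative)
LAMBDA_XE = np.log(2.0) / (9.14 * 3600.0)  # Xe-135 decay constant, 1/s
SIGMA_XE = 2.65e6 * 1e-24                  # Xe-135 absorption, cm^2 (2.65 Mb)

def equilibrium_xenon(sigma_f_macro, phi):
    """N_Xe at equilibrium for macroscopic fission cross section
    Sigma_f (1/cm) and scalar flux phi (n/cm^2/s):
    N_Xe = (gamma_I + gamma_Xe) * Sigma_f * phi / (lambda_Xe + sigma_Xe * phi)."""
    production = (GAMMA_I + GAMMA_XE) * sigma_f_macro * phi
    return production / (LAMBDA_XE + SIGMA_XE * phi)

n_lo = equilibrium_xenon(0.07, 1e13)
n_hi = equilibrium_xenon(0.07, 1e14)   # saturates as burnout dominates decay
```

Because the absorption term grows with flux, the equilibrium concentration saturates at high flux, which is why statistical flux noise couples nonlinearly into the xenon field and can excite oscillations.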
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
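The random-sampling (uncertainty Monte Carlo) idea reduces, in miniature, to sampling the nuclear data from their assumed distribution and re-running the depletion step for each sample. This sketch uses a single nuclide with an analytic one-group burnout solution; all numerical values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(21)

# Sample the one-group cross section from its assumed distribution, run
# the (here analytic) depletion step per sample, read off the spread.
SIGMA0 = 1.0e-22        # nominal one-group cross section, cm^2 (100 b)
REL_UNC = 0.10          # assumed 10% relative standard deviation
PHI = 1.0e14            # constant flux, n/cm^2/s
T = 360.0 * 24 * 3600   # irradiation time, s
N0 = 1.0e20             # initial nuclide density

samples = rng.normal(SIGMA0, REL_UNC * SIGMA0, size=5000)
inventory = N0 * np.exp(-samples * PHI * T)   # analytic depletion per sample

nominal = N0 * np.exp(-SIGMA0 * PHI * T)
rel_spread = inventory.std() / inventory.mean()
```

In the full MCNP-ACAB scheme the analytic step is replaced by a transport-plus-ACAB depletion run per sample, and the covariance structure of the data files replaces the simple independent normal here.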
A Monte Carlo Method for Calculating Initiation Probability
Energy Technology Data Exchange (ETDEWEB)
Greenman, G M; Procassini, R J; Clouse, C J
2007-03-05
A Monte Carlo method for calculating the probability of initiating a self-sustaining neutron chain reaction has been developed. In contrast to deterministic codes which solve a non-linear, adjoint form of the Boltzmann equation to calculate initiation probability, this new method solves the forward (standard) form of the equation using a modified source calculation technique. Results from this new method are compared with results obtained from several deterministic codes for a suite of historical test problems. The level of agreement between these code predictions is quite good, considering the use of different numerical techniques and nuclear data. A set of modifications to the historical test problems has also been developed which reduces the impact of neutron source ambiguities on the calculated probabilities.
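The forward approach can be caricatured as a Galton-Watson branching process: simulate each source neutron's chain directly and count the chains that diverge. The interaction probabilities and cap below are invented so the result can be checked against the exact extinction probability:

```python
import numpy as np

rng = np.random.default_rng(13)

P_FISSION = 0.6    # probability a neutron causes fission (else it is lost)
NU = 2             # neutrons per fission (fixed, for a clean analytic check)

def initiates(rng, cap=100):
    """Forward simulation of one source neutron's chain: returns True if
    the population ever exceeds `cap` (treated as a divergent chain)."""
    population = 1
    while 0 < population <= cap:
        fissions = rng.binomial(population, P_FISSION)
        population = NU * fissions
    return population > cap

trials = 20000
p_mc = sum(initiates(rng) for _ in range(trials)) / trials

# Galton-Watson check: extinction q solves q = (1-p) + p*q^2, i.e.
# q = (1-p)/p for p > 1/2, so initiation probability = 1 - (1-p)/p.
p_exact = 1.0 - (1.0 - P_FISSION) / P_FISSION   # = 1/3 for p = 0.6
```

The real method replaces the toy offspring law with transport physics and a modified source calculation, but the forward-versus-adjoint contrast the abstract draws is exactly this: simulating chains directly instead of solving the nonlinear adjoint equation for the survival probability.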
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
Quantum Monte Carlo calculations of two neutrons in finite volume
Klos, P; Tews, I; Gandolfi, S; Gezerlis, A; Hammer, H -W; Hoferichter, M; Schwenk, A
2016-01-01
Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.
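At leading orders, the Lüscher relation for two identical particles of mass m in a periodic box of side L reads (hbar = 1) dE = (4*pi*a/(m*L^3)) * [1 + c1*(a/L) + c2*(a/L)^2], with c1 and c2 the standard expansion coefficients as we recall them. A minimal sketch of the extraction step, with illustrative numbers only:

```python
import math

# Leading-order Luscher expansion coefficients (standard values as we recall them).
C1, C2 = -2.837297, 6.375183

def energy_shift(a, m, L):
    """Ground-state energy shift of two identical particles of mass m in a
    periodic box of side L, for scattering length a (hbar = 1)."""
    x = a / L
    return 4.0 * math.pi * a / (m * L**3) * (1.0 + C1 * x + C2 * x**2)

def scattering_length(dE, m, L, a_lo=-2.0, a_hi=2.0):
    """Invert the relation for a by bisection (the shift is monotone in a
    over the |a|/L range considered here)."""
    for _ in range(200):
        a_mid = 0.5 * (a_lo + a_hi)
        if energy_shift(a_mid, m, L) < dE:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)
```

In practice one would feed in the QMC box energies at several L and fit, but the round trip above shows the mechanics of the extraction.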
Streamlining resummed QCD calculations using Monte Carlo integration
Farhi, David; Freytsis, Marat; Schwartz, Matthew D
2015-01-01
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph, Alpgen or Sherpa. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including $e^+e^-$ two- and four-jet event shapes, $n$-jettiness and jet-mas...
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
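The bias structure described here can be seen in a two-state toy model (ours, not the Letter's Hellman-Feynman scheme): for a trial state with an error of order eps, the mixed estimate of an observable that does not commute with the Hamiltonian is first order in eps, and the standard extrapolation 2*O_mixed - O_variational cancels that first-order term.

```python
import math

# Two-state toy model of mixed-estimator bias.  psi0 is the exact ground
# state; psiT is a trial state with error ~ eps; O does not commute with H.
def normalized(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def expval(a, O, b):
    """<a|O|b> for real 2-vectors a, b and a 2x2 matrix O."""
    Ob = (O[0][0] * b[0] + O[0][1] * b[1], O[1][0] * b[0] + O[1][1] * b[1])
    return a[0] * Ob[0] + a[1] * Ob[1]

psi0 = normalized((1.0, 0.0))    # exact ground state
O = ((0.0, 1.0), (1.0, 0.0))     # observable with [O, H] != 0

def errors(eps):
    psiT = normalized((1.0, eps))
    exact = expval(psi0, O, psi0)
    overlap = psi0[0] * psiT[0] + psi0[1] * psiT[1]
    mixed = expval(psi0, O, psiT) / overlap        # error is first order in eps
    variational = expval(psiT, O, psiT)
    extrapolated = 2.0 * mixed - variational       # first-order term cancels
    return abs(mixed - exact), abs(extrapolated - exact)
```

Shrinking eps by a factor of ten shrinks the mixed error by ten but the extrapolated error far faster, which is the "second order in the trial-function error" behaviour the abstract refers to.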
Patient-dependent beam-modifier physics in Monte Carlo photon dose calculations.
Schach von Wittenau, A E; Bergstrom, P M; Cox, L J
2000-05-01
Model pencil-beam-on-slab calculations, together with a series of detailed calculations of photon and electron output from commercial accelerators, are used to quantify the level(s) of physics required for the Monte Carlo transport of photons and electrons in treatment-dependent beam modifiers, such as jaws, wedges, blocks, and multileaf collimators, in photon teletherapy dose calculations. The physics approximations investigated comprise (1) not tracking particles below a given kinetic energy, (2) continuing to track particles, but performing simplified collision physics, particularly in handling secondary particle production, and (3) not tracking particles in specific spatial regions. Figures-of-merit needed to estimate the effects of these approximations are developed, and these estimates are compared with full-physics Monte Carlo calculations of the contribution of the collimating jaws to the on-axis depth-dose curve in a water phantom. These figures of merit are next used to evaluate various approximations used in coupled photon/electron physics in beam modifiers. Approximations for tracking electrons in air are then evaluated. It is found that knowledge of the materials used for beam modifiers, of the energies of the photon beams used, as well as of the length scales typically found in photon teletherapy plans, allows a number of simplifying approximations to be made in the Monte Carlo transport of secondary particles from the accelerator head and beam modifiers to the isocenter plane.
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally
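The predictor-corrector depletion idea in (a) can be sketched for a single absorber with a composition-dependent one-group reaction rate (a toy stand-in for flux and spectrum feedback, not MCOR's implementation; all numbers are illustrative):

```python
import math

def lam(N):
    """Toy one-group reaction rate: grows as the absorber N depletes."""
    return 0.1 * (2.0 - N)

def step_predictor_corrector(N, dt):
    N_pred = N * math.exp(-lam(N) * dt)     # predictor: beginning-of-step rate
    rate = 0.5 * (lam(N) + lam(N_pred))     # corrector: average of BOS/EOS rates
    return N * math.exp(-rate * dt)

def reference(N, dt, substeps=10_000):
    """Fine-substep solution used as the 'exact' answer."""
    for _ in range(substeps):
        N *= math.exp(-lam(N) * dt / substeps)
    return N
```

For a single large burnup step the corrected result is several times closer to the fine-substep reference than the predictor alone, which is why Monte Carlo depletion systems take this extra half-step despite the cost of a second transport solve.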
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
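The sensitivity-coefficient step of such a GUM analysis can be sketched numerically; the model f, nominal values, and input uncertainties below are all invented, purely to show the mechanics of combining c_i = df/dx_i with the standard uncertainties u_i.

```python
import math

def f(x):
    """Toy dose-like model: attenuation times beam energy (invented)."""
    depth, density, energy = x
    return math.exp(-0.05 * density * depth) * energy

x0 = [10.0, 1.19, 6.0]       # nominal input values (invented)
u  = [0.02, 0.005, 0.05]     # standard uncertainties of the inputs (invented)

def combined_uncertainty(f, x0, u, h=1e-6):
    """GUM law of propagation: u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with the
    sensitivity coefficients estimated by central finite differences."""
    uc2 = 0.0
    for i in range(len(x0)):
        xp = list(x0); xp[i] += h
        xm = list(x0); xm[i] -= h
        ci = (f(xp) - f(xm)) / (2.0 * h)    # sensitivity coefficient c_i
        uc2 += (ci * u[i]) ** 2
    return math.sqrt(uc2)

uc = combined_uncertainty(f, x0, u)
```

The same pattern applies when f is a full Monte Carlo simulation, except that each finite-difference evaluation is then a separate (expensive) run and the statistical error of each run must also be folded in.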
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined interactively on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.
Geometrical form factor calculation using Monte Carlo integration for lidar
Mao, Feiyue; Gong, Wei; Li, Jun
2012-06-01
We propose a practical geometrical form factor (GFF) calculation for lidar using Monte Carlo integration (GFF-MC) that can be applied to any laser intensity distribution. Theoretical results have been calculated with our method based on the functions of measured, uniform and Gaussian laser intensity distributions. Two experimental GFF traces on clear days were obtained to verify the validity of the theoretical results. The results indicate that the measured distribution function outperformed the Gaussian and uniform functions, meaning that the deviation of the measured laser intensity distribution from an ideal one can be too large to neglect. In addition, the theoretical GFF of the uniform distribution had a larger error than that of the Gaussian distribution. Furthermore, the effects of the inclination angle of the laser beam and the central obstruction of the support structure of the secondary mirror of the telescope are discussed in this study.
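The core of such a Monte Carlo GFF evaluation can be sketched for a biaxial lidar with a Gaussian beam: sample scatterer positions at range z from the laser intensity profile and count the fraction falling inside the telescope field of view. All geometry parameters below are invented for illustration, not taken from the paper.

```python
import math
import random

random.seed(1)

def gff(z, n=200_000,
        d0=0.3,        # laser/telescope axis separation at the instrument [m]
        div=0.5e-3,    # laser half-divergence [rad]
        fov=1.0e-3):   # telescope half field of view [rad]
    """Monte Carlo geometrical form factor at range z: fraction of a Gaussian
    beam's intensity that falls inside the telescope field of view."""
    sigma = 0.05 + div * z        # beam 1-sigma radius at range z [m]
    r_fov = 0.1 + fov * z         # field-of-view radius at range z [m]
    hits = 0
    for _ in range(n):
        x = random.gauss(0.0, sigma)            # beam centred on the laser axis
        y = random.gauss(0.0, sigma)
        if math.hypot(x - d0, y) <= r_fov:      # FOV centred on telescope axis
            hits += 1
    return hits / n
```

Replacing the two `random.gauss` draws with samples from a measured intensity map is what makes the approach distribution-agnostic, which is the point the abstract emphasises.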
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
Sgouros, George
2003-01-01
This book examines the applications of Monte Carlo (MC) calculations in therapeutic nuclear medicine, from basic principles to computer implementations of software packages and their applications in radiation dosimetry and treatment planning. It is written for nuclear medicine physicists and physicians as well as radiation oncologists, and can serve as a supplementary text for medical imaging, radiation dosimetry and nuclear engineering graduate courses in science, medical and engineering faculties. With chapters written by recognised authorities in their particular fields, the book covers the entire range of MC applications in therapeutic medical and health physics, from its use in imaging prior to therapy to dose distribution modelling in targeted radiotherapy. The contributions discuss the fundamental concepts of radiation dosimetry, radiobiological aspects of targeted radionuclide therapy and the various components and steps required for implementing a dose calculation and treatment planning methodology in ...
Quantum Monte Carlo Calculations of Nucleon-Nucleus Scattering
Wiringa, R. B.; Nollett, Kenneth M.; Pieper, Steven C.; Brida, I.
2009-10-01
We report recent quantum Monte Carlo (variational and Green's function) calculations of elastic nucleon-nucleus scattering. We are adding the cases of proton-^4He, neutron-^3H and proton-^3He scattering to a previous GFMC study of neutron-^4He scattering [1]. To do this requires generalizing our methods to include long-range Coulomb forces and to treat coupled channels. The two four-body cases can be compared to other accurate four-body calculational methods such as the AGS equations and hyperspherical harmonic expansions. We will present results for the Argonne v18 interaction alone and with Urbana and Illinois three-nucleon potentials. [4pt] [1] K.M. Nollett, S. C. Pieper, R.B. Wiringa, J. Carlson, and G.M. Hale, Phys. Rev. Lett. 99, 022502 (2007)
Monte Carlo dose calculation in dental amalgam phantom.
Aziz, Mohd Zahri Abdul; Yusoff, A L; Osman, N D; Abdullah, R; Rabaie, N A; Salikin, M S
2015-01-01
It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity is one of the factors affecting accurate dose calculation, and this requires a complex calculation algorithm such as Monte Carlo (MC). On the other hand, the computed tomography (CT) images used in a treatment planning system need to be trustworthy, as they are the input to radiotherapy treatment. However, metal amalgam in the treatment volume produces prominent streak artefacts in the CT images, which are a source of error in the dose calculation. Thus, a streak artefact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide the amalgam voxels with accurate density values. The resulting dose uncertainties due to metal amalgam were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties in MC calculation.
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Energy Technology Data Exchange (ETDEWEB)
Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. High performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
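The thread-divergence remedy described, separating photon and electron transport, can be mimicked on the CPU with two queues processed as uniform batches. This is a stub sketch with invented physics, not gDPM code: each batch contains a single particle type, so on a GPU every thread in it would follow the same execution path.

```python
import random
from collections import deque

random.seed(7)

def process_photon(energy):
    """Stub photon step: half the time, hand part of the energy to a secondary
    electron (Compton-like, numbers invented); otherwise the photon escapes."""
    if random.random() < 0.5:
        return 0.6 * energy
    return None

def run(n_photons=10_000):
    photons = deque(1.0 for _ in range(n_photons))
    electrons = deque()
    dose = 0.0
    while photons or electrons:
        # batch 1: photons only, so every "thread" takes the same branch
        while photons:
            secondary = process_photon(photons.popleft())
            if secondary is not None:
                electrons.append(secondary)
        # batch 2: electrons only; stub physics deposits the energy locally
        while electrons:
            dose += electrons.popleft()
    return dose

total = run()
```

Processing a mixed list instead would interleave the two branches particle by particle, which on SIMT hardware serializes the divergent paths within a warp.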
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, Larry [Iowa State Univ., Ames, IA (United States)
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
A generic algorithm for Monte Carlo simulation of proton transport
Energy Technology Data Exchange (ETDEWEB)
Salvat, Francesc, E-mail: francesc.salvat@ub.edu
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron–photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane–wave Born approximation (PWBA), making use of the Sternheimer–Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
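The sampling of a polar scattering angle from a numerical DCS can be illustrated with plain inverse-transform sampling on a tabulated grid. This is a simplified stand-in for PENELOPE-style adaptive sampling, and the DCS shape (a screened-Rutherford-like form with an invented screening constant) is for illustration only.

```python
import bisect
import random

random.seed(3)

# Tabulated angular DCS on a cos(theta) grid (shape and constant invented).
mu_grid = [i / 100.0 for i in range(-100, 101)]
dcs = [1.0 / (1.3 - mu) ** 2 for mu in mu_grid]

# cumulative distribution by the trapezoidal rule
cdf = [0.0]
for i in range(1, len(mu_grid)):
    cdf.append(cdf[-1] + 0.5 * (dcs[i] + dcs[i - 1]) * (mu_grid[i] - mu_grid[i - 1]))
norm = cdf[-1]

def sample_mu():
    """Inverse-transform sampling of cos(theta) with linear interpolation
    inside the table bin located by bisection."""
    r = random.random() * norm
    i = min(bisect.bisect_right(cdf, r) - 1, len(mu_grid) - 2)
    frac = (r - cdf[i]) / (cdf[i + 1] - cdf[i])
    return mu_grid[i] + frac * (mu_grid[i + 1] - mu_grid[i])
```

An adaptive scheme would refine the grid where the linear interpolation error of the CDF is largest, which is the "control of interpolation errors" the abstract refers to.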
Stationarity and source convergence in monte carlo criticality calculation.
Energy Technology Data Exchange (ETDEWEB)
Ueki, T. (Taro); Brown, F. B. (Forrest B.)
2002-01-01
In Monte Carlo (MC) criticality calculations, source error propagation through the stationary cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of fission kernels, i.e., the ratio of the second largest to largest eigenvalues. For symmetric two fissile component systems with DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is larger than one thousand. When such a system is made slightly asymmetric, the neutron effective multiplication factor (keff) at the inactive cycles does not reflect the convergence to the stationary source distribution. To overcome this problem, relative entropy (Kullback-Leibler distance) is applied to a slightly asymmetric two fissile component problem with a dominance ratio of 0.9925. Numerical results show that relative entropy is effective as a posterior diagnostic tool.
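The diagnostic can be sketched with a two-component toy source process (ours, not an actual criticality calculation): the Kullback-Leibler distance of the cycle-wise source distribution from the stationary one decays as the source converges, even when a scalar like keff would look settled much earlier.

```python
import math
import random

random.seed(42)

def kl(p, q):
    """Kullback-Leibler distance D(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

stationary = [0.5, 0.5]      # symmetric two-component system
source = [1.0, 0.0]          # deliberately bad initial source guess

history = []
for cycle in range(50):
    n = 1000                 # histories per cycle
    counts = [0, 0]
    for _ in range(n):
        k = 0 if random.random() < source[0] else 1   # pick a source site
        if random.random() < 0.2:                     # migrate to the other component
            k = 1 - k
        counts[k] += 1
    source = [c / n for c in counts]
    history.append(kl(source, stationary))
```

In a real calculation the reference distribution is unknown, so one instead tracks cycle-to-cycle entropy-type measures of the binned fission source and waits for them to stationarize before starting active cycles.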
Quantum Monte Carlo calculations with chiral effective field theory interactions
Energy Technology Data Exchange (ETDEWEB)
Tews, Ingo
2015-10-12
The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials are more accurate than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Zheng, S.H.; Vergnaud, T.; Nimal, J.C. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Lab. d'Etudes de Protection et de Probabilite]
1998-03-01
Neutron transport calculations need an accurate treatment of cross sections. Two methods (multi-group and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries, well adapted to describe the neutron interaction in the unresolved resonance energy range. Its advantage is to present properly the neutron cross-section fluctuation within a given energy group, allowing correct calculation of the self-shielding effect. Also, this PT cross-section representation is suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at Commissariat a l'Energie Atomique, and several validation calculations are presented. The PT method is proved to be valid not only in the unresolved resonance range but also in all the other energy ranges.
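Per energy group, the PT idea reduces to drawing the cross section from a small set of bands with given probabilities, which reproduces the fluctuation statistics within the group. The band values and probabilities below are invented for illustration:

```python
import random

random.seed(9)

# One energy group's probability table: (band probability, cross section in barns).
bands = [(0.2, 5.0), (0.5, 12.0), (0.3, 40.0)]

def sample_sigma():
    """Draw a cross section for one collision from the probability table."""
    r = random.random()
    acc = 0.0
    for prob, sigma in bands:
        acc += prob
        if r < acc:
            return sigma
    return bands[-1][1]   # guard against floating-point round-off

mean_sigma = sum(p * s for p, s in bands)            # infinitely dilute average
flux_weighted = 1.0 / sum(p / s for p, s in bands)   # ~ self-shielded (flux ~ 1/sigma)
```

That `flux_weighted` comes out below `mean_sigma` is the self-shielding effect the PT representation is designed to capture, which a single group-averaged cross section misses.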
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Energy Technology Data Exchange (ETDEWEB)
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
Quantum Monte Carlo calculations of the dimerization energy of borane.
Fracchia, Francesco; Bressanini, Dario; Morosi, Gabriele
2011-09-07
Accurate thermodynamic data are required to improve the performance of chemical hydrides that are potential hydrogen storage materials. Boron compounds are among the most interesting candidates. However, different experimental measurements of the borane dimerization energy resulted in a rather wide range (-34.3 to -39.1) ± 2 kcal/mol. Diffusion Monte Carlo (DMC) simulations usually recover more than 95% of the correlation energy, so energy differences rely less on error cancellation than other methods. DMC energies of BH(3), B(2)H(6), BH(3)CO, CO, and BH(2)(+) allowed us to predict the borane dimerization energy, both via the direct process and indirect processes such as the dissociation of BH(3)CO. Our D(e) = -43.12(8) kcal/mol, corrected for the zero-point energy evaluated by considering the anharmonic contributions, results in a borane dimerization energy of -36.59(8) kcal/mol. The process via the dissociation of BH(3)CO gives -34.5(2) kcal/mol. Overall, our values suggest a slightly smaller D(e) than the most recent W4 estimate D(e) = -44.47 kcal/mol [A. Karton and J. M. L. Martin, J. Phys. Chem. A 111, 5936 (2007)]. Our results show that reliable thermochemical data for boranes can be predicted by fixed node (FN)-DMC calculations.
Monte Carlo calculation of radiation energy absorbed in plastic scintillators
Energy Technology Data Exchange (ETDEWEB)
Mainardi, R.T.; Bonzi, E.V. [Universidad Nacional de Cordoba (Argentina). Facultad de Matematica, Astronomia y Fisica
1995-05-01
Monte Carlo calculations of the rate of absorbed energy from a photon beam were carried out to compare the response of commercial plastic scintillators with that of air in the energy region below 1 MeV. We have found that for photon energies above 100 keV, the response of different kinds of plastics is proportional to that of air, while below this value of energy, we have obtained differences between the responses of plastics and air. In a literature search, we have also found discrepancies with other authors as well as among them. In this paper, we investigate the possibilities of eliminating these differences and explaining discrepancies. We found that doping a plastic scintillator with silicon makes the composite materials behave like air from 2 keV up to 600 keV, making the ratio of absorbed energy constant. This energy region is of interest in radiology and surface radiotherapy and we conclude that a plastic scintillator with truly air-equivalent behavior is of importance to carry out more precise dosimetry. Other elements such as fluorine and magnesium were also considered, but silicon was found to be more appropriate due to its greater atomic number and its interchangeability with carbon in hydrocarbon molecules. (author).
Monte Carlo calculations of the properties of solid nitromethane
Rice, Betsy M.; Trevino, Samuel F.
1991-09-01
Pairwise additive potential energy functions for H-O, H-H, and O-O intermolecular interactions are presented; methods by which these functions were developed are discussed, and preliminary Monte Carlo calculations of the crystal lattice parameters using these functions are presented. The results indicate that these potential energy functions correctly reproduce the lattice parameters measured by neutron diffraction at 4.2 K, ambient pressure, and at pressures below 1.0 GPa, room temperature. It is our intention in this and future work to obtain sufficient information concerning the intermolecular interactions between molecules of nitromethane (CH3NO2) in order to produce, via computer simulation, a reliable equation of state and other related properties in the condensed phase. For this purpose, substantial experimental investigations have been performed in the past on several properties of the crystal. For the present study, the most important of these are the determination of the crystal structure at ambient pressure, from 4.2 K to 228 K (Trevino, Prince, and Hubbard 1980) and neutron spectroscopic determination of the rotational properties of the methyl group (Trevino and Rymes 1980; Alefeld et al. 1982; Cavagnat et al. 1985).
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Monte Carlo calculations supporting patient plan verification in proton therapy
Directory of Open Access Journals (Sweden)
Thiago Viana Miranda Lima
2016-03-01
Patient treatment plan verification consumes a substantial amount of quality assurance (QA) resources; this is especially true for Intensity Modulated Proton Therapy (IMPT). The use of Monte Carlo (MC) simulations to support QA has been widely discussed and several methods have been proposed. In this paper we studied an alternative approach to the one currently applied clinically at Centro Nazionale di Adroterapia Oncologica (CNAO). We reanalysed previously published data (Molinelli et al. 2013) in which 9 patient plans crossed the warning QA threshold of 3% mean dose deviation. The possibility that these differences between measured and calculated dose were related to dose modelling (Treatment Planning System (TPS) vs MC), to limitations of the dose delivery system, or to detector mispositioning was originally explored, but other factors such as the geometric description of the detectors were not ruled out. For the purpose of this work we compared ionisation-chamber measurements with the results of different MC simulations. We also studied some physical effects introduced by this new approach, for example inter-detector interference and the delta-ray threshold. The simulations accounting for a detailed geometry are typically superior (statistical difference, p-value around 0.01) to most of the MC simulations used at CNAO (inferior only to the shift approach used). No real improvement was observed on reducing the current delta-ray threshold (100 keV), and no significant interference between ion chambers in the phantom was detected (p-value 0.81). In conclusion, the detailed geometrical description improves the agreement between measurement and MC calculation in some cases, but in other cases position uncertainty represents the dominant uncertainty. The inter-chamber disturbance was not detected at therapeutic proton energies and the results from the current delta threshold are
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as a reactivity unit referred to as the "dollar". Usually it is obtained by computer simulation because of the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for calculating the effective delayed neutron fraction β. This method requires calculating the adjoint neutron flux as a weighting function of the phase-space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor due to the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied with Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
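Bretscher's k-ratio estimate described above reduces to one line of arithmetic; a minimal sketch, with hypothetical eigenvalues standing in for paired criticality runs with and without delayed-neutron data:

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Effective delayed neutron fraction via the k-ratio method:
    the delayed multiplication factor (k_total - k_prompt) divided
    by the total multiplication factor."""
    return (k_total - k_prompt) / k_total

# Illustrative (made-up) eigenvalues from two criticality calculations:
k_tot, k_pr = 1.00000, 0.99325
beta = beta_eff_k_ratio(k_tot, k_pr)  # ≈ 0.00675
```

One dollar of reactivity then corresponds to a reactivity insertion equal to this β.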
Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code
Institute of Scientific and Technical Information of China (English)
YU Deliang; YAN Longwen; ZHONG Guangwu; LU Jie; YI Ping
2007-01-01
A Monte Carlo code (MCHGAS) has been developed to investigate the neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the Splitting and Roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient in simulating neutral particle transports in small and middle-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.
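The Splitting and Roulette schemes mentioned above are standard population-control techniques; a generic weight-window sketch (the thresholds are illustrative, not the MCHGAS values):

```python
import random

def split_or_roulette(weight, w_high=2.0, w_low=0.25, w_survive=1.0,
                      rng=random.random):
    """Population control for one particle of statistical weight `weight`.
    Returns the list of surviving particle weights. Both splitting and
    Russian roulette preserve the expected total weight."""
    if weight > w_high:                 # split heavy particles
        n = int(weight / w_survive) + 1
        return [weight / n] * n
    if weight < w_low:                  # roulette light particles
        if rng() < weight / w_survive:
            return [w_survive]          # survivor absorbs the weight
        return []                       # killed
    return [weight]
```

Splitting pushes more samples into important regions at reduced weight, while roulette removes low-weight particles without biasing the mean, which is why the scheme cuts the calculation time without changing expected tallies.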
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. The electron beam was assumed to be monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculations. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The application of the calculations to therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.
Investigation of Nonuniform Dose Voxel Geometry in Monte Carlo Calculations.
Yuan, Jiankui; Chen, Quan; Brindle, James; Zheng, Yiran; Lo, Simon; Sohn, Jason; Wessels, Barry
2015-08-01
The purpose of this work is to investigate the efficacy of using multi-resolution nonuniform dose voxel geometry in Monte Carlo (MC) simulations. An in-house MC code based on the dose planning method MC code was developed in C++ to accommodate the nonuniform dose voxel geometry package, since general-purpose MC codes use their own coupled geometry packages. We devised the package so that the entire calculation volume is first divided into a coarse mesh, and the coarse mesh is then subdivided into nonuniform voxels with variable voxel sizes based on density differences. We call this approach multi-resolution subdivision (MRS). It generates larger voxels in small density gradient regions and smaller voxels in large density gradient regions. To take into account the large dose gradients due to the beam penumbra, the nonuniform voxels can be further split using ray tracing starting from the beam edges. The accuracy of the implementation of the algorithm was verified by comparison with the data published by Rogers and Mohan. The discrepancy was found to be 1% to 2%, with a maximum of 3% at the interfaces. Two clinical cases were used to investigate the efficacy of nonuniform voxel geometry in the MC code. Applying our MRS approach, we started with an initial voxel size of 5 × 5 × 3 mm(3), which was further divided into smaller voxels; the smallest voxel size was 1.25 × 1.25 × 3 mm(3). We found that the simulation time per history for the nonuniform voxels is about 30% to 40% shorter than for the uniform fine voxels (1.25 × 1.25 × 3 mm(3)) while maintaining similar accuracy.
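The MRS idea, splitting cells where the density varies and keeping them coarse where it does not, can be sketched in one dimension (the threshold and mesh below are illustrative, not the paper's parameters):

```python
def subdivide(density, x0, x1, threshold=0.1, min_size=1):
    """Recursively split the 1D index range [x0, x1) of a density array
    into nonuniform cells: ranges with a large density spread are split,
    uniform regions are kept as one coarse cell (1D sketch of MRS)."""
    region = density[x0:x1]
    if (x1 - x0) <= min_size or max(region) - min(region) <= threshold:
        return [(x0, x1)]
    mid = (x0 + x1) // 2
    return (subdivide(density, x0, mid, threshold, min_size)
            + subdivide(density, mid, x1, threshold, min_size))

# A profile with a sharp interface gets fine cells near the interface
# and coarse cells in the uniform regions.
rho = [1.0] * 6 + [0.3] * 10
cells = subdivide(rho, 0, len(rho))
```

Fewer, larger voxels in flat regions means fewer boundary crossings and tallies per history, which is where the reported 30% to 40% speed-up comes from.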
Monte Carlo calculations of positron emitter yields in proton radiotherapy.
Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F
2012-03-21
Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β(+)-decay of radionuclides resulting from nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β(+)-activity with the predicted distribution based on the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions and so phenomenological models are typically used based on parameters derived from experimental data. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models. Hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performances of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity, at therapeutic energies, to the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β(+)-emitters produced in tissue-like media depends on the physics model and cross-sectional data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aiming at supporting improvements of MC modelling for clinical application of PET monitoring.
Monte Carlo transport simulation of velocity undershoot in zinc blende and wurtzite InN
Energy Technology Data Exchange (ETDEWEB)
Wang, Shulong; Liu, Hongxia; Gao, Bo; Zhuo, Qingqing [School of Microelectronics, Key Laboratory of Wide Band-gap Semiconductor Materials and Device, Xidian University, Xi'an, 710071 (China)
2012-09-15
Velocity undershoot in zinc blende (ZB) and wurtzite (WZ) InN is investigated by ensemble Monte Carlo (EMC) calculation. The results show that velocity undershoot arises from the energy relaxation time being relatively long compared with the momentum relaxation time. Monte Carlo transport simulations over a wide range of electric fields are presented. The results show that velocity undershoot, compared with velocity overshoot, strongly affects electron transport when the electric field changes quickly in time and space. A comparative study of WZ and ZB InN shows that WZ InN has more advantages in device applications due to its excellent electron transport properties. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Keyvan Jabbari; Jan Seuntjens
2014-01-01
An important requirement for proton therapy is a software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft t...
Application of the subgroup method to multigroup Monte Carlo calculations
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it offers an alternative computational route between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, but the model preserves the quality of the physical laws present in the ENDF format. Owing to its low computational cost, the multigroup Monte Carlo approach is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
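Point (2) above combines table sampling with delta-tracking; the rejection step itself can be sketched as Woodcock tracking in a 1D medium (the cross sections here are arbitrary illustration values):

```python
import math
import random

def woodcock_distance(sigma_t_of_x, sigma_majorant, x0, direction=1.0,
                      rng=random.random):
    """Sample a free flight by delta-tracking (Woodcock) rejection:
    fly with the majorant cross section, accept a collision with
    probability sigma_t(x)/sigma_majorant, otherwise treat it as a
    virtual collision and keep flying. Returns the real collision site."""
    x = x0
    while True:
        x += direction * (-math.log(rng()) / sigma_majorant)
        if rng() < sigma_t_of_x(x) / sigma_majorant:
            return x
```

Because the geometry enters only through `sigma_t_of_x`, no surface-crossing logic is needed, which is what makes the combination with probability-table sampling attractive.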
Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations
Energy Technology Data Exchange (ETDEWEB)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [Nuclear Research Centre "Kurchatov Institute", Moscow (Russian Federation)
2015-09-15
Burn-up calculation of VVER cores by a Monte-Carlo code is a complex process that requires large computational costs, which complicates the use of Monte-Carlo codes for design and operating calculations. It is proposed to use previously prepared isotopic compositions in Monte-Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on the use of a spectral functionality together with reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). In this work, the multiplication factors and power distributions of a fuel assembly (FA) and of a VVER core of infinite height are calculated by the Monte-Carlo code MCU using the previously prepared isotopic compositions. The MCU results were compared with the data obtained by the engineering codes.
Graphical User Interface for Simplified Neutron Transport Calculations
Energy Technology Data Exchange (ETDEWEB)
Schwarz, Randolph; Carter, Leland L
2011-07-18
A number of codes perform simple photon physics calculations. The nuclear industry is lacking in similar tools to perform simplified neutron physics shielding calculations. With the increased importance of performing neutron calculations for homeland security applications and defense nuclear nonproliferation tasks, having an efficient method for performing simple neutron transport calculations becomes increasingly important. Codes such as Monte Carlo N-particle (MCNP) can perform the transport calculations; however, the technical details in setting up, running, and interpreting the required simulations are quite complex and typically go beyond the abilities of most users who need a simple answer to a neutron transport calculation. The work documented in this report resulted in the development of the NucWiz program, which can create an MCNP input file for a set of simple geometries, source, and detector configurations. The user selects source, shield, and tally configurations from a set of pre-defined lists, and the software creates a complete MCNP input file that can be optionally run and the results viewed inside NucWiz.
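The kind of deck NucWiz assembles can be imitated in a few lines. This is a toy generator for a point neutron source in a homogeneous sphere with a track-length flux tally; the card layout is illustrative only and should be checked against the MCNP manual rather than taken as actual NucWiz output:

```python
def point_source_sphere_deck(radius_cm, material_card, density_gcc,
                             nps=100000):
    """Build a minimal MCNP-style input deck: a 2 MeV point neutron
    source at the centre of a homogeneous sphere, with an F4 tally in
    the sphere. Card syntax is a sketch; verify before real use."""
    return "\n".join([
        "Simple sphere shielding problem",   # title card
        "c --- cell cards ---",
        f"1 1 -{density_gcc} -1   imp:n=1",  # material 1 inside sphere
        "2 0        1    imp:n=0",           # graveyard outside
        "",                                  # blank line ends block
        "c --- surface cards ---",
        f"1 so {radius_cm}",                 # sphere at origin
        "",
        "c --- data cards ---",
        material_card,
        "sdef pos=0 0 0 par=n erg=2.0",
        "f4:n 1",
        f"nps {nps}",
        ""])

# Hypothetical water-like material card (atom fractions):
deck = point_source_sphere_deck(10.0, "m1 1001 2 8016 1", 1.0)
```

Templating the deck from a handful of parameters is exactly the simplification NucWiz provides over hand-writing inputs.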
Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S
2010-07-01
In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), such as LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO(4):Dy, to synchrotron radiation in the energy range of 10-34 keV has been investigated. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The calculated energy responses of all the TLDs using the EGSnrc and FLUKA codes show excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo-calculated response in the low-energy region. In the case of CaSO(4):Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75% by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental responses decreases (both responses are comparable >25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.
2013-06-01
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.
Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-06-21
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
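The PSL binning described above, sorting a patient-independent phase-space by particle type, energy and position and discarding bins outside the region of interest, can be sketched as follows (bin widths and the ROI are made-up parameters, not gDPM's):

```python
from collections import defaultdict

def build_psls(particles, e_bin=0.5, xy_bin=2.0, roi=(-10.0, 10.0)):
    """Group phase-space particles into 'phase-space-lets' keyed by
    (type, energy bin, x bin, y bin). Particles outside the square
    region of interest are dropped. Each particle is a tuple
    (ptype, energy_MeV, x_cm, y_cm)."""
    psls = defaultdict(list)
    lo, hi = roi
    for ptype, e, x, y in particles:
        if not (lo <= x <= hi and lo <= y <= hi):
            continue  # outside the treatment field: ignore
        key = (ptype, int(e / e_bin), int(x // xy_bin), int(y // xy_bin))
        psls[key].append((ptype, e, x, y))
    return psls
```

Launching one GPU batch per PSL keeps threads working on particles of the same type and similar energy, which is the divergence-avoidance requirement the abstract describes.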
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code has been used for track generation. A set of data including the particle track was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated by less than 2% from the MCNPX results. In terms of speed, the code runs 200 times faster than MCNPX: the fast MC code developed in this work takes less than 2 minutes to calculate the dose for 10(6) particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
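The pre-generated-track idea can be sketched as a bank of stored tracks that are replayed instead of re-simulated; the track format below is a guess for illustration, not the authors' MCNPX data layout:

```python
import random

def deposit_from_tracks(track_bank, n_histories, n_voxels, rng=random):
    """Score dose using pre-generated tracks: instead of transporting
    each proton from scratch, draw a stored track (a list of
    (voxel_index, energy_deposit) steps) and replay its deposits.
    Sketch of the track-repeating idea only."""
    dose = [0.0] * n_voxels
    for _ in range(n_histories):
        track = rng.choice(track_bank)   # reuse a stored track
        for voxel, edep in track:
            dose[voxel] += edep
    return dose
```

Replaying avoids re-sampling every interaction, which is the source of the reported 200x speed-up; the cost is that the bank must cover each material and energy the beam can encounter.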
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Directory of Open Access Journals (Sweden)
Keyvan Jabbari
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code has been used for track generation. A set of data including the particle track was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated by less than 2% from the MCNPX results. In terms of speed, the code runs 200 times faster than MCNPX: the fast MC code developed in this work takes less than 2 minutes to calculate the dose for 10(6) particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
Improved Monte Carlo model for multiple scattering calculations
Institute of Scientific and Technical Information of China (English)
Weiwei Cai; Lin Ma
2012-01-01
The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.
Variational Monte Carlo calculations of few-body nuclei
Energy Technology Data Exchange (ETDEWEB)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He are presented. 25 refs., 3 figs.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-01-01
A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...
GMC: a GPU implementation of a Monte Carlo dose calculation based on Geant4.
Jahnke, Lennart; Fleckenstein, Jens; Wenz, Frederik; Hesser, Jürgen
2012-03-07
We present a GPU implementation called GMC (GPU Monte Carlo) of the low-energy electromagnetic part of Geant4 using the CUDA programming interface. The classes for electron and photon interactions as well as a new parallel particle transport engine were implemented. A particle is processed not in a history-by-history manner but rather by an interaction-by-interaction method. Every history is divided into steps that are then calculated in parallel by different kernels. The geometry package is currently limited to voxelized geometries. A modified parallel Mersenne twister was used to generate random numbers, and a random number repetition method on the GPU was introduced. All phantom results showed very good agreement between GPU and CPU simulation, with gamma indices of >97.5% for a 2%/2 mm gamma criterion. The mean acceleration on one GTX 580 for all cases compared to Geant4 on one CPU core was 4860. The mean number of histories per millisecond on the GPU for all cases was 658, leading to a total simulation time for one intensity-modulated radiation therapy dose distribution of 349 s. In conclusion, Geant4-based Monte Carlo dose calculations were significantly accelerated on the GPU.
GPU-based fast Monte Carlo dose calculation for proton therapy.
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B
2012-12-07
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
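Several of the abstracts above quote a "2%/2 mm gamma passing rate" as their agreement metric. A minimal one-dimensional sketch of that metric follows; the profiles, grid spacing, and tolerances are made-up toy values, and a clinical tool would additionally handle low-dose thresholds and full 3D grids.

```python
import numpy as np

# Illustrative 1-D gamma-index evaluation, the metric behind the
# "2%/2 mm criterion" quoted in the GPU-vs-CPU comparisons above.
# All numbers here are toy values, not clinical data.

def gamma_1d(x, ref, x_eval, eval_dose, dose_tol=0.02, dist_tol=2.0):
    """Per-point gamma of `eval_dose` against reference profile `ref`."""
    norm = ref.max()
    gammas = []
    for xe, de in zip(x_eval, eval_dose):
        # Search the whole reference profile for the best combined
        # dose-difference / distance-to-agreement score.
        term = ((ref - de) / (dose_tol * norm)) ** 2 + ((x - xe) / dist_tol) ** 2
        gammas.append(np.sqrt(term.min()))
    return np.array(gammas)

x = np.linspace(0.0, 100.0, 501)            # mm, 0.2 mm spacing
ref = np.exp(-x / 80.0)                     # toy depth-dose curve
shifted = np.exp(-(x - 0.5) / 80.0)         # same curve shifted by 0.5 mm

g = gamma_1d(x, ref, x, shifted)
print(f"gamma pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```

A point passes when its gamma is at most 1, i.e. it agrees with the reference within 2% in dose or within 2 mm in position (or a compliant combination of both); a half-millimetre shift therefore passes everywhere.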
Green's function Monte Carlo calculations of ⁴He
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.A.
1988-01-01
Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry
DEFF Research Database (Denmark)
Braad, Poul-Erik; Andersen, Thomas; Hansen, Søren Baarsgaard;
2016-01-01
Purpose: CT images are used for patient specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. Methods: CT errors...... patient specific dosimetry in nuclear medicine. Accurate dosimetry was obtained with a 13-tissue ramp that included five different bone types....
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Bedaque, Paulo F; Ridgway, Gregory W; Warrington, Neill C
2015-01-01
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action. We describe a family of such manifolds that interpolate between the tangent space at one critical point, where the sign problem is milder compared to the real plane but in some cases still severe, and the union of relevant thimbles, where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling. We exemplify this approach using a simple 0 + 1 dimensional fermion model previously used in sign-problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
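Why "severe" sign problems force such manifold deformations can be seen in a minimal sketch that is deliberately *not* the paper's 0+1 dimensional fermion model: for a complex action sampled by naive reweighting, the average sign decays exponentially in the coupling, so the signal drowns in noise.

```python
import numpy as np

# Minimal sketch (not the paper's fermion model) of a sign problem under
# naive reweighting: for the complex action S(x) = x^2/2 + i*lam*x, sample
# from the real part exp(-x^2/2) and carry the phase exp(-i*lam*x) as a
# weight.  The "average sign" equals exp(-lam^2/2) analytically, so it
# collapses as lam grows -- the breakdown that thimble/manifold
# deformations are designed to avoid.

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)        # samples of the real-part measure

avg_sign = {}
for lam in (0.5, 1.0, 2.0):
    avg_sign[lam] = np.mean(np.exp(-1j * lam * x)).real
    print(f"lam={lam}: measured {avg_sign[lam]:.4f}, "
          f"exact {np.exp(-lam**2 / 2):.4f}")
```

Because statistical errors on reweighted observables scale like the inverse of this average sign, the cost of a fixed-accuracy calculation grows exponentially, which is what motivates sampling on deformed manifolds instead.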
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories: probabilistic and deterministic. Although probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: the collided components of the scalar flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. The multi-group gamma cross-section library required for this numerical transport simulation is also generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by the deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Monte Carlo calculations for r-process nucleosynthesis
Energy Technology Data Exchange (ETDEWEB)
Mumpower, Matthew Ryan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-12
A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.
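The Monte Carlo variation of uncertain nuclear model inputs described above can be illustrated with a toy propagation exercise. This is not the FRDM2012 framework: the "separation energy", its uncertainty, and the exponential observable below are invented stand-ins, chosen only to show why sampling (rather than linear error propagation) matters when the output responds exponentially to the input.

```python
import numpy as np

# Toy illustration (not the FRDM2012 framework itself) of Monte Carlo
# uncertainty propagation: perturb an uncertain nuclear input many times
# and push each sample through the calculation to get an output band.

rng = np.random.default_rng(2)
n_samples = 5_000
sigma_mass = 0.5                      # assumed input uncertainty, MeV

s_n_nominal = 4.0                     # fictitious separation energy, MeV
temperature = 0.8                     # MeV (toy value)

# Each Monte Carlo sample perturbs the separation energy...
s_n = s_n_nominal + sigma_mass * rng.standard_normal(n_samples)
# ...and the observable responds exponentially (Saha-like sensitivity),
# so a symmetric input spread maps to a skewed output distribution.
observable = np.exp(s_n / temperature)

lo, hi = np.percentile(observable, [16, 84])
print(f"output 68% band: [{lo:.1f}, {hi:.1f}]")
```

The skewed, asymmetric band is the point: a quoted symmetric mass uncertainty does not translate into a symmetric abundance uncertainty, which is why full Monte Carlo sampling is used.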
Neutron and gamma ray transport calculations in shielding system
Energy Technology Data Exchange (ETDEWEB)
Masukawa, Fumihiro; Sakamoto, Hiroki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In the radiation shields of nuclear facilities, penetrating holes of various kinds and irregular shapes are made for reasons of operation, control, and so on. These penetrating holes and gaps are filled with air or substances with relatively low shielding performance, and radiation flows out through them; this is called streaming. The calculation techniques available for shielding design or analysis of streaming problems are simplified evaluation, transport calculation, and the Monte Carlo method. In this report, an example calculation by the Monte Carlo method, as represented by the MCNP code, is discussed. A number of variance reduction techniques that seemed effective for the analysis of streaming problems were tried. To investigate the applicability of the MCNP code to streaming analysis, the objects of analysis (concrete walls without a hole and with a horizontal hole, an oblique hole, and a bent oblique hole), the analysis procedure, the composition of the concrete, the dose equivalent conversion coefficients, and the results of the analysis are reported. As the variance reduction technique, cell importance was adopted. (K.I.)
Energy Technology Data Exchange (ETDEWEB)
Cobut, V.; Frongillo, Y.; Jay-Gerin, J.-P. (Sherbrooke Univ., PQ (Canada). Faculte de Medecine); Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1992-12-01
An energy spectrum of "subexcitation electrons" produced in liquid water by electrons with initial energies of a few keV is obtained by using a Monte Carlo transport simulation calculation. It is found that the introduction of vibrational-excitation cross sections leads to the appearance of a sharp peak in the probability density function near the electronic-excitation threshold. Electrons contributing to this peak are shown to be more naturally described if a novel energy spectrum, which we propose to name the "vibrationally-relaxing electron" spectrum, is introduced. The corresponding distribution function is presented, and an empirical expression for it is given. (author)
Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gasses led to the establishment of good cross-section sets for positron interaction with gasses commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
Monte Carlo Simulation Calculation of the Critical Coupling Constant for Continuum φ⁴₂
Loinaz, Will; Willey, R. S.
1997-01-01
We perform a Monte Carlo simulation calculation of the critical coupling constant for the continuum (λ/4)φ⁴₂ theory. The critical coupling constant we obtain is [λ/μ²]_crit = 10.24(3).
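The kind of lattice simulation behind such an estimate can be sketched with a standard Metropolis update for two-dimensional lattice φ⁴. This is a hedged illustration, not the authors' code: the lattice size, couplings, and sweep count below are toy values, far too small to reproduce the quoted 10.24(3), and the continuum extrapolation is omitted entirely.

```python
import numpy as np

# Hedged sketch of a 2-d lattice phi^4 Metropolis simulation with action
#   S = sum_x [ 1/2 sum_mu (phi_{x+mu} - phi_x)^2
#               + (mu2/2) phi_x^2 + (lam/4) phi_x^4 ]
# Lattice size, couplings, and sweep counts are illustrative only.

rng = np.random.default_rng(3)
L, mu2, lam = 16, -0.5, 1.0
phi = rng.standard_normal((L, L)) * 0.1

def local_action(phi, i, j, val):
    """Action terms involving site (i, j) when it holds value `val`."""
    nn = (phi[(i + 1) % L, j] + phi[(i - 1) % L, j]
          + phi[i, (j + 1) % L] + phi[i, (j - 1) % L])
    # kinetic cross-term plus on-site potential (site-independent pieces dropped)
    return -val * nn + (2.0 + mu2 / 2.0) * val**2 + (lam / 4.0) * val**4

for sweep in range(200):                       # Metropolis sweeps
    for i in range(L):
        for j in range(L):
            old, new = phi[i, j], phi[i, j] + rng.uniform(-1.0, 1.0)
            dS = local_action(phi, i, j, new) - local_action(phi, i, j, old)
            if dS < 0 or rng.random() < np.exp(-dS):
                phi[i, j] = new

print(f"<|m|> proxy: {abs(phi.mean()):.3f}")
```

A production calculation would instead use cluster updates near criticality (to beat critical slowing down), measure Binder cumulants across lattice sizes, and extrapolate λ/μ² to the continuum limit.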
The application of the Monte-Carlo neutron transport code MCNP to a small "nuclear battery" system
Puigdellívol Sadurní, Roger
2009-01-01
The project consists of calculating the k-eff of a small nuclear battery. The Monte Carlo neutron transport code MCNP is used to calculate the k-eff. The calculations are done at the beginning of life to determine the capacity of the core to become critical under different conditions. These conditions are the study parameters that determine the criticality of the core: the uranium enrichment, the packing factor of the coated particles (TRISO), and the size of the core.
Calculating kinetics parameters and reactivity changes with continuous-energy Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Kiedrowski, Brian C [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Wilson, Paul [UNIV. WISCONSIN
2009-01-01
The iterated fission probability interpretation of the adjoint flux forms the basis for a method to perform adjoint weighting of tally scores in continuous-energy Monte Carlo k-eigenvalue calculations. Applying this approach, adjoint-weighted tallies are developed for two applications: calculating point reactor kinetics parameters and estimating changes in reactivity from perturbations. Calculations are performed in the widely-used production code, MCNP, and the results of both applications are compared with discrete ordinates calculations, experimental measurements, and other Monte Carlo calculations.
Bond-updating mechanism in cluster Monte Carlo calculations
Heringa, J. R.; Blöte, H. W. J.
1994-03-01
We study a cluster Monte Carlo method with an adjustable parameter: the number of energy levels of a demon mediating the exchange of bond energy with the heat bath. The efficiency of the algorithm in the case of the three-dimensional Ising model is studied as a function of the number of such levels. The optimum is found in the limit of an infinite number of levels, where the method reproduces the Wolff or the Swendsen-Wang algorithm. In this limit the size distribution of flipped clusters approximates a power law more closely than that for a finite number of energy levels.
Energy Technology Data Exchange (ETDEWEB)
Davidson, S; Followill, D; Ibbott, G [University of Texas M. D. Anderson Cancer Center, Houston, TX (United States); Cui, J; Deasy, J [Washington University, St. Louis, MO (United States)], E-mail: sedavids@mdanderson.org
2008-02-01
The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD)
Directory of Open Access Journals (Sweden)
Jingang Liang
2016-06-01
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
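The particle hand-off at domain boundaries described above can be sketched schematically. This serial toy is an assumption-laden stand-in, not RMC's algorithm: plain Python deques play the role of the asynchronous MPI messages, and a one-dimensional random walk plays the role of neutron transport.

```python
from collections import deque
import random

# Schematic (serial) sketch of domain decomposition: space is split into
# domains, each "processor" tracks only its own particles, and a particle
# reaching a domain boundary is handed to the neighbor's receive buffer
# instead of being tracked further.  Real codes use asynchronous MPI
# messages; deques stand in for those here.

random.seed(4)
BOUNDARY, X_MAX = 5.0, 10.0
inbox = {0: deque(), 1: deque()}      # per-domain receive buffers

def track(x, domain):
    """Random-walk a particle until it leaks, is absorbed, or leaves the domain."""
    while True:
        x += random.uniform(-1.0, 1.0)
        if x < 0.0 or x > X_MAX:
            return "leaked", x
        if random.random() < 0.05:
            return "absorbed", x
        if domain == 0 and x >= BOUNDARY:
            return "handoff", x       # belongs to domain 1 now
        if domain == 1 and x < BOUNDARY:
            return "handoff", x       # belongs to domain 0 now

for _ in range(1000):                 # source particles all born in domain 0
    inbox[0].append(random.uniform(0.0, BOUNDARY))

done = 0
while inbox[0] or inbox[1]:           # "termination": all buffers empty
    for dom in (0, 1):
        while inbox[dom]:
            fate, x = track(inbox[dom].popleft(), dom)
            if fate == "handoff":
                inbox[1 - dom].append(x)   # communicate to neighbor domain
            else:
                done += 1

print(f"{done} particle histories completed")
```

In a parallel setting the "all buffers empty" test becomes a genuine distributed termination problem, since particles may still be in flight between ranks; that is precisely why an asynchronous communication algorithm is needed.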
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.
Yang, Lina; Minnich, Austin J
2017-03-14
Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.
Monte Carlo calculation of "skyshine" neutron dose from ALS (Advanced Light Source)
Energy Technology Data Exchange (ETDEWEB)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on "skyshine" neutron dose from the ALS: sources of radiation; ALS modeling for skyshine calculations; the MORSE Monte Carlo code; implementation of MORSE; results of skyshine calculations from the storage ring; and comparison of MORSE shielding calculations.
Monte Carlo study of electron transport in monolayer silicene
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2016-11-01
Electron mobility and diffusion coefficients in monolayer silicene are calculated by Monte Carlo simulations using simplified band structure with linear energy bands. Results demonstrate reasonable agreement with the full-band Monte Carlo method in low applied electric field conditions. Negative differential resistivity is observed and an explanation of the origin of this effect is proposed. Electron mobility and diffusion coefficients are studied in low applied electric field conditions. We demonstrate that a comparison of these parameter values can provide a good check that the calculation is correct. Low-field mobility in silicene exhibits a T⁻³ temperature dependence for nondegenerate electron gas conditions and T⁻¹ for higher electron concentrations, when degenerate conditions are imposed. It is demonstrated that to explain the relation between mobility and temperature in a nondegenerate electron gas, the linearity of the band structure has to be taken into account. It is also found that electron-electron scattering only slightly modifies low-field electron mobility in degenerate electron gas conditions.
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.
Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-11-21
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM.
Monte Carlo PENRADIO software for dose calculation in medical imaging
Adrien, Camille; Lòpez Noriega, Mercedes; Bonniaud, Guillaume; Bordy, Jean-Marc; Le Loirec, Cindy; Poumarede, Bénédicte
2014-06-01
The increase on the collective radiation dose due to the large number of medical imaging exams has led the medical physics community to deeply consider the amount of dose delivered and its associated risks in these exams. For this purpose we have developed a Monte Carlo tool, PENRADIO, based on a modified version of PENELOPE code 2006 release, to obtain an accurate individualized radiation dose in conventional and interventional radiography and in computed tomography (CT). This tool has been validated showing excellent agreement between the measured and simulated organ doses in the case of a hip conventional radiography and a coronography. We expect the same accuracy in further results for other localizations and CT examinations.
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
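Item (2) above, deciding that particle streaming has finished, is often done in a counting form: each rank tracks how many particles it has created and how many it has completed, and streaming is over only when the global totals match. The sketch below is a hypothetical serial stand-in; a real code would gather the counters with non-blocking MPI reductions and must take a consistent snapshot to avoid false positives.

```python
# Schematic counting-based test for "particle streaming has finished":
# every rank reports particles created and particles completed; streaming
# is done when the global totals match.  Plain Python objects stand in
# for MPI ranks and reductions here.

class Rank:
    def __init__(self):
        self.created = 0
        self.completed = 0

    def run_some_work(self, new_particles, finished_particles):
        self.created += new_particles
        self.completed += finished_particles

ranks = [Rank() for _ in range(4)]

# Rank 0 sources 100 particles; some finish on other ranks after streaming.
ranks[0].run_some_work(100, 60)
ranks[1].run_some_work(0, 25)      # particles received from rank 0
ranks[2].run_some_work(0, 15)
ranks[3].run_some_work(0, 0)

def streaming_finished(ranks):
    created = sum(r.created for r in ranks)
    completed = sum(r.completed for r in ranks)
    return created == completed    # valid only on a consistent snapshot

print("finished:", streaming_finished(ranks))
```

The subtlety, and the reason this is a scalability topic at all, is taking that consistent global snapshot cheaply while particles are still in flight between ranks.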
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than what could fit in memory of a single processor, thus it must be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers
Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-01-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the case of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentiall...
COMET-PE: an incident fluence response expansion transport method for radiotherapy calculations
Hayward, Robert M.; Rahnema, Farzad
2013-05-01
Accurate dose calculation is a central component of radiotherapy treatment planning. A new method of dose calculation has been developed based on transport theory and validated by comparison to Monte Carlo methods. The coarse mesh transport method has been extended to allow coupled photon-electron transport in 3D. The method combines stochastic pre-computation with a deterministic solver to achieve high accuracy and precision. To enhance the method for radiotherapy calculations, a new angular basis was derived, and an analytical source treatment was developed. Validation was performed by comparison to DOSXYZnrc using a heterogeneous interface phantom composed of water, aluminum, and lung. Calculations of both kinetic energy released per unit mass and dose were compared. Good agreement was found with a maximum error and root mean square relative error of less than 1.5% for all cases. The results show that the new method achieves an accuracy comparable to Monte Carlo.
Monte Carlo Study of Temperature-dependent Non-diffusive Thermal Transport in Si Nanowires
Ma, Lei; Liu, Mengmeng; Zhao, Xuxin; Wu, Qixing; Sun, Hongyuan
2016-01-01
Non-diffusive thermal transport has gained extensive research interest recently due to its important implications for the fundamental understanding of material phonon mean free path distributions and many nanoscale energy applications. In this work, we systematically investigate the role of boundary scattering and nanowire length on non-diffusive thermal transport in thin silicon nanowires by rigorously solving the phonon Boltzmann transport equation (BTE) using a variance-reduced Monte Carlo technique across a range of temperatures. The simulations use the complete phonon dispersion and spectral lifetime data obtained from first-principles density functional theory calculations as input, without any adjustable parameters. Our BTE simulation results show that the nanowire length plays an important role in determining the thermal conductivity of silicon nanowires. In addition, our simulation results suggest a significant phonon confinement effect for the previously measured silicon nanowires. These findings are important fo...
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes and 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous-energy cross sections and S(α,β) bound molecular scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jonathan A., E-mail: walshjon@mit.edu [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Palmer, Todd S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, Todd J. [XTD-IDA: Theoretical Design, Integrated Design and Assessment, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2015-12-15
Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
A comparison of Monte Carlo dose calculation denoising techniques
El Naqa, I.; Kawrakow, I.; Fippel, M.; Siebers, J. V.; Lindsay, P. E.; Wickerhauser, M. V.; Vicic, M.; Zakarian, K.; Kauffmann, N.; Deasy, J. O.
2005-03-01
Recent studies have demonstrated that Monte Carlo (MC) denoising techniques can reduce MC radiotherapy dose computation time significantly by preferentially eliminating statistical fluctuations ('noise') through smoothing. In this study, we compare new and previously published approaches to MC denoising, including 3D wavelet threshold denoising with sub-band adaptive thresholding, content adaptive mean-median-hybrid (CAMH) filtering, locally adaptive Savitzky-Golay curve-fitting (LASG), anisotropic diffusion (AD) and an iterative reduction of noise (IRON) method formulated as an optimization problem. Several challenging phantom and computed-tomography-based MC dose distributions with varying levels of noise formed the test set. Denoising effectiveness was measured in three ways: by improvements in the mean-square-error (MSE) with respect to a reference (low noise) dose distribution; by the maximum difference from the reference distribution and by the 'Van Dyk' pass/fail criteria of either adequate agreement with the reference image in low-gradient regions (within 2% in our case) or, in high-gradient regions, a distance-to-agreement-within-2% of less than 2 mm. Results varied significantly based on the dose test case: greater reductions in MSE were observed for the relatively smoother phantom-based dose distribution (up to a factor of 16 for the LASG algorithm); smaller reductions were seen for an intensity modulated radiation therapy (IMRT) head and neck case (typically, factors of 2-4). Although several algorithms reduced statistical noise for all test geometries, the LASG method had the best MSE reduction for three of the four test geometries, and performed the best for the Van Dyk criteria. However, the wavelet thresholding method performed better for the head and neck IMRT geometry and also decreased the maximum error more effectively than LASG. In almost all cases, the evaluated methods provided acceleration of MC results towards statistically more accurate
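As an illustration of the smoothing idea behind the LASG approach, the sketch below applies a plain (fixed-window, non-adaptive) Savitzky-Golay filter to a noisy 1D dose profile; the profile shape, window length, and noise level are assumptions for demonstration only, not the paper's adaptive algorithm.

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Least-squares polynomial smoothing weights (Savitzky-Golay)."""
    m = window // 2
    t = np.arange(-m, m + 1)
    A = np.vander(t, polyorder + 1, increasing=True)
    return np.linalg.pinv(A)[0]   # fitted value at the window centre

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 500)
true_dose = np.exp(-((x - 5.0) ** 2) / 4.0)        # smooth "true" profile
noisy = true_dose + rng.normal(0.0, 0.03, x.size)  # MC-like statistical noise

# Fixed-window cubic smoothing (the LASG method adapts the window locally)
h = savgol_coeffs(31, 3)
denoised = np.convolve(noisy, h, mode="same")

mse_before = np.mean((noisy - true_dose) ** 2)
mse_after = np.mean((denoised - true_dose) ** 2)
print(f"MSE reduction factor: {mse_before / mse_after:.1f}")
```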
Towards real-time photon Monte Carlo dose calculation in the cloud.
Ziegenhein, Peter; Kozin, Igor; Kamerling, Cornelis Philippus; Oelfke, Uwe
2017-01-31
Near real-time application of Monte Carlo (MC) dose calculation in the clinic and in research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as GPUs or clusters of central processing unit (CPU)-based systems. Both platforms are expensive in terms of purchase and maintenance costs and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and efficiently transports data to and from the cloud. The client application integrates seamlessly into a Treatment Planning System (TPS). It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. The Advanced Encryption Standard (AES) was used to add an additional security layer which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case down to 1% statistical uncertainty. The computation times include the data transportation processes with the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative to currently used GPU or cluster solutions for near real-time accurate dose calculations.
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, as more particle tracks can be simulated within a short response time.
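The stream-per-worker pattern described above (SPRNG/DCMT streams feeding parallel MC batches) can be sketched with NumPy's SeedSequence spawning, which stands in here for the parallel RNG libraries; the toy "physics" (exponential free paths with an assumed 1 cm mean) is for illustration only.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def track_batch(seed, n):
    """Track one batch of particles with its own independent RNG stream."""
    rng = np.random.default_rng(seed)
    # Toy physics: exponential free paths with an assumed 1 cm mean free path
    return rng.exponential(scale=1.0, size=n).sum()

n_workers, n_per_worker = 4, 100_000
# SeedSequence.spawn yields statistically independent child streams,
# playing the role of the SPRNG/DCMT libraries used by parallel MC4
seeds = np.random.SeedSequence(42).spawn(n_workers)
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    totals = list(pool.map(lambda s: track_batch(s, n_per_worker), seeds))
mean_path = sum(totals) / (n_workers * n_per_worker)
print(f"estimated mean free path: {mean_path:.3f} cm")
```

Because every worker owns a distinct stream, the parallel result is reproducible and statistically equivalent to a serial run, mirroring the serial-vs-parallel validation reported in the paper.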
Directory of Open Access Journals (Sweden)
Diego Ferraro
2011-01-01
Full Text Available Monte Carlo neutron transport codes are usually used to perform criticality calculations and to solve shielding problems due to their capability to model complex systems without major approximations. However, these codes demand high computational resources. The improvement in computer capabilities has led to several new applications of Monte Carlo neutron transport codes. An interesting one is to use this method to perform cell-level fuel assembly calculations in order to obtain few-group constants to be used in core calculations. In the present work, the recently developed VTT cell-oriented neutronic calculation code Serpent v.1.1.7 is used to perform cell calculations of a theoretical BWR lattice benchmark with burnable poisons, and the main results are compared with reported ones and with calculations performed with Condor v.2.61, INVAP's collision probability neutronic cell code.
Energy Technology Data Exchange (ETDEWEB)
Boudou, C
2006-09-15
High-grade gliomas are extremely aggressive brain tumours. Specific techniques have been proposed that combine the presence of high-atomic-number elements within the tumour with irradiation by a low-energy x-ray beam (below 100 keV) from a synchrotron source. For the sake of clinical trials, the use of a treatment planning system has to be foreseen, as well as tailored dosimetry protocols. The objectives of this thesis work were (1) the development of a dose calculation tool based on a Monte Carlo particle transport code and (2) the implementation of an experimental method for the three-dimensional verification of the dose delivered. The dosimetric tool is an interface between tomography images of the patient or sample and the MCNPX general-purpose code. In addition, dose distributions were measured with a radiosensitive polymer gel, providing acceptable results compared to calculations.
Directory of Open Access Journals (Sweden)
P. Orea
2003-01-01
Full Text Available We have performed Monte Carlo simulations in the canonical ensemble of a hard-sphere fluid adsorbed in microporous media. The pressure of the adsorbed fluid is calculated using an original procedure that includes the calculation of the pressure tensor components during the simulation. In order to confirm the equivalence of bulk and adsorbed fluid pressures, we have exploited the mechanical condition of equilibrium and performed additional canonical Monte Carlo simulations of a super-system "bulk fluid + adsorbed fluid". When the configuration of a model porous medium permits each of its particles to be in contact with adsorbed fluid particles, we found that these pressures are equal. Unlike the grand canonical Monte Carlo method, the proposed calculation approach can be used efficiently to obtain adsorption isotherms over a wide range of fluid densities and porosities of the adsorbent.
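A minimal canonical-ensemble hard-sphere displacement move, the building block of such simulations, might look like the following sketch; the particle number, box size, and step size are arbitrary choices, and the paper's pressure-tensor machinery is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, sigma = 64, 6.0, 1.0          # particles, periodic box edge, sphere diameter

# Start from a dilute cubic lattice so there are no initial overlaps
side = int(np.ceil(N ** (1 / 3)))
grid = np.array([(i, j, k) for i in range(side)
                 for j in range(side) for k in range(side)][:N], float)
pos = grid * (L / side)

def overlaps(pos, i, trial):
    d = pos - trial
    d -= L * np.round(d / L)              # minimum-image convention
    r2 = np.einsum("ij,ij->i", d, d)
    r2[i] = np.inf                        # ignore self-distance
    return (r2 < sigma ** 2).any()

accepted, n_moves = 0, 5000
for _ in range(n_moves):
    i = rng.integers(N)
    trial = (pos[i] + rng.uniform(-0.2, 0.2, 3)) % L
    if not overlaps(pos, i, trial):       # hard spheres: accept iff no overlap
        pos[i] = trial
        accepted += 1
acc_ratio = accepted / n_moves
print(f"acceptance ratio: {acc_ratio:.2f}")
```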
Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A
2011-01-01
Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta-emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...
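The DPK tally itself (energy deposited per unit volume in spherical shells around a point isotropic source, per source particle) can be sketched as below; the exponential deposition-radius model is a stand-in assumption for real electron transport, not FLUKA's physics.

```python
import numpy as np

rng = np.random.default_rng(7)
n_hist, E0 = 200_000, 1.0              # histories, source energy (MeV)

# Toy transport: each history deposits its full energy at a radius drawn
# from an assumed exponential distribution (a real code would sample the
# actual electron interactions instead)
r = rng.exponential(scale=0.4, size=n_hist)    # cm, assumed toy mean range

edges = np.linspace(0.0, 2.0, 41)
counts, _ = np.histogram(r, bins=edges, weights=np.full(n_hist, E0))
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
dpk = counts / (n_hist * shell_vol)    # MeV / cm^3 per source particle

recovered = counts.sum() / n_hist
print(f"energy recovered inside 2 cm: {recovered:.3f} MeV")
```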
Energy Technology Data Exchange (ETDEWEB)
Sloan, D.P.
1983-05-01
Morel (1981) has developed multigroup Legendre cross sections suitable for input to standard discrete ordinates transport codes for performing charged-particle Fokker-Planck calculations in one-dimensional slab and spherical geometries. Since the Monte Carlo neutron transport code, MORSE, uses the same multigroup cross section data that discrete ordinates codes use, it was natural to consider whether Fokker-Planck calculations could be performed with MORSE. In order to extend the unique three-dimensional forward or adjoint capability of MORSE to Fokker-Planck calculations, the MORSE code was modified to correctly treat the delta-function scattering of the energy operator, and a new set of physically acceptable cross sections was derived to model the angular operator. Morel (1979) has also developed multigroup Legendre cross sections suitable for input to standard discrete ordinates codes for performing electron Boltzmann calculations. These electron cross sections may be treated in MORSE with the same methods developed to treat the Fokker-Planck cross sections. The large magnitude of the elastic scattering cross section, however, severely increases the computation or run time. It is well-known that approximate elastic cross sections are easily obtained by applying the extended transport (or delta function) correction to the Legendre coefficients of the exact cross section. An exact method for performing the extended transport cross section correction produces cross sections which are physically acceptable. Sample calculations using electron cross sections have demonstrated this new technique to be very effective in decreasing the large magnitude of the cross sections.
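The extended transport (delta-function) correction mentioned above subtracts the highest retained Legendre moment from all lower moments, folding the strongly forward-peaked part of elastic scattering into unscattered flux; a sketch with made-up moments:

```python
import numpy as np

def extended_transport_correction(sigma_l):
    """Apply the extended transport (delta-function) correction.

    Subtracting the highest retained Legendre moment sigma_L from every
    moment treats the forward-peaked component as unscattered, reducing
    the magnitude of the cross sections the transport code must handle.
    """
    sigma_l = np.asarray(sigma_l, float)
    return sigma_l - sigma_l[-1]

# Made-up forward-peaked elastic moments (barns), highest retained order L = 5
sigma = np.array([100.0, 95.0, 88.0, 80.0, 71.0, 62.0])
corrected = extended_transport_correction(sigma)
print(corrected)   # corrected moments: 38, 33, 26, 18, 9, 0
```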
Fermion Monte Carlo Calculations on Liquid-3He
Energy Technology Data Exchange (ETDEWEB)
Kalos, M H; Colletti, L; Pederiva, F
2004-03-16
Methods and results for calculations of the ground state energy of the bulk system of ³He atoms are discussed. The results are encouraging: the authors believe that they demonstrate that their methods offer a solution of the 'fermion sign problem' and the possibility of direct computation of many-fermion systems with no uncontrolled approximations. Nevertheless, the method is still rather inefficient compared with variational or fixed-node approximate methods. There appears to be a significant population size effect. The situation is improved by the inclusion of 'Second Stage Importance Sampling' and of 'Acceptance/Rejection' adapted to their needs.
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver receives a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means of modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is
The impact of advances in computer technology on particle transport Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Martin, W.R. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering; Rathkopf, J.A. [Lawrence Livermore National Lab., CA (United States); Brown, F.B. [Knolls Atomic Power Lab., Schenectady, NY (United States)
1992-01-21
Advances in computer technology, including hardware, architectural, and software advances, have led to dramatic gains in computer performance over the past decade. We summarize these performance trends and discuss the extent to which particle transport Monte Carlo codes have been able to take advantage of these performance gains. We consider MIMD, SIMD, and parallel distributed computer configurations for particle transport Monte Carlo applications. Some specific experience with vectorization and parallelization of production Monte Carlo codes is included. The topic of parallel random number generation is discussed in some detail. Finally, some software issues that hinder the implementation of Monte Carlo methods on parallel processors are addressed.
Molecular transport calculations with Wannier Functions
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2005-01-01
We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane...... is applied to a hydrogen molecule in an infinite Pt wire and a benzene-dithiol (BDT) molecule between Au(111) surfaces. We show that the transmission function of BDT in a wide energy window around the Fermi level can be completely accounted for by only two molecular orbitals. (c) 2005 Elsevier B.V. All...
Srna - Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Directory of Open Access Journals (Sweden)
Ilić Radovan D.
2002-01-01
Full Text Available This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted to the use of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulations obtained with the PETRA and GEANT programs. The simulation of proton beam characterization by means of the multi-layer Faraday cup and the spatial distribution of positron emitters obtained by our program indicate the imminent application of Monte Carlo techniques in clinical practice.
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Sakamoto, Y
2002-01-01
In the prevention of nuclear disasters, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into consideration. For the prediction of individual doses in future countermeasures, it is important to confirm the reliability of methods that evaluate dose equivalent rate distributions and energy spectra using Monte Carlo radiation transport calculation codes, and to identify the factors which influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant that occurred on September 30, 1999. The radiation transport calculations, including burn-up calculations, were done using the structural info...
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry
DEFF Research Database (Denmark)
Braad, Poul-Erik; Andersen, Thomas; Hansen, Søren Baarsgaard;
2016-01-01
Purpose: CT images are used for patient specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. Methods: CT errors...... as a function of patient size, CT reconstruction, and tube current modulation methods were assessed in a phantom experiment on a clinical CT system. The impact of tissue segmentation methods and CT number variations on EGSnrc Monte Carlo calculated absorbed dose distributions was assessed for 99mTc and 131I...... in the ICRP/ICRU male phantom and in a patient PET/CT-scanned with 124I prior to radioiodine therapy. Results: CT number variations types and accurate...
Report of 'Monte Carlo calculation summer seminar'
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Kume, Etsuo; Yatabe, Shigeru; Maekawa, Fujio; Yamamoto, Toshihiro; Nagaya, Yasunobu; Mori, Takamasa [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ueki, Kohtaro [Ship Research Inst., Tokyo (Japan); Naito, Yoshitaka [Nippon Advanced Information Service, Tokai, Ibaraki (Japan)
2001-02-01
The 'Monte Carlo Calculation Summer Seminar', sponsored by the Research Committee on Particle Simulation with the Monte Carlo Method of the Atomic Energy Society of Japan, was held on 26-28 July 2000 at the Tokai Research Establishment, Japan Atomic Energy Research Institute. There were 111 participants from universities, research institutes and companies. In the beginner course, a lecture on the fundamental theory of the Monte Carlo method was given, and MCNP-4B2, its attached libraries and sample inputs were installed on notebook personal computers. As the seminar was the first attempt of its kind in Japan, the general review and lectures, installation, and exercise calculations are summarized in this report. (author)
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Energy Technology Data Exchange (ETDEWEB)
Bekar, Kursat B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Weber, Charles F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Monte Carlo calculation of quantum tunneling in the dilute instanton limit
Cross, M. C.
1986-01-01
A new approach for estimating small quantum tunneling rates by Monte Carlo calculation is proposed and demonstrated on a simple one-dimensional model. The application to many-body situations such as atomic exchange in solid ³He is discussed.
Widder, Joachim; Hollander, Miranda; Ubbels, Jan F.; Bolt, Rene A.; Langendijk, Johannes A.
2010-01-01
Purpose: To define a method of dose prescription employing Monte Carlo (MC) dose calculation in stereotactic body radiotherapy (SBRT) for lung tumours aiming at a dose as low as possible outside of the PTV. Methods and materials: Six typical T1 lung tumours - three small, three large - were construc
Hard, charged spheres in spherical pores. Grand canonical ensemble Monte Carlo calculations
DEFF Research Database (Denmark)
Sloth, Peter; Sørensen, T. S.
1992-01-01
A model consisting of hard charged spheres inside hard spherical pores is investigated by grand canonical ensemble Monte Carlo calculations. It is found that the mean ionic density profiles in the pores are almost the same when the wall of the pore is moderately charged as when it is uncharged...
A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
Energy Technology Data Exchange (ETDEWEB)
Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology
2010-02-15
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to illustrate the needed theoretical ingredients, to give a view of how the code is organized, and to describe what a user should provide in order to use it. (orig.)
Benchmark calculation of no-core Monte Carlo shell model in light nuclei
Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; doi:10.1063/1.3584062
2011-01-01
The Monte Carlo shell model is applied for the first time to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h^-1 per Bq kg^-1) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the D_CF values calculated by the three codes are in very good agreement with one another. The comparison between these results and the results deduced previously by other authors indicates a good ag...
Multiplatform application for calculating a combined standard uncertainty using a Monte Carlo method
Niewinski, Marek; Gurnecki, Pawel
2016-12-01
The paper presents a new computer program for calculating a combined standard uncertainty. It implements the algorithm described in JCGM 101:2008, which concerns the use of a Monte Carlo method as an implementation of the propagation of distributions for uncertainty evaluation. The accuracy of the calculation is achieved by using high-quality random number generators. The paper describes the main principles of the program and compares the results obtained with example problems presented in JCGM Supplement 1.
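The propagation of distributions that JCGM 101:2008 prescribes is straightforward to sketch: sample each input quantity from its assigned distribution, evaluate the measurement model for every draw, and take the standard deviation of the outputs as the combined standard uncertainty. A minimal illustration with a hypothetical additive model (not the program described above):

```python
import random
import statistics

def mc_uncertainty(model, samplers, draws=200_000, seed=42):
    """JCGM 101-style propagation of distributions: returns the Monte Carlo
    estimate of the output mean and combined standard uncertainty."""
    rng = random.Random(seed)
    outputs = [model(*(s(rng) for s in samplers)) for _ in range(draws)]
    return statistics.fmean(outputs), statistics.stdev(outputs)

# Hypothetical model Y = X1 + X2, with X1 ~ N(10, 0.1^2) and X2 ~ U(-0.2, 0.2)
samplers = [
    lambda rng: rng.gauss(10.0, 0.1),      # standard uncertainty 0.1
    lambda rng: rng.uniform(-0.2, 0.2),    # standard uncertainty 0.2/sqrt(3)
]
mean, u_c = mc_uncertainty(lambda x1, x2: x1 + x2, samplers)
# analytic combined standard uncertainty: sqrt(0.1**2 + (0.2/3**0.5)**2) ~= 0.153
```

For this linear model the Monte Carlo result reproduces the GUM root-sum-of-squares value; the method's advantage appears for nonlinear models or strongly non-Gaussian inputs, where it also yields coverage intervals directly from the output sample.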
Progress Towards Optimally Efficient Schemes for Monte Carlo Thermal Radiation Transport
Energy Technology Data Exchange (ETDEWEB)
Smedley-Stevenson, R P; Brooks III, E D
2007-09-26
In this summary we review the complementary research being undertaken at AWE and LLNL aimed at developing optimally efficient algorithms for Monte Carlo thermal radiation transport based on the difference formulation. We conclude by presenting preliminary results on the application of Newton-Krylov methods for solving the Symbolic Implicit Monte Carlo (SIMC) energy equation.
Monte Carlo calculations of efficiencies for photon interactions in plastic scintillators
Energy Technology Data Exchange (ETDEWEB)
Bonzi, E.V.; Mainardi, R.T. (Facultad de Matematica, Astronomia y Fisica, Univ. Nacional de Cordoba (Argentina))
1992-12-01
Energy absorption and total peak efficiencies for plastic scintillators have been calculated by means of the Monte Carlo method. These results are of interest for potential uses of plastic scintillators as dosimetric or spectrometric devices. The calculations were carried out for photon energies from 2 keV up to 1 MeV, considering all physical effects involved in each energy range: photoelectric absorption, Compton scattering, and Rayleigh scattering. As a consistency test, the same code was used to calculate efficiencies for NaI scintillators. The agreement with results published previously by other authors, within calculated errors, is very satisfactory. (orig.).
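The core of such an efficiency calculation is analog sampling of photon histories: draw a free path from the exponential attenuation law, then pick the interaction channel with probability proportional to its cross section. A minimal single-slab sketch of the first interaction, with illustrative attenuation coefficients (not the scintillator data of the paper):

```python
import math
import random

def simulate_photons(n, thickness, mu, seed=7):
    """Analog Monte Carlo for first interactions in a slab.
    mu maps channel name -> linear attenuation coefficient (cm^-1);
    returns the interacting fraction and per-channel counts."""
    rng = random.Random(seed)
    mu_total = sum(mu.values())
    counts = {c: 0 for c in mu}
    for _ in range(n):
        path = -math.log(rng.random()) / mu_total   # sampled free path (cm)
        if path >= thickness:
            continue                                # photon escapes uncollided
        xi = rng.random() * mu_total                # channel ~ cross-section ratio
        acc = 0.0
        for c, m in mu.items():
            acc += m
            if xi < acc:
                counts[c] += 1
                break
    return sum(counts.values()) / n, counts

# illustrative coefficients for a low-Z scintillator at one photon energy
frac, counts = simulate_photons(100_000, thickness=2.0,
                                mu={"photoelectric": 0.002,
                                    "compton": 0.090,
                                    "rayleigh": 0.008})
# analytic check: the interaction probability is 1 - exp(-mu_total * thickness)
```

A full efficiency code then follows the scattered photon and secondary electrons to score the energy actually absorbed, which is what separates the total peak efficiency from the simple interaction probability above.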
Monte Carlo dose calculation improvements for low energy electron beams using eMC.
Fix, Michael K; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J; Manser, Peter
2010-08-21
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of the mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm, with applicators ranging from 6 x 6 to 25 x 25 cm(2) of a Varian Clinac 2300C/D, with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d(max) and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm(2) at SSD 100 cm. Those differences are reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm, the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm, leading to significant improvements in the accuracy of the dose calculation.
Meric, N; Bor, D
1999-01-01
Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam of 67 kVp. Simulation of the experiment has been carried out by the Monte Carlo technique under the same input conditions. Comparison of the measured and predicted data with each other and with the previously reported values has been given. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions; particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested on two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry, with averaged dose differences of 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy using an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: (1) methodology and (2) applications in industrial, medical, and research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte Carlo calculations with data storage: first, quick simulations drawing on a database of particle histories built through a previous Monte Carlo simulation; and second, a neural-network approach whose learning base is generated through a previous Monte Carlo simulation. This document gathers the slides of the presentations.
Energy Technology Data Exchange (ETDEWEB)
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H [Samsung Medical Center, Seoul (Korea, Republic of)
2015-06-15
Purpose: To evaluate the shielding wall design protecting patients, staff, and members of the general public from secondary neutrons, using a simple analytical solution and the transport codes MCNPX, ANISN, and FLUKA. Methods: Analytical and multi-code transport evaluations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical methods, which produce conservative estimates of the shielding dose equivalent, were used for the analytical evaluations. The radiation transport was then simulated with the transport codes. The neutron dose at each evaluation point is obtained as the product of the simulated fluence and the neutron dose coefficients introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the proton beam loss point. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912, and 0.943 mSv/yr, and at the entrance of the cyclotron room 0.465, 0.790, 0.522, and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA, and MCNP, respectively. Most MCNPX and FLUKA results, which use the detailed geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytical model and multiple transport codes. We confirmed that the shielding is adequate for areas accessible to people during operation of the proton facility.
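The dose-evaluation step described above is a simple folding of the simulated neutron fluence with fluence-to-dose-equivalent conversion coefficients. A sketch with hypothetical group fluences and coefficients (not the actual ICRP-74 table values or facility data):

```python
def neutron_dose_equivalent_mSv_per_yr(fluence_per_primary, h_pSv_cm2,
                                       primaries_per_year):
    """Fold a group-wise neutron fluence (cm^-2 per primary) with
    fluence-to-dose-equivalent coefficients (pSv cm^2), scale by the
    annual number of primaries, and convert pSv -> mSv."""
    dose_pSv = sum(f * h for f, h in zip(fluence_per_primary, h_pSv_cm2))
    dose_pSv *= primaries_per_year
    return dose_pSv * 1e-9  # 1 mSv = 1e9 pSv

# hypothetical 3-group example (thermal, intermediate, fast)
dose = neutron_dose_equivalent_mSv_per_yr(
    fluence_per_primary=[2.0e-9, 5.0e-10, 1.0e-10],
    h_pSv_cm2=[10.0, 40.0, 400.0],
    primaries_per_year=1.0e16,
)
```

In practice the fluence per primary at each evaluation point comes from the transport tally, and the energy grouping must match the grid on which the conversion coefficients are tabulated.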
Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)
2014-05-15
Identification and demining of landmines is a very important issue for the safety of people and for economic development. Several methods have been proposed in the past to address it. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as part of a complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. The Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To consider the soil effect, the average soil composition was analyzed and applied to the calculation. These results have been used to determine the specification of the landmine detector.
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous energy Monte Carlo method. The method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in nuclear reactor core calculations, especially for HTGR cores. (author).
MCNPX Monte Carlo simulations of particle transport in SiC semiconductor detectors of fast neutrons
Sedlačková, K.; Zat'ko, B.; Šagátová, A.; Pavlovič, M.; Nečas, V.; Stacho, M.
2014-05-01
The aim of this paper was to investigate particle transport properties of a fast neutron detector based on silicon carbide. The MCNPX (Monte Carlo N-Particle eXtended) code was used in our study because it allows seamless particle transport: not only interacting neutrons can be inspected, but secondary particles can also be banked for subsequent transport. Modelling of the fast-neutron response of a SiC detector was carried out for fast neutrons produced by a 239Pu-Be source with a mean energy of about 4.3 MeV. Using the MCNPX code, the following quantities were calculated: secondary particle flux densities, reaction rates of elastic/inelastic scattering and other nuclear reactions, the distribution of residual ions, deposited energy, and the energy distribution of pulses. The reaction rates calculated for different types of reactions and the resulting energy deposition values showed that the incident neutrons transfer part of their energy predominantly via elastic scattering on silicon and carbon atoms. Other fast-neutron induced reactions include inelastic scattering and nuclear reactions producing α-particles and protons. Silicon and carbon recoil atoms, α-particles, and protons are charged particles which contribute to the detector response. It was demonstrated that although the bare SiC material can register fast neutrons directly, its detection efficiency can be increased if it is covered by an appropriate conversion layer. Comparison of the simulation results with experimental data was successfully accomplished.
Energy Technology Data Exchange (ETDEWEB)
Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT
2010-01-01
One forthcoming challenge in the area of high-performance computing is the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
McKinley, Michael Scott; Brooks, Eugene D., III; Szoke, Abraham
2003-07-01
We compare the implicit Monte Carlo (IMC) technique to the symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for test problems involving the evolution of a collisionally pumped trapping problem to its steady-state, the surface heating of a cold medium by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
Directory of Open Access Journals (Sweden)
Alhassid Y.
2014-04-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate level densities directly, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of the heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskiy, D. E.
2016-06-01
A Monte Carlo study of trilayer and spin-valve magnetic structures with giant magnetoresistance effects is carried out. The anisotropic Heisenberg model is used for description of magnetic properties of ultrathin ferromagnetic films forming these structures. The temperature and magnetic field dependences of magnetic characteristics are considered for ferromagnetic and antiferromagnetic configurations of these multilayer structures. The methodology for determination of the magnetoresistance by the Monte Carlo method is introduced; this permits us to calculate the magnetoresistance of multilayer structures for different thicknesses of the ferromagnetic films. The calculated temperature dependence of the magnetoresistance agrees very well with the experimental results measured for the Fe(0 0 1)-Cr(0 0 1) multilayer structure and CFAS-Ag-CFAS-IrMn spin-valve structure based on the half-metallic Heusler alloy Co2FeAl0.5Si0.5.
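The Monte Carlo machinery behind such studies is a Metropolis update of classical spin vectors under an anisotropic Heisenberg Hamiltonian. A minimal single-film sketch on a small periodic cubic lattice, with illustrative couplings; it omits the multilayer structure and the magnetoresistance evaluation:

```python
import math
import random

L = 4                      # lattice is L x L x L with periodic boundaries
J, DELTA = 1.0, 0.8        # exchange coupling and z-axis anisotropy (illustrative)

def random_unit_vector(rng):
    """Uniform direction on the unit sphere (normalized Gaussian trick)."""
    v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def neighbors(i, j, k):
    yield ((i + 1) % L, j, k); yield ((i - 1) % L, j, k)
    yield (i, (j + 1) % L, k); yield (i, (j - 1) % L, k)
    yield (i, j, (k + 1) % L); yield (i, j, (k - 1) % L)

def bond_energy(s1, s2):
    """Anisotropic Heisenberg bond: -J*(sx*sx' + sy*sy' + DELTA*sz*sz')."""
    return -J * (s1[0] * s2[0] + s1[1] * s2[1] + DELTA * s1[2] * s2[2])

def sweep(spins, T, rng):
    """One Metropolis sweep: propose a fresh random direction per attempt."""
    for _ in range(L ** 3):
        i, j, k = rng.randrange(L), rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j][k], random_unit_vector(rng)
        dE = sum(bond_energy(new, spins[a][b][c]) - bond_energy(old, spins[a][b][c])
                 for a, b, c in neighbors(i, j, k))
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j][k] = new

def magnetization(spins):
    m = [sum(spins[i][j][k][c] for i in range(L) for j in range(L)
             for k in range(L)) for c in range(3)]
    return math.sqrt(sum(x * x for x in m)) / L ** 3

def run(T, sweeps=200, seed=3):
    rng = random.Random(seed)
    spins = [[[(1.0, 0.0, 0.0)] * L for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        sweep(spins, T, rng)
    return magnetization(spins)

m_low, m_high = run(0.3), run(5.0)   # ordered vs. paramagnetic regime
```

A magnetoresistance calculation would run such updates for each layer of the multilayer stack, couple the layers through interlayer exchange, and relate the relative layer magnetizations to the resistance of the parallel and antiparallel configurations.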
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskii, D. E.
2015-11-01
The Monte Carlo study of three-layer and spin-valve magnetic structures with giant magnetoresistance effects has been performed with the application of the Heisenberg anisotropic model to the description of the magnetic properties of thin ferromagnetic films. The dependences of the magnetic characteristics on the temperature and external magnetic field have been obtained for the ferromagnetic and antiferromagnetic configurations of these structures. A Monte Carlo method for determining the magnetoresistance coefficient has been developed. The magnetoresistance coefficient has been calculated for three-layer and spin-valve magnetic structures at various thicknesses of ferromagnetic films. It has been shown that the calculated temperature dependence of the magnetoresistance coefficient is in good agreement with experimental data obtained for the Fe(001)/Cr(001) multilayer structure and the CFAS/Ag/CFAS/IrMn spin valve based on the Co2FeAl0.5Si0.5 (CFAS) Heusler alloy.
Energy Technology Data Exchange (ETDEWEB)
Tholomier, M.; Vicario, E.; Doghmane, N.
1987-10-01
The contribution of backscattered electrons to the Auger electron yield was studied with a multiple scattering Monte Carlo simulation. The Auger backscattering factor has been calculated in the 5 keV-60 keV energy range. The dependence of the Auger backscattering factor on the primary energy and the beam incidence angle was determined. Spatial distributions of backscattered electrons and Auger electrons are presented for a point incident beam. Correlations between these distributions are briefly investigated.
Efficient implementation of the Hellmann-Feynman theorem in a diffusion Monte Carlo calculation.
Vitiello, S A
2011-02-07
Kinetic and potential energies of systems of 4He atoms in the solid phase are computed at T = 0. Results at two densities of the liquid phase are presented as well. Calculations are performed by the multiweight extension to the diffusion Monte Carlo method that allows the application of the Hellmann-Feynman theorem in a robust and efficient way. This is a general method that can be applied in other situations of interest as well.
Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT
Energy Technology Data Exchange (ETDEWEB)
Di Salvio, A.; Bedwani, S.; Carrier, J-F. [Centre hospitalier de l' Université de Montréal (Canada); Bouchard, H. [National Physics Laboratory, Teddington (United Kingdom)
2014-08-15
Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
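The voxel-wise assignment step can be sketched as a nearest-material search in the (ED, EAN) plane. The weighting scheme and the reference tissue values below are illustrative assumptions, not the published algorithm or its calibration:

```python
# illustrative reference tissues: (electron density rel. to water, effective Z)
TISSUES = {
    "lung":    (0.26, 7.6),
    "adipose": (0.95, 6.4),
    "muscle":  (1.04, 7.6),
    "bone":    (1.70, 13.8),
}

def assign_material(ed, ean, w=0.5):
    """Pick the tissue minimizing a weighted relative distance in
    (electron density, effective atomic number) space; w balances the
    two observables (hypothetical choice)."""
    def cost(item):
        _, (ed_ref, ean_ref) = item
        return (w * abs(ed - ed_ref) / ed_ref
                + (1 - w) * abs(ean - ean_ref) / ean_ref)
    return min(TISSUES.items(), key=cost)[0]
```

The benefit over single-energy segmentation is visible for tissues that share similar Hounsfield units but differ in effective atomic number, where the extra EAN axis breaks the degeneracy.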
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and a criticality safety analysis were carried out for the interim storage of general material testing reactor (MTR) research reactor fuel and the associated transportation cask. Three major tasks were considered in these processes: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria, meeting the specified standards.
Energy Technology Data Exchange (ETDEWEB)
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied to computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
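The "out-scatter" correction the paper scrutinizes has a simple closed form: the P1 scattering moment is subtracted from the total cross section, sigma_tr = sigma_t - mu_bar * sigma_s, and D = 1/(3 * sigma_tr), where mu_bar = 2/(3A) is the average scattering cosine for elastic scattering off mass number A. A sketch with illustrative cross sections (not the BEAVRS data):

```python
def mu_bar(A):
    """Average cosine of the elastic scattering angle in the lab frame,
    2/(3A) for a target of mass number A (2/3 for hydrogen)."""
    return 2.0 / (3.0 * A)

def transport_xs(sigma_t, sigma_s, A):
    """Out-scatter transport correction: sigma_tr = sigma_t - mu_bar*sigma_s."""
    return sigma_t - mu_bar(A) * sigma_s

def diffusion_coefficient(sigma_tr):
    """D = 1 / (3 * sigma_tr); D in cm when cross sections are in cm^-1."""
    return 1.0 / (3.0 * sigma_tr)

# illustrative hydrogen-dominated medium: sigma_t = 1.0, sigma_s = 0.9 cm^-1
D = diffusion_coefficient(transport_xs(1.0, 0.9, A=1))
```

The inaccuracy the paper demonstrates arises because this correction uses the scattering moment of the group itself rather than accounting for anisotropic in-scatter from other groups, which is where the rigorous method differs.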
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S Hamed; Shavar, Arzhang
2008-04-01
This article presents a brachytherapy source, 103Pd adsorbed onto a cylindrical silver rod, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were determined experimentally and theoretically following the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters using standard thermoluminescent dosimetry methods in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model 103Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-103Pd source in water was found to be 0.678 cGy h(-1) U(-1) with an approximate uncertainty of +/-0.1%. The anisotropy function, F(r, theta), and the radial dose function, g(r), of the IRA-103Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
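In the TG-43U1 formalism these quantities factor the 2-D dose rate as D(r,theta) = S_K * Lambda * [G_L(r,theta)/G_L(r0,theta0)] * g(r) * F(r,theta), where the line-source geometry function is G_L(r,theta) = beta/(L*r*sin(theta)). A minimal sketch of G_L; the source length below is illustrative, not the IRA-103Pd geometry:

```python
import math

def geometry_factor_line(r, theta, L):
    """TG-43 line-source geometry function G_L(r, theta) in cm^-2.
    beta is the angle subtended by the active length L at the point (r, theta),
    measured from the source center with theta from the long axis."""
    z, y = r * math.cos(theta), r * math.sin(theta)
    if y < 1e-9:                                  # point on the source long axis
        return 1.0 / (r * r - L * L / 4.0)        # TG-43 on-axis limit
    beta = math.atan2(L / 2.0 - z, y) - math.atan2(-L / 2.0 - z, y)
    return beta / (L * y)

# far from a short source, G_L approaches the point-source 1/r^2 law
g_far = geometry_factor_line(10.0, math.pi / 2.0, L=0.3)
```

The radial dose function g(r) then normalizes the measured or calculated transverse-axis dose rate by this geometry factor relative to r0 = 1 cm, isolating attenuation and scatter from pure solid-angle effects.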
Hickson, Kevin J; O'Keefe, Graeme J
2014-09-01
The scalable XCAT voxelised phantom was used with the GATE Monte Carlo toolkit to investigate the effect of voxel size on dosimetry estimates of internally distributed radionuclides calculated by direct Monte Carlo simulation. A uniformly distributed Fluorine-18 source was simulated in the kidneys of the XCAT phantom, with the organ self dose (kidney ← kidney) and organ cross dose (liver ← kidney) being calculated for a number of organ and voxel sizes. Patient-specific dose factors (DFs) from a clinically acquired FDG PET/CT study have also been calculated for the kidney self dose and the liver ← kidney cross dose. Using the XCAT phantom it was found that sufficiently small voxel sizes are required to achieve accurate calculation of organ self dose. It was also shown that a voxel size of 2 mm or less is suitable for accurate calculations of organ cross dose. To compensate for insufficient voxel sampling a correction factor is proposed. This correction factor is applied to the patient-specific dose factors calculated with the native voxel size of the PET/CT study.
Diffusion coefficients for LMFBR cells calculated with MOC and Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Rooijen, W.F.G. van, E-mail: rooijen@u-fukui.ac.j [Research Institute of Nuclear Energy, University of Fukui, Bunkyo 3-9-1, Fukui-shi, Fukui-ken 910-8507 (Japan); Chiba, G., E-mail: chiba.go@jaea.go.j [Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan)
2011-01-15
The present work discusses the calculation of the diffusion coefficient of a lattice of hexagonal cells, with both 'sodium present' and 'sodium absent' conditions. Calculations are performed in the framework of lattice theory (also known as fundamental mode approximation). Unlike the classical approaches, our heterogeneous leakage model allows the calculation of diffusion coefficients under all conditions, even if planar voids are present in the lattice. Equations resulting from this model are solved using the method of characteristics (MOC). Independent confirmation of the MOC result comes from Monte Carlo calculations, in which the diffusion coefficient is obtained without any of the assumptions of lattice theory. It is shown by comparison to the Monte Carlo results that the MOC solution yields correct values of the diffusion coefficient under all conditions, even in cases where the classic calculation of the diffusion coefficient fails. This work is a first step in the development of a robust method to calculate the diffusion coefficient of lattice cells. Adoption into production codes will require more development and validation of the method.
Trail-Needs pseudopotentials in quantum Monte Carlo calculations with plane-wave/blip basis sets
Drummond, N. D.; Trail, J. R.; Needs, R. J.
2016-10-01
We report a systematic analysis of the performance of a widely used set of Dirac-Fock pseudopotentials for quantum Monte Carlo (QMC) calculations. We study each atom in the periodic table from hydrogen (Z = 1) to mercury (Z = 80), with the exception of the 4f elements (57 ≤ Z ≤ 70). We demonstrate that ghost states are a potentially serious problem when plane-wave basis sets are used in density functional theory (DFT) orbital-generation calculations, but that this problem can be almost entirely eliminated by choosing the s channel to be local in the DFT calculation; the d channel can then be chosen to be local in subsequent QMC calculations, which generally leads to more accurate results. We investigate the achievable energy variance per electron with different levels of trial wave function, and we determine appropriate plane-wave cutoff energies for DFT calculations for each pseudopotential. We demonstrate that the so-called "T-move" scheme in diffusion Monte Carlo is essential for many elements. We investigate the optimal choice of spherical integration rule for pseudopotential projectors in QMC calculations. The information reported here will prove crucial in the planning and execution of QMC projects involving beyond-first-row elements.
COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport
Hayward, Robert M.; Rahnema, Farzad
2014-06-01
Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2%/2 mm acceptance test and over 99.7% pass a 3%/3 mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
GPUMCD: a new GPU-oriented Monte Carlo dose calculation platform
Hissoiny, Sami; Ozell, Benoît; Després, Philippe
2011-01-01
Purpose: Monte Carlo methods are considered the gold standard for dosimetric computations in radiotherapy. Their execution time is however still an obstacle to the routine use of Monte Carlo packages in a clinical setting. To address this problem, GPUMCD, a completely new Monte Carlo dose calculation package for voxelized geometries, designed from the ground up for the GPU, is proposed. Methods: GPUMCD implements a coupled photon-electron Monte Carlo simulation for energies in the range 0.01 MeV to 20 MeV. An analogue simulation of photon interactions is used and a Class II condensed history method has been implemented for the simulation of electrons. A new GPU random number generator, some divergence reduction methods, as well as other optimization strategies are also described. GPUMCD was run on an NVIDIA GTX480 while single-threaded implementations of EGSnrc and DPM were run on an Intel Core i7 860. Results: Dosimetric results obtained with GPUMCD were compared to EGSnrc. In all but one test case, 98% o...
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations of hypersonic rarefied flows, carried out to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere and to obtain realistic estimates of the aerothermal loads, is presented. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure, and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.
Applying graphics processor units to Monte Carlo dose calculation in radiation therapy
Directory of Open Access Journals (Sweden)
Bakhtiari M
2010-01-01
We investigate the potential of using a graphics processor unit (GPU) for Monte Carlo (MC) based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massive parallel processing provides a significant acceleration in the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
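The PDD calculation described above can be sketched as a minimal analog photon Monte Carlo. This is an illustrative CPU version only: the geometry (a semi-infinite homogeneous slab with a pencil beam at the surface), the isotropic-scatter model, and the coefficient values are assumptions, not the paper's actual setup.

```python
import math
import random

def percent_depth_dose(mu_abs, mu_sca, depth_cm, n_bins, n_photons, seed=1):
    """Toy analog photon MC: sample free paths from the total attenuation
    coefficient, deposit all energy locally on absorption, scatter
    isotropically otherwise.  Returns PDD (% of maximum) per depth bin."""
    rng = random.Random(seed)
    mu_tot = mu_abs + mu_sca
    dz = depth_cm / n_bins
    dose = [0.0] * n_bins
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                         # start at surface, heading down
        while True:
            s = -math.log(rng.random()) / mu_tot  # exponential free path
            z += uz * s
            if z < 0.0 or z >= depth_cm:
                break                             # photon left the scoring region
            if rng.random() < mu_abs / mu_tot:
                dose[int(z / dz)] += 1.0          # absorbed: score in this bin
                break
            uz = 2.0 * rng.random() - 1.0         # isotropic scatter (uniform cos)
    peak = max(dose) or 1.0
    return [100.0 * d / peak for d in dose]

pdd = percent_depth_dose(mu_abs=0.03, mu_sca=0.17, depth_cm=20.0,
                         n_bins=40, n_photons=20000)
```

Because every history is independent, this loop is exactly the kind of workload that maps onto one-thread-per-photon GPU execution.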
Probing finite size effects in $(\lambda \Phi^{4})_4$ Monte Carlo calculations
Agodi, A
1999-01-01
The Constrained Effective Potential (CEP) is known to be equivalent to the usual Effective Potential (EP) in the infinite volume limit. We have carried out Monte Carlo calculations based on the two different definitions to obtain information on finite size effects. We also compared these calculations with those based on an Improved CEP (ICEP), which takes into account the finite size of the lattice. It turns out that ICEP actually reduces the finite size effects, which are more visible near the vanishing of the external source.
Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment
Energy Technology Data Exchange (ETDEWEB)
Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx [Instituto de Física, Universidad Nacional Autónoma de México Circuito de la Investigación Científica, Ciudad Universitaria México, D.F. 04520, México (Mexico); Vázquez-López, C. [Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN Ave. IPN 2508, Col. San Pedro Zacatenco, México 07360, DF, México (Mexico)
2015-07-23
The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
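The geometric part of such an efficiency calculation can be sketched as follows: sample decay points uniformly in the truncated cone, emit alphas isotropically, and count tracks that reach the detector disk on the base within the alpha range. All dimensions, the detector position, and the range value are invented for illustration, and intersections with the chamber wall are ignored here.

```python
import math
import random

def detection_efficiency(r_top, r_base, height, r_det, alpha_range,
                         n_samples=50000, seed=7):
    """Toy MC for a truncated conical chamber: detector disk of radius
    r_det on the base plane z = 0; top radius r_top at z = height."""
    rng = random.Random(seed)
    r_max = max(r_top, r_base)
    hits = 0
    for _ in range(n_samples):
        # rejection-sample a uniform point inside the truncated cone
        while True:
            x = rng.uniform(-r_max, r_max)
            y = rng.uniform(-r_max, r_max)
            z = rng.uniform(0.0, height)
            r_z = r_base + (r_top - r_base) * z / height
            if x * x + y * y <= r_z * r_z:
                break
        # isotropic emission direction
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if uz >= 0.0:
            continue                  # moving away from the detector plane
        t = -z / uz                   # path length to the z = 0 plane
        if t > alpha_range:
            continue                  # alpha stops before reaching the plane
        xh, yh = x + t * ux, y + t * uy
        if xh * xh + yh * yh <= r_det * r_det:
            hits += 1                 # wall intersections are ignored here
    return hits / n_samples

eff = detection_efficiency(r_top=1.5, r_base=3.5, height=4.0,
                           r_det=0.9, alpha_range=4.0)
```

The same sampling machinery, with the hit test replaced by angle and path-length tallies, yields the incidence angular distributions and path-length spectra the abstract mentions.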
Monte-Carlo calculations of light nuclei with the Reid potential
Energy Technology Data Exchange (ETDEWEB)
Lomnitz-Adler, J. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-01-01
A Monte-Carlo method is developed to calculate the binding energy and density distribution of the ³H and ⁴He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 ± 0.08 and -22.9 ± 0.5 MeV, respectively. The Coulomb interaction in ⁴He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center.
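The variational Monte Carlo idea behind such upper bounds is Metropolis sampling of |ψ|² followed by averaging the local energy. As a minimal stand-in for the nuclear problem, here is the same machinery applied to the hydrogen atom with trial wave function ψ = exp(−αr); this toy choice is mine, not the paper's, and has the convenient property that at α = 1 the local energy is exactly −0.5 Hartree.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=3):
    """Minimal variational MC for the hydrogen atom.
    Trial wave function psi = exp(-alpha*r);
    local energy E_L = -alpha^2/2 + (alpha - 1)/r.
    Returns the Metropolis estimate of <E_L> in Hartree."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum = 0.0
    for _ in range(n_steps):
        xn = x + rng.uniform(-step, step)
        yn = y + rng.uniform(-step, step)
        zn = z + rng.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / n_steps

energy = vmc_hydrogen(alpha=1.0)   # exact ground state: -0.5 Hartree
```

Minimizing the estimated energy over α is the variational step; the nuclear calculation replaces the single-particle trial function with a symmetrized product of correlation operators, but the sampling loop is structurally the same.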
SRNA: Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Ilic, R D; Stankovic, S J
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculations in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for the application of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtaine...
Monte Carlo simulation of ballistic transport in high-mobility channels
Energy Technology Data Exchange (ETDEWEB)
Sabatini, G; Marinchio, H; Palermo, C; Varani, L; Daoud, T; Teissier, R [Institut d' Electronique du Sud (CNRS UMR 5214) - Universite Montpellier II (France); Rodilla, H; Gonzalez, T; Mateos, J, E-mail: sabatini@ies.univ-montp2.f [Departamento de Fisica Aplicada - Universidad de Salamanca (Spain)
2009-11-15
By means of Monte Carlo simulations coupled with a two-dimensional Poisson solver, we directly evaluate the possibility of using high-mobility materials in ultrafast devices exploiting ballistic transport. To this purpose, we have calculated specific physical quantities such as the transit time, the transit velocity, the free flight time and the mean free path as functions of applied voltage in InAs channels with different lengths, from 2000 nm down to 50 nm. In this way the transition from diffusive to ballistic transport is carefully described. We note a high value of the mean transit velocity, with a maximum of 14×10⁵ m/s for a 50 nm-long channel, and a transit time shorter than 0.1 ps, corresponding to a cutoff frequency in the terahertz domain. The percentage of ballistic electrons and the number of scatterings as functions of distance are also reported, showing the strong influence of quasi-ballistic transport in the shorter channels.
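The "percentage of ballistic electrons" quantity can be illustrated with a drastically simplified free-flight Monte Carlo: electrons cross the channel at a fixed velocity, scattering times are exponential with mean free time τ, and an electron counts as ballistic if it crosses before its first scattering event. The numbers below are invented for illustration and are not the InAs parameters of the paper, which uses a full ensemble MC coupled to a Poisson solver.

```python
import math
import random

def ballistic_fraction_mc(length_nm, velocity_nm_ps, tau_ps,
                          n_electrons=200000, seed=11):
    """Toy estimate of the ballistic fraction: electrons cross a channel of
    given length at constant velocity; first-scattering times are sampled
    from an exponential distribution with mean tau_ps."""
    rng = random.Random(seed)
    t_transit = length_nm / velocity_nm_ps
    ballistic = 0
    for _ in range(n_electrons):
        t_first_scatter = -tau_ps * math.log(rng.random())
        if t_first_scatter > t_transit:
            ballistic += 1
    return ballistic / n_electrons

# analytic expectation for these toy numbers: exp(-t_transit/tau) = exp(-0.5)
frac = ballistic_fraction_mc(length_nm=50.0, velocity_nm_ps=1000.0, tau_ps=0.1)
```

This makes the qualitative scaling in the abstract explicit: shortening the channel (or raising the mobility, i.e. τ) increases the ballistic fraction exponentially.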
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A
2013-01-01
We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...
MONTE CARLO NEUTRINO TRANSPORT THROUGH REMNANT DISKS FROM NEUTRON STAR MERGERS
Energy Technology Data Exchange (ETDEWEB)
Richers, Sherwood; Ott, Christian D. [TAPIR, Mailcode 350-17, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125 (United States); Kasen, Daniel; Fernández, Rodrigo [Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States); O’Connor, Evan [Department of Physics, Campus Code 8202, North Carolina State University, Raleigh, NC 27695 (United States)
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10⁴⁶ erg is deposited within 45° of the symmetry axis over 300 ms when a central black hole is present. Similarly, 1.9 × 10⁴⁸ erg is deposited over 3 s when a hypermassive neutron star sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
Calculation of transport coefficients in an axisymmetric plasma
Energy Technology Data Exchange (ETDEWEB)
Shumaker, D.E.
1977-01-01
A method of calculating the transport coefficient in an axisymmetric toroidal plasma is presented. This method is useful in calculating the transport coefficients in a Tokamak plasma confinement device. The particle density and temperature are shown to be a constant on a magnetic flux surface. Transport equations are given for the total particle flux and total energy flux crossing a closed toroidal surface. Also transport equations are given for the toroidal magnetic flux. A computer code was written to calculate the transport coefficients for a three species plasma, electrons and two species of ions. This is useful for calculating the transport coefficients of a plasma which contains impurities. It was found that the particle and energy transport coefficients are increased by a large amount, and the transport coefficients for the toroidal magnetic field are reduced by a small amount.
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
Transport projects and policy evaluations are often based on transport model output, i.e. traffic flows and derived effects. However, the literature has shown that there is often a considerable difference between forecasted and observed traffic flows. This difference causes misallocation of (public...). The results highlighted that both the choice of the variable distributions and the use of different assignment algorithms have a noticeable impact on model output. They also showed that the higher the link congestion, the lower the level of final uncertainty. The third paper presented in this thesis deals...
Demmel, F.; Pokhilchuk, K.
2014-12-01
The energy resolution of an indirect time-of-flight (tof) spectrometer is determined mainly by the pulse shape of the incoming pulse and the contribution of the crystal analyser. We performed extensive Monte Carlo simulations for the indirect near-backscattering spectrometer OSIRIS utilising the McStas neutron ray-tracing package. The simulations are accompanied by analytical calculations of the energy resolution. From simulation and calculation an excellent description of the linewidth is achieved for the PG002 and PG004 energy settings. The simulations and calculations reveal that the secondary spectrometer, and hence the analyser geometry, is the dominating term for the energy resolution at zero energy transfer. The remaining differences in the lineshape can be traced to a hydrogen moderator that is not perfectly modelled. The simulations and calculations predict a superb energy resolution of less than 100 μeV at an energy transfer of 15 meV.
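Why one term "dominates" is easy to see if the two contributions are treated as independent Gaussians that add in quadrature; that independence is a standard assumption of mine here, not a statement from the abstract, and the numbers are purely illustrative.

```python
import math

def total_resolution(d_primary_ueV, d_secondary_ueV):
    """Combine two independent Gaussian resolution contributions (ueV)
    in quadrature: dE_total = sqrt(dE_primary^2 + dE_secondary^2)."""
    return math.sqrt(d_primary_ueV ** 2 + d_secondary_ueV ** 2)

# With an assumed 80 ueV analyser term and a 30 ueV pulse-shape term,
# the smaller term changes the total by only ~7%: the analyser dominates.
total = total_resolution(30.0, 80.0)
```
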
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
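Building such a voxel phantom amounts to mapping each CT number onto one of the six tissue classes with an assigned mass density. A sketch of that lookup is below; the HU thresholds and densities are assumed round numbers for illustration, not the calibration curves of the paper.

```python
def hu_to_material(hu):
    """Illustrative HU-to-material lookup for building a voxel phantom
    from CT data.  Thresholds (HU) and densities (g/cm^3) are assumed."""
    table = [
        (-1000, -950, "air",       0.0012),
        (-950,  -700, "lung",      0.26),
        (-700,  -100, "adipose",   0.95),
        (-100,   200, "muscle",    1.05),
        (200,    700, "soft bone", 1.35),
        (700,   3000, "hard bone", 1.85),
    ]
    for lo, hi, name, density in table:
        if lo <= hu < hi:
            return name, density
    raise ValueError(f"HU {hu} outside calibration range")

material, rho = hu_to_material(40)   # a typical muscle-range voxel
```

The paper's finding that different scanners' calibration curves are not clinically significant means the exact threshold values in such a table can vary modestly without changing the computed dose distributions.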
Gritzay, Olena; Kalchenko, Oleksandr; Klimova, Nataliya; Razbudey, Volodymyr; Sanzhur, Andriy; Binney, Stephen
2005-05-01
The presented results show our consecutive steps in developing a neutron source with parameters required by Boron Neutron Capture Therapy (BNCT) at the Kyiv Research Reactor (KRR). The main goal of this work was to analyze the influence of installation of different types of uranium converters close to the reactor core on neutron beam characteristics and on level of reactor safety. The general Monte Carlo radiation transport code MCNP, version 4B, has been used for these calculations.
Validation of Monte Carlo calculated surface doses for megavoltage photon beams.
Abdel-Rahman, Wamied; Seuntjens, Jan P; Verhaegen, Frank; Deblois, François; Podgorsak, Ervin B
2005-01-01
Recent work has shown that there is significant uncertainty in measuring build-up doses in megavoltage photon beams, especially at high energies. In the present investigation we used a phantom-embedded extrapolation chamber (PEEC) made of Solid Water to validate Monte Carlo (MC)-calculated doses in the dose build-up region for 6 and 18 MV x-ray beams. The study showed that the percentage depth ionizations (PDIs) obtained from measurements are higher than the percentage depth doses (PDDs) obtained with Monte Carlo techniques. To validate the MC-calculated PDDs, the design of the PEEC was incorporated into the simulations. While the MC-calculated and measured PDIs in the dose build-up region agree with one another for the 6 MV beam, a non-negligible difference is observed for the 18 MV x-ray beam. A number of experiments and theoretical studies of various possible effects that could be the source of this discrepancy were performed. The contribution of contaminating neutrons and protons to the build-up dose region in the 18 MV x-ray beam is negligible. Moreover, the MC calculations using the XCOM photon cross-section database and the NIST bremsstrahlung differential cross section do not explain the discrepancy between the MC calculations and measurement in the dose build-up region for the 18 MV beam. A simple incorporation of triplet production events into the MC dose calculation increases the calculated doses in the build-up region but does not fully account for the discrepancy between measurement and calculations for the 18 MV x-ray beam.
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...
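The grand canonical ensemble calculation described above can be sketched with the standard GCMC insertion/deletion moves, which for hard spheres reduce to an overlap check plus the usual zV/(N+1) and N/(zV) acceptance ratios. The pore size, sphere radius, and activity below are illustrative values, not those of the paper.

```python
import math
import random

def gcmc_hard_spheres(pore_radius, sphere_radius, activity,
                      n_sweeps=20000, seed=5):
    """Minimal grand canonical MC for hard spheres inside a hard spherical
    pore (centres confined to r <= R - a).  Returns the average occupancy <N>."""
    rng = random.Random(seed)
    r_c = pore_radius - sphere_radius        # accessible radius for centres
    vol = 4.0 / 3.0 * math.pi * r_c ** 3
    sigma2 = (2.0 * sphere_radius) ** 2      # squared contact distance
    parts = []

    def overlaps(p):
        return any((p[0]-q[0])**2 + (p[1]-q[1])**2 + (p[2]-q[2])**2 < sigma2
                   for q in parts)

    n_sum = 0
    for _ in range(n_sweeps):
        if rng.random() < 0.5:               # attempt an insertion
            while True:                      # uniform point in accessible sphere
                p = tuple(rng.uniform(-r_c, r_c) for _ in range(3))
                if p[0]**2 + p[1]**2 + p[2]**2 <= r_c * r_c:
                    break
            if not overlaps(p) and rng.random() < activity * vol / (len(parts) + 1):
                parts.append(p)
        elif parts:                          # attempt a deletion
            i = rng.randrange(len(parts))
            if rng.random() < len(parts) / (activity * vol):
                parts.pop(i)
        n_sum += len(parts)
    return n_sum / n_sweeps

avg_n = gcmc_hard_spheres(pore_radius=3.0, sphere_radius=0.5, activity=0.2)
```

Histogramming particle positions by radial shell during the same run gives the density profiles, and comparing ⟨N⟩/V against the bulk density at the same activity gives the partition coefficient.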
Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core
Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier
2014-06-01
As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, beside the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from IRPHEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, to compute local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tallies preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
Monte Carlo calculated CT numbers for improved heavy ion treatment planning
Directory of Open Access Journals (Sweden)
Qamhiyeh Sima
2014-03-01
Better knowledge of CT number values and their uncertainties can be applied to improve heavy ion treatment planning. We developed a novel method to calculate CT numbers for a computed tomography (CT) scanner using the Monte Carlo (MC) code BEAMnrc/EGSnrc. To generate the initial beam shape and spectra we conducted full simulations of an X-ray tube, filters and beam shapers for a Siemens Emotion CT. The simulation output files were analyzed to calculate projections of a phantom with inserts. A simple reconstruction algorithm (filtered back-projection, FBP, with a Ram-Lak filter) was applied to calculate the pixel values, which represent an attenuation coefficient normalized in such a way as to give zero for water (Hounsfield units, HU). Measured and Monte Carlo calculated CT numbers were compared. The average deviation between measured and simulated CT numbers was 4 ± 4 HU and the standard deviation σ was 49 ± 4 HU. The simulation also correctly predicted the behaviour of H-materials compared to Gammex tissue substitutes. We believe the developed approach represents a useful new tool for evaluating the effect of CT scanner and phantom parameters on CT number values.
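The FBP step can be sketched in a few lines for the special case of a centred uniform disk, whose parallel projections are known analytically as p(s) = 2·sqrt(R² − s²). This is a toy reconstruction on my own assumed geometry, not the paper's simulated scanner; it only shows the Ram-Lak filtering and back-projection mechanics.

```python
import numpy as np

def fbp_disk(n=64, radius=0.4, n_angles=90):
    """Minimal filtered back-projection (Ram-Lak filter) for a centred
    uniform disk of density 1 on the square [-1, 1]^2."""
    s = np.linspace(-1.0, 1.0, n)
    ds = s[1] - s[0]
    proj = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))
    # Ram-Lak (ramp) filter applied in the Fourier domain
    freqs = np.fft.fftfreq(n, d=ds)
    filtered = np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))
    # back-project: the disk is rotationally symmetric, so every view
    # uses the same filtered projection
    x, y = np.meshgrid(s, s)
    image = np.zeros((n, n))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        t = x * np.cos(theta) + y * np.sin(theta)
        image += np.interp(t, s, filtered)
    return image * np.pi / n_angles

img = fbp_disk()   # pixel values near the centre approximate the density 1
```

In the paper's workflow the projections come from the MC scanner simulation rather than a formula, and the reconstructed attenuation values are then rescaled so that water maps to 0 HU.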
Energy Technology Data Exchange (ETDEWEB)
Betzler, Benjamin R., E-mail: betzlerbr@ornl.gov [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Kiedrowski, Brian C., E-mail: bckiedro@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Brown, Forrest B., E-mail: fbrown@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS A143, Los Alamos, NM 87545 (United States); Martin, William R., E-mail: wrm@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)
2015-12-15
Highlights: • A transition rate matrix method for calculating α-eigenvalues is formulated. • Verification of this method is performed using multigroup infinite-medium problems. • Applications to continuous-energy media examine the slowing down of neutrons. • The effect of the α-eigenvalue spectrum on the short-time flux behavior is discussed. Abstract: The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. For this, a research Monte Carlo code called "TORTE" (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
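The multigroup infinite-medium version of the α-eigenvalue problem is small enough to write down directly: the neutron density obeys dn/dt = A·n, where the transition rate matrix A collects production (fission plus in-scattering) minus total removal, each weighted by the group speed, and the α eigenvalues are the eigenvalues of A. The two-group cross sections and speeds below are invented for illustration; they are not TORTE data, and TORTE estimates the matrix elements by Monte Carlo rather than assembling them from known cross sections.

```python
import numpy as np

# Two-group infinite-medium alpha-eigenvalue sketch (illustrative data).
v     = np.array([2.0e9, 2.2e5])       # group speeds (cm/s)
sig_t = np.array([0.30, 0.80])         # total cross section (1/cm)
nu_f  = np.array([0.02, 0.30])         # nu * Sigma_f (1/cm)
chi   = np.array([1.0, 0.0])           # fission spectrum
sig_s = np.array([[0.20, 0.00],        # sig_s[g2][g1]: scattering g1 -> g2
                  [0.06, 0.60]])

# A[g2][g1] = v[g1] * (chi[g2]*nu_f[g1] + sig_s[g2][g1] - delta_{g1,g2}*sig_t[g1])
A = (chi[:, None] * nu_f[None, :] + sig_s - np.diag(sig_t)) * v[None, :]

alphas = np.linalg.eigvals(A)
alpha0 = alphas[np.argmax(alphas.real)]  # fundamental (rightmost) eigenvalue
```

The sign of the fundamental α₀ indicates whether the flux grows or decays asymptotically, while the remaining eigenvalues govern the short-time transient of the energy spectrum discussed in the highlights.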
SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments
Energy Technology Data Exchange (ETDEWEB)
Venencia, C; Garrigo, E; Cardenas, J; Castro Pena, P [Instituto de Radioterapia - Fundacion Marie Curie, Cordoba (Argentina)
2014-06-01
Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were created using 9 fields with iPlan v4.5 (BrainLAB) and the dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation is done by pencil beam (2 mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95% = 40 Gy and D98% > 39.2 Gy; Rectum V20Gy < 50%, V32Gy < 20%, V36Gy < 10% and V40Gy < 5%; Bladder V20Gy < 40% and V40Gy < 10%; femoral heads V16Gy < 5%; penile bulb V25Gy < 3 cc; urethra and the overlap region between PTV and PRV Rectum Dmax < 42 Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences between PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, did not meet the D98% criterion (> 39.2 Gy), and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV Rectum showed a dose increase in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation of dynamic IMRT treatments can affect plan normalization. The dose increase in the critical urethra region and in the overlap with the PTV could have clinical consequences, which need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because inverse planning dose optimization uses only pencil beam.
New electron multiple scattering distributions for Monte Carlo transport simulation
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))
1994-10-01
New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use under the conditions of validity of the Moliere theory. The second distribution applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas: the introduction of a few parameters allows imposing the correct value of the first moment. Only positive, analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented and some Monte Carlo simulations are carried out as benchmarks. ((orig.))
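The paper's adjustable formulas are not reproduced in the abstract. As a minimal sketch of why positive, analytic distributions make sampling easy, the following rejection sampler draws scattering angles from a hypothetical Gaussian-core angular distribution; the functional form, width, and angular cutoff are illustrative assumptions, not the paper's expressions.

```python
import math
import random

def sample_angle(f, theta_max, f_max, rng=random.random):
    """Rejection-sample a scattering angle from a positive, analytic
    distribution f(theta) on [0, theta_max] with a known bound f_max."""
    while True:
        theta = rng() * theta_max
        if rng() * f_max <= f(theta):
            return theta

# Hypothetical small-angle form: Gaussian core plus a weak tail,
# loosely in the spirit of Moliere-type distributions (illustrative only).
g = 0.05  # characteristic width (radians), assumed value
f = lambda t: math.exp(-0.5 * (t / g) ** 2) + 1e-3 / (1.0 + (t / g) ** 4)

random.seed(1)
samples = [sample_angle(f, 0.5, f(0.0)) for _ in range(10000)]
mean_theta = sum(samples) / len(samples)  # first moment of the sampled angles
```

Because f is positive everywhere and bounded, the accept/reject loop needs no transformation or table lookup, which is the practical advantage the abstract alludes to.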
Many-body effects on graphene conductivity: Quantum Monte Carlo calculations
Boyda, D. L.; Braguta, V. V.; Katsnelson, M. I.; Ulybyshev, M. V.
2016-08-01
Optical conductivity of graphene is studied using quantum Monte Carlo calculations. We start from a Euclidean current-current correlator and extract σ (ω ) from Green-Kubo relations using the Backus-Gilbert method. Calculations were performed both for long-range interactions and taking into account only the contact term. In both cases we vary interaction strength and study its influence on optical conductivity. We compare our results with previous theoretical calculations choosing ω ≈κ , thus working in the region of the plateau in σ (ω ) which corresponds to optical conductivity of Dirac quasiparticles. No dependence of optical conductivity on interaction strength is observed unless we approach the antiferromagnetic phase transition in the case of an artificially enhanced contact term. Our results strongly support previous theoretical studies that claimed very weak regularization of graphene conductivity.
Fix, Michael K.; Cygler, Joanna; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J.; Manser, Peter
2013-05-01
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively, an edge source of electrons, a transmission source of photons and a line source of electrons and photons representing the particles from the scrapers or inserts and head scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code of the secondary particles is improved. The macro MC dose calculations are validated with corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, a general agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
Ab initio quantum Monte Carlo calculations of ground-state properties of manganese's oxides
Sharma, Vinit; Krogel, Jaron T.; Kent, P. R. C.; Reboredo, Fernando A.
One of the critical scientific challenges of contemporary research is to obtain an accurate theoretical description of the electronic properties of strongly correlated systems such as transition metal oxides and rare-earth compounds, since state-of-the-art ab initio methods based on approximate density functionals are not always sufficiently accurate. Quantum Monte Carlo (QMC) methods, which use statistical sampling to evaluate many-body wave functions, have the potential to answer this challenge. Owing to the few fundamental approximations made and the direct treatment of electron correlation, QMC methods are among the most accurate electronic structure methods available to date. We assess the accuracy of the diffusion Monte Carlo method in the case of rocksalt manganese oxide (MnO). We study the electronic properties of this strongly correlated oxide, which has been identified as a suitable candidate for many applications ranging from catalysts to electronic devices. "This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division."
Development of a software package for solid-angle calculations using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jie, E-mail: zhangjie_scu@163.com [Key Laboratory for Neutron Physics of Chinese Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China); College of Physical Science and Technology, Sichuan University, Chengdu 610064 (China); Chen, Xiulian [College of Physical Science and Technology, Sichuan University, Chengdu 610064 (China); Zhang, Changsheng [Key Laboratory for Neutron Physics of Chinese Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China); Li, Gang [College of Physical Science and Technology, Sichuan University, Chengdu 610064 (China); Xu, Jiayun, E-mail: xjy@scu.edu.cn [College of Physical Science and Technology, Sichuan University, Chengdu 610064 (China); Sun, Guangai [Key Laboratory for Neutron Physics of Chinese Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China)
2014-02-01
Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C{sup ++}, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate without difficulty the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. They show that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4. -- Highlights: • This software package (SAC) can give accurate solid-angle values. • SAC calculates solid angles using the Monte Carlo method and has a higher computation speed than Geant4. • A simple but effective variance reduction technique put forward by the authors has been applied in SAC. • A visualization function and a graphical user interface are also integrated in SAC.
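The package itself is a C++ GUI application; the underlying Monte Carlo estimate can be sketched in a few lines for the simplest case the abstract mentions, a point source facing a circular (disk) detector. Directions are sampled isotropically and the hit fraction is scaled by 4π; the disk case also has a closed form for checking. This is a minimal illustration, not the package's algorithm or its variance reduction scheme.

```python
import math
import random

def solid_angle_disk_mc(d, r, n=200000, seed=0):
    """Monte Carlo estimate of the solid angle subtended by a disk of
    radius r, coaxial with the z-axis at distance d from a point source
    at the origin: sample isotropic directions, count hits on the disk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = 2.0 * rng.random() - 1.0      # cos(theta), uniform on [-1, 1]
        if mu <= 0.0:
            continue                        # pointing away from the disk
        # Ray crosses the plane z = d at radius rho = d * tan(theta).
        rho = d * math.sqrt(1.0 - mu * mu) / mu
        if rho <= r:
            hits += 1
    return 4.0 * math.pi * hits / n

def solid_angle_disk_exact(d, r):
    """Closed-form solid angle of a coaxial disk seen from a point."""
    return 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))
```

For d = 10, r = 5 the analytic value is about 0.663 sr, and the plain estimator above converges to it at the usual 1/sqrt(n) rate; the point of the paper's variance reduction is to accelerate exactly this convergence for less symmetric detector shapes.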
Load balancing in highly parallel processing of Monte Carlo code for particle transport
Energy Technology Data Exchange (ETDEWEB)
Higuchi, Kenji; Takemiya, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Kawasaki, Takuji [Fuji Research Institute Corporation, Tokyo (Japan)
2001-01-01
In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code is easily parallelized this way, it is necessary, and in practice difficult, to optimize the load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, on the test bed used for the performance evaluation, the speedup ratio with 128 processors remains at only about one hundred. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that static load balancing has difficulty attaining high performance, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles to minimize the sum of the computational and communication costs overcomes the difficulty, resulting in roughly a fifteen percent reduction in execution time. (author)
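The dynamic idea can be sketched as follows: after each batch, measure each processor's histories-per-second and reassign particle counts in proportion, so expected compute times equalize. This toy omits the communication-cost term that the paper's method folds into the optimization, so it is an illustrative simplification, not the reported algorithm.

```python
def rebalance(history_counts, measured_times):
    """Reassign particle histories so each processor's expected compute
    time is equal: new share proportional to measured histories/second.
    Toy dynamic load balancing; the communication cost is ignored here."""
    total = sum(history_counts)
    rates = [n / t for n, t in zip(history_counts, measured_times)]
    total_rate = sum(rates)
    shares = [int(total * r / total_rate) for r in rates]
    shares[0] += total - sum(shares)  # fix integer rounding so totals match
    return shares

# Example: processor 2 ran 4x slower than processors 0 and 3 on the last
# batch, so it receives correspondingly fewer histories in the next one.
new_shares = rebalance([1000, 1000, 1000, 1000], [1.0, 2.0, 4.0, 1.0])
```

Repeating this after every batch tracks changing per-processor speeds, which static balancing cannot do for problems (like deep-penetration neutron transport) where history cost varies strongly.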
Measurement and Monte Carlo Calculation of Waste Drum Filled With Radioactive Aqueous Solution
Institute of Scientific and Technical Information of China (English)
XU; Li-jun; ZHANG; Wei-dong; YE; Hong-sheng; LIN; Min; CHEN; Xi-lin; GUO; Xiao-qing
2012-01-01
Theoretically, the best calibrating source for a gamma scan system (SGS) is a waste drum filled with a uniform distribution of medium and radioactive nuclides. In reality, however, waste drums are usually full of solid material, which is difficult to prepare in a completely uniformly distributed state. To reduce the measurement uncertainty of the radioactivity of waste drums prepared using the shell-source method, a waste drum filled with radioactive aqueous solution was prepared. Its radioactivity was measured by an SGS device and calculated using the Monte Carlo method to verify the exact geometric model, which
Exchange interactions and Tc in rhenium-doped silicon: DFT, DFT + U and Monte Carlo calculations.
Wierzbowska, Małgorzata
2012-03-28
Interactions between rhenium impurities in silicon are investigated by means of density functional theory (DFT) and the DFT + U scheme. All couplings between impurities are ferromagnetic, except the Re-Re dimers, which in the DFT method are nonmagnetic due to the formation of a chemical bond supported by substantial relaxation of the geometry. The critical temperature is calculated by means of classical Monte Carlo (MC) simulations with the Heisenberg Hamiltonian. The uniform ferromagnetic phase is obtained with the DFT exchange interactions at room temperature for an impurity concentration of 7%. With the DFT + U exchange interactions, the ferromagnetic clusters form above room temperature in MC samples containing only 3% Re.
Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
2016-01-01
Concentrating on zero temperature Quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one- and two-body correlation functions. We introduce new effective procedures, such as splitting the potential and wavefunction into long- and short-range functions to simplify the method, and we discuss how to treat backflow wavefunctions. We then explicitly test the accuracy of our method in correcting finite size errors on example hydrogen and helium many-body systems and show that the finite size bias can be drastically reduced even for small systems.
Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie
2012-09-01
A thermal-to-fusion neutron convertor has been studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate when applied to this multi-step reaction problem. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate such coupled problems, from neutron absorption, to charged particle ionization and secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to distinctly improve the calculation speed by making the deuteron/triton-induced secondary products play a major role. Nuclear data are handled from ENDF or TENDL, and stopping powers from SRIM, which better describes low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source is employed, which has been deeply studied and includes deuteron transport and secondary neutron generation. Various parameters, including the fusion neutron angular distribution, the average neutron energy at different emission directions, and the differential and integral energy distributions, are calculated with our tool and with a traditional deterministic method as reference. Finally, we present the calculation results for the convertor obtained with RSMC, including the conversion ratio of 1 mm 6LiD under a typical thermal neutron (Maxwell spectrum) incidence, and the fusion neutron spectrum, which will be used for our experiment.
Lutsyshyn, Y.; Halley, J. W.
2011-01-01
We present the results of diffusion Monte Carlo calculations of the elastic transmission of a low-energy beam of helium atoms through a suspended slab of superfluid helium. These calculations represent a significant improvement on variational Monte Carlo methods which were previously used to study this problem. The results are consistent with the existence of a condensate-mediated transmission mechanism, which would result in very fast transmission of pulses through a slab.
Monte Carlo Calculations of Dose to Medium and Dose to Water for Carbon Ion Beams in Various Media
DEFF Research Database (Denmark)
Herrmann, Rochus; Petersen, Jørgen B.B.; Jäkel, Oliver
. The dose to medium (Dm) may however differ from Dw, due to the different particle spectrum and stopping power found therein. Monte Carlo particle transport codes are capable of directly calculating dose to medium (Dm), as was for instance recently investigated by Paganetti 2009 for various proton...... treatment plans. Here, we quantify the effect of dose to water vs. dose to medium for a series of typical target materials found in medical physics. 2 Material and Methods The Monte Carlo code FLUKA [Battistioni et al. 2007] is used to simulate the particle fluence spectrum in a series of target...... the PSTAR and ASTAR stopping power routines available at NIST1 and MSTAR2 provided by H. Paul et al. 3 Results For a pristine carbon ion beam we encountered a maximum deviation between Dw and Dm of up to 8% for bone. In addition we investigate spread-out Bragg peak configurations, which dilutes the effect...
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
An optimized initialization algorithm to ensure accuracy in quantum Monte Carlo calculations.
Fisher, Daniel R; Kent, David R; Feldmann, Michael T; Goddard, William A
2008-11-15
Quantum Monte Carlo (QMC) calculations require the generation of random electronic configurations with respect to a desired probability density, usually the square of the magnitude of the wavefunction. In most cases, the Metropolis algorithm is used to generate a sequence of configurations in a Markov chain. This method has an inherent equilibration phase, during which the configurations are not representative of the desired density and must be discarded. If statistics are gathered before the walkers have equilibrated, contamination by nonequilibrated configurations can greatly reduce the accuracy of the results. Because separate Markov chains must be equilibrated for the walkers on each processor, the use of a long equilibration phase has a profoundly detrimental effect on the efficiency of large parallel calculations. The stratified atomic walker initialization (STRAW) shortens the equilibration phase of QMC calculations by generating statistically independent electronic configurations in regions of high probability density. This ensures the accuracy of calculations by avoiding contamination by nonequilibrated configurations. Shortening the length of the equilibration phase also results in significant improvements in the efficiency of parallel calculations, which reduces the total computational run time. For example, using STRAW rather than a standard initialization method in 512 processor calculations reduces the amount of time needed to calculate the energy expectation value of a trial function for a molecule of the energetic material RDX to within 0.01 au by 33%.
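The STRAW method itself is not reproduced in the abstract. As a sketch of the equilibration problem it describes, the following minimal Metropolis sampler draws electron positions from |psi|^2 for the hydrogen 1s state (psi ~ exp(-r) in atomic units) and discards a burn-in phase before gathering statistics; the step size, starting point, and burn-in length are illustrative choices.

```python
import math
import random

def metropolis_psi2(n_steps, burn_in, step=1.0, seed=0):
    """Metropolis sampling of |psi|^2 for the hydrogen 1s state,
    psi ~ exp(-r) (atomic units). Configurations from the burn-in
    (equilibration) phase are discarded; the mean of r over the
    remaining samples should approach the exact <r> = 1.5 a0."""
    rng = random.Random(seed)
    x, y, z = 1.0, 1.0, 1.0          # arbitrary (non-equilibrated) start
    samples = []
    for i in range(n_steps):
        xn = x + step * (rng.random() - 0.5)
        yn = y + step * (rng.random() - 0.5)
        zn = z + step * (rng.random() - 0.5)
        r_old = math.sqrt(x * x + y * y + z * z)
        r_new = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Acceptance ratio |psi_new|^2 / |psi_old|^2 = exp(-2 (r_new - r_old))
        if rng.random() < math.exp(-2.0 * (r_new - r_old)):
            x, y, z = xn, yn, zn
        if i >= burn_in:
            samples.append(math.sqrt(x * x + y * y + z * z))
    return sum(samples) / len(samples)
```

Gathering statistics from step 0 instead of after `burn_in` would bias the estimate toward the arbitrary starting configuration, which is precisely the contamination STRAW is designed to avoid on many processors at once.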
Monte Carlo model of neutral-particle transport in diverted plasmas
Energy Technology Data Exchange (ETDEWEB)
Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.
1981-11-01
The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
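The abstract names splitting with Russian roulette among the variance reduction techniques. A minimal weight-window-style sketch of both operations follows; the thresholds and survival probability are hypothetical choices, not values from the code.

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, survival=0.5,
                       rng=random.random):
    """Weight-window style variance reduction: low-weight particles play
    Russian roulette (killed with probability 1 - survival, survivors'
    weight divided by survival); high-weight particles are split into
    equal-weight copies. Returns the list of outgoing weights (possibly
    empty). Expected total weight is conserved in both branches."""
    if weight < w_low:
        if rng() < survival:
            return [weight / survival]
        return []
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]
```

Roulette spends less time on unimportant histories while splitting invests more samples in important ones; because the expected weight is unchanged, the tally stays unbiased and only its variance is affected.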
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety,, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by MC2 appears only as a final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the degree of propagation of the stochastic error from MC1. The brute force technique is a straightforward method to estimate the true uncertainty; however, it is a costly way to obtain reliable results. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty, but it has limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method increases applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
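The brute force technique mentioned above can be illustrated with a toy surrogate: repeatedly perturb the step-1 result within its own statistical error, rerun a cheap stand-in for step 2, and take the spread of the outputs as the total uncertainty. The linear "step 2" function below is purely illustrative; in the real problem each rerun is a full MC2 transport calculation, which is what makes the brute force approach costly.

```python
import math
import random

def brute_force_two_step(mc1_mean, mc1_sigma, mc2_of, n=4000, seed=1):
    """Brute-force estimate of the total stochastic uncertainty of a
    two-step calculation: resample the step-1 result from its own error
    distribution and push each sample through step 2 (here a cheap
    surrogate function), then report mean and standard deviation."""
    rng = random.Random(seed)
    results = [mc2_of(rng.gauss(mc1_mean, mc1_sigma)) for _ in range(n)]
    m = sum(results) / n
    var = sum((r - m) ** 2 for r in results) / (n - 1)
    return m, math.sqrt(var)
```

For a linear step 2, e.g. doubling the input, an input uncertainty of 0.05 propagates to an output uncertainty of 0.10, which the resampling recovers; the paper's contribution is obtaining this propagated term without the n reruns.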
Energy Technology Data Exchange (ETDEWEB)
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
Evaluation of atomic electron binding energies for Monte Carlo particle transport
Pia, Maria Grazia; Batic, Matej; Begalli, Marcia; Kim, Chan Hyeong; Quintieri, Lina; Saracco, Paolo
2011-01-01
A survey of atomic binding energies used by general purpose Monte Carlo systems is reported. Various compilations of these parameters have been evaluated; their accuracy is estimated with respect to experimental data. Their effects on physics quantities relevant to Monte Carlo particle transport are highlighted: X-ray fluorescence emission, electron and proton ionization cross sections, and Doppler broadening in Compton scattering. The effects due to different binding energies are quantified with respect to experimental data. The results of the analysis provide quantitative ground for the selection of binding energies to optimize the accuracy of Monte Carlo simulation in experimental use cases. Recommendations on software design dealing with these parameters and on the improvement of data libraries for Monte Carlo simulation are discussed.
DEFF Research Database (Denmark)
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael
2015-01-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data...
Spread-out Bragg peak and monitor units calculation with the Monte Carlo code MCNPX.
Hérault, J; Iborra, N; Serrano, B; Chauvel, P
2007-02-01
The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the protontherapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation.
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate the most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
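The repeated-draw idea can be sketched as follows: draw candidate distribution parameters at random over wide ranges, score each draw by the likelihood of the short observed interval series, and return the draws ranked by likelihood. A lognormal recurrence model is used here as one common choice; the parameter ranges and the five-interval series are hypothetical, not the paper's data.

```python
import math
import random

def rank_lognormal_params(intervals, n_draws=20000, seed=0):
    """Monte Carlo scan of lognormal recurrence parameters (mu, sigma):
    draw parameters uniformly over wide ranges, score each draw by the
    log-likelihood of the observed interval series, and return all draws
    ranked from most to least likely."""
    rng = random.Random(seed)

    def loglike(mu, sigma):
        s = 0.0
        for t in intervals:
            z = (math.log(t) - mu) / sigma
            s += -math.log(t * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
        return s

    draws = []
    for _ in range(n_draws):
        mu = rng.uniform(math.log(10.0), math.log(1000.0))  # assumed range
        sigma = rng.uniform(0.05, 1.5)                       # assumed range
        draws.append((loglike(mu, sigma), mu, sigma))
    draws.sort(reverse=True)
    return draws

# Hypothetical short paleoseismic record (interval lengths in years):
ranked = rank_lognormal_params([120, 150, 90, 200, 110])
```

The full ranked list, not just the best draw, is what feeds the uncertainty assessment: every draw whose likelihood is within a chosen tolerance of the maximum defines the plausible parameter region for hazard calculations.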
Noblet, C.; Chiavassa, S.; Smekens, F.; Sarrut, D.; Passal, V.; Suhard, J.; Lisbona, A.; Paris, F.; Delpon, G.
2016-05-01
In preclinical studies, the absorbed dose calculation accuracy in small animals is fundamental to reliably investigate and understand observed biological effects. This work investigated the use of the split exponential track length estimator (seTLE), a new kerma-based Monte Carlo dose calculation method for preclinical radiotherapy using a small animal precision micro irradiator, the X-RAD 225Cx. Monte Carlo modelling of the irradiator with GATE/GEANT4 was extensively evaluated by comparing measurements and simulations for half-value layer, percent depth dose, off-axis profiles and output factors in water and water-equivalent material for seven circular fields, from 20 mm down to 1 mm in diameter. Simulated and measured dose distributions in cylinders of water obtained for a 360° arc were also compared using dose, distance-to-agreement and gamma-index maps. Simulations and measurements agreed within 3% for all static beam configurations, with uncertainties estimated at 1% for the simulation and 3% for the measurements. Distance-to-agreement accuracy was better than 0.14 mm. For the arc irradiations, gamma-index maps of 2D dose distributions showed that the success rate was higher than 98%, except for the 0.1 cm collimator (92%). Using the seTLE method, MC simulations compute 3D dose distributions within minutes for realistic beam configurations with a clinically acceptable accuracy for beam diameters as small as 1 mm.
Yang, Bo; Qiu, Rui; Li, JunLi; Lu, Wei; Wu, Zhen; Li, Chunyan
2017-02-01
When a strong laser beam irradiates a solid target, a hot plasma is produced and high-energy electrons are usually generated (the so-called "hot electrons"). These energetic electrons subsequently generate hard X-rays in the solid target through the Bremsstrahlung process. To date, only limited studies have been conducted on this laser-induced radiological protection issue. In this study, extensive literature reviews on the physics and properties of hot electrons were conducted. On the basis of this information, the photon dose generated by the interaction between hot electrons and a solid target was simulated with the Monte Carlo code FLUKA. With some reasonable assumptions, the calculated dose can be regarded as the upper boundary of the experimental results over laser intensities ranging from 10^19 to 10^21 W/cm^2. Furthermore, an equation to estimate the photon dose generated from ultraintense laser-solid interactions based on the normalized laser intensity is derived. The shielding effects of common materials including concrete and lead were also studied for the laser-driven X-ray source. The dose transmission curves and tenth-value layers (TVLs) in concrete and lead were calculated through Monte Carlo simulations. These results could be used to perform a preliminary and fast radiation safety assessment for the X-rays generated from ultraintense laser-solid interactions.
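The study's TVL values themselves are not reproduced here, but the way a tenth-value layer translates into shield design can be sketched: each TVL of material reduces the dose by a factor of 10, so transmission follows T = 10^(-t/TVL). The 4 cm TVL in the example is a hypothetical placeholder, not a value from the study.

```python
import math

def transmission(thickness, tvl):
    """Broad-beam transmission through a shield: each tenth-value layer
    reduces the dose by 10x, so T = 10**(-thickness / tvl)."""
    return 10.0 ** (-thickness / tvl)

def required_thickness(attenuation_factor, tvl):
    """Shield thickness needed to reduce the dose by a given factor."""
    return tvl * math.log10(attenuation_factor)

# Example with an assumed TVL of 4 cm: a 1000x dose reduction needs
# three tenth-value layers, i.e. 12 cm of shielding.
t_needed = required_thickness(1000.0, 4.0)
```

This simple exponential model is what the Monte Carlo transmission curves refine: the simulations capture spectrum hardening and scatter buildup, which make the effective TVL depend on depth.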
Energy Technology Data Exchange (ETDEWEB)
Biondo, Elliott D [ORNL; Ibrahim, Ahmad M [ORNL; Mosher, Scott W [ORNL; Grove, Robert E [ORNL
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Diffusion Monte Carlo ab initio calculations to study wetting properties of graphene
Wu, Yanbin; Zheng, Huihuo; Wagner, Lucas; Aluru, N. R.
2013-11-01
For applications of graphene in water, including for example desalination and DNA sequencing, it is critical to understand the wetting properties of graphene. In this work, we investigate these wetting properties using data from highly accurate diffusion quantum Monte Carlo (DMC) calculations, which treat electron correlation explicitly. Our DMC data show a strong graphene-water interaction, indicating that the graphene surface is more hydrophilic than previously believed. This has recently been confirmed by experiments [Li et al. Nat. Mater. 2013, doi:10.1038/nmat3709]. The unusually strong interaction can be attributed to weak bonding formed between graphene and water. Besides its inadequate description of dispersion interactions, as commonly reported in the literature, density functional theory (DFT) fails to describe the correct charge transfer, which leads to an underestimate of the graphene-water binding energy. Our DMC calculations can provide insight to experimentalists seeking to understand water-graphene interfaces and to theorists improving DFT for weakly bound systems.
Françoise Benz
2006-01-01
2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4. The use of Monte Carlo radiation transport codes in radiation physics and dosimetry. F. Salvat Gavalda, Univ. de Barcelona; A. Ferrari, CERN-AB; M. Silari, CERN-SC. Lecture 1. Transport and interaction of electromagnetic radiation. F. Salvat Gavalda, Univ. de Barcelona. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...
Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun
2012-05-01
Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can only be achieved with a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. The thorough validation includes both small fields, closely related to intensity modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data, within a maximum error of about 2%. Indeed, considering the experimental errors, the PMCEPT results can provide a gold standard of dose distributions for radiotherapy. Computation was also much faster than measurement, although it remains a bottleneck for direct application to the daily routine treatment planning procedure.
Energy Technology Data Exchange (ETDEWEB)
Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)
2012-05-15
This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and multi term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n{sub 0} are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise; first, for certain regions of E/n{sub 0} the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H{sub 2} under the influence of electric field, the spatially dependent positron transport properties such as number of positrons, average energy and velocity and spatially resolved rate for Ps formation are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and extreme skewing of the spatial profile of positron swarm are shown to play a central role in understanding the phenomena.
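The flux/bulk distinction central to the record above can be illustrated with a deliberately crude toy model: the flux drift velocity is the mean velocity of the surviving swarm, while the bulk drift velocity is the time derivative of the swarm's centre of mass, and the two separate when particle loss correlates with velocity. All parameters below are invented, and velocity-proportional removal is only a stand-in for the energy-dependent Ps formation of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy swarm: each positron keeps a fixed velocity; "Ps formation" removes
# fast positrons preferentially (hypothetical removal rate k*v per unit time).
n, dt, k = 200_000, 0.01, 0.5
v = rng.exponential(1.0, n)          # velocity of each positron
x = np.zeros(n)                      # position of each positron
alive = np.ones(n, dtype=bool)

mean_x, flux_v = [], []
for step in range(300):
    x[alive] += v[alive] * dt
    removed = alive & (rng.random(n) < k * v * dt)  # loss correlated with v
    alive &= ~removed
    mean_x.append(x[alive].mean())
    flux_v.append(v[alive].mean())

# Flux drift velocity: mean velocity of the surviving swarm.
flux_velocity = flux_v[-1]
# Bulk drift velocity: rate of change of the swarm's centre of mass
# (averaged over the last 50 steps).
bulk_velocity = (mean_x[-1] - mean_x[-51]) / (50 * dt)
```

Because fast positrons are both farthest ahead and preferentially removed, the centre of mass advances more slowly than the survivors' mean velocity, so the bulk component falls below the flux component, mirroring the qualitative effect discussed above.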
Monte Carlo calculations of the depth-dose distribution in skin contaminated by hot particles
Energy Technology Data Exchange (ETDEWEB)
Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1991-01-01
Accurate computer programs were developed in order to calculate the spatial distribution of absorbed radiation doses in the skin near high-activity particles (''hot particles''). With a view to ascertaining the reliability of the codes, the transport of beta particles was simulated in a complex configuration used for dosimetric measurements: spherical {sup 60}Co sources of 10-1000 {mu}m diameter fastened to an aluminium support with a tissue-equivalent adhesive, overlaid with a 10 {mu}m thick aluminium foil. Behind it, an infinite polystyrene medium including an extrapolation chamber was assumed. The exact energy spectrum of the beta emission was sampled. Production and transport of secondary knock-on electrons were also simulated. Energy depositions in polystyrene were calculated with high spatial resolution. Finally, depth-dose distributions were calculated for hot particles placed on the skin. The calculations will be continued for other radionuclides and for a configuration suited to TLD measurements. (author).
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
The denoising of Monte Carlo dose distributions using convolution superposition calculations
Energy Technology Data Exchange (ETDEWEB)
El Naqa, I [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States); Cui, J [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States); Lindsay, P [MD Anderson, Houston, TX (United States); Olivera, G [Tomotherapy Inc., Madison, WI (United States); Deasy, J O [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States)
2007-09-07
Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods (wavelets or contourlets) were used for denoising the residual. The iterations are initialized with the CS data. In the second approach, we used a frequency-splitting technique based on quadrature filtering to combine low-frequency components derived from MC simulations with high-frequency components derived from CS calculations. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably incorporate scatter more accurately, while high-frequency details are taken from the CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head-and-neck cases. The MC dose distributions were calculated with the open-source dose planning method MC code at varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
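The frequency-splitting idea described above can be sketched in 1D. The paper designs 3D Butterworth quadrature filter pairs; the sketch below instead uses a single 1D Butterworth magnitude response and its complement (H_low + H_high = 1), an assumption of convenience rather than the authors' actual filter design, chosen because the complementary pair reconstructs a signal exactly when both inputs coincide:

```python
import numpy as np

def butterworth_lowpass(freqs, cutoff, order=4):
    """Butterworth-style low-pass magnitude response; the complementary
    high-pass is taken as 1 - H_low, so the pair sums to unity."""
    return 1.0 / (1.0 + (np.abs(freqs) / cutoff) ** (2 * order))

def frequency_split_combine(low_src, high_src, cutoff=0.05, order=4):
    """Take low frequencies from one signal (e.g. MC) and high
    frequencies from another (e.g. CS), then invert the transform."""
    freqs = np.fft.fftfreq(len(low_src))
    h_low = butterworth_lowpass(freqs, cutoff, order)
    combined = np.fft.ifft(np.fft.fft(low_src) * h_low
                           + np.fft.fft(high_src) * (1.0 - h_low))
    return combined.real

# Sanity check: splitting a signal against itself reconstructs it exactly,
# because the two filter responses sum to one at every frequency.
x = np.cos(np.linspace(0, 20, 256)) + 0.1 * np.sin(np.linspace(0, 300, 256))
y = frequency_split_combine(x, x)
```

In the denoising application, `low_src` would be the noisy MC profile and `high_src` the smooth CS profile, with the cutoff placed below the noise band.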
Development of a software package for solid-angle calculations using the Monte Carlo method
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique is integrated. The package, developed with the Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface with a visualization function built on OpenGL. One advantage of the proposed package is that it can calculate, without any difficulty, the solid angle subtended at a point, circular or cylindrical source by a detector with any of several geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism). The results obtained from the proposed software package were compared with those obtained in previous studies and those calculated with Geant4. The comparison shows that the proposed package produces accurate solid-angle values with a greater computation speed than Geant4.
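For the simplest configuration such a package handles, a point source and an on-axis disk-shaped detector face, the Monte Carlo estimate reduces to sampling isotropic directions and counting hits. A hedged sketch (the disk geometry and all numbers are illustrative; the package described above covers far more shapes, distributed sources, and adds variance reduction):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_solid_angle_disk(d, radius, n=400_000):
    """Estimate the solid angle subtended by a disk (given radius, on-axis
    at distance d) at a point source: omega ~= 4*pi * hits / n."""
    # Isotropic directions: cos(theta) uniform on [-1, 1]; the azimuth is
    # irrelevant for an on-axis disk, so it need not be sampled.
    mu = rng.uniform(-1.0, 1.0, n)                 # direction cosine along axis
    # A forward ray (mu > 0) crosses the disk plane z = d at radial
    # distance d * tan(theta) = d * sqrt(1 - mu^2) / mu.
    r_hit = d * np.sqrt(1.0 - mu**2) / np.maximum(mu, 1e-12)
    hits = (mu > 0) & (r_hit <= radius)
    return 4.0 * np.pi * hits.mean()

d, radius = 10.0, 5.0
omega_mc = mc_solid_angle_disk(d, radius)
# Closed-form solid angle of an on-axis disk, for comparison.
omega_exact = 2.0 * np.pi * (1.0 - d / np.hypot(d, radius))
```

The statistical error shrinks as 1/sqrt(n), which is exactly why the variance reduction technique mentioned in the abstract matters for small solid angles.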
Nonlinear acceleration of SN transport calculations
Energy Technology Data Exchange (ETDEWEB)
Fichtl, Erin D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Calef, Matthew T [Los Alamos National Laboratory
2010-12-20
The use of nonlinear iterative methods, Jacobian-Free Newton-Krylov (JFNK) in particular, for solving eigenvalue problems in transport applications has recently become an active subject of research. While JFNK has been shown to be effective for k-eigenvalue problems, there are a number of input parameters that impact computational efficiency, making it difficult to implement efficiently in a production code using a single set of default parameters. We show that different selections for the forcing parameter in particular can lead to large variations in the amount of computational work for a given problem. In contrast, we present a nonlinear subspace method that sits outside and effectively accelerates nonlinear iterations of a given form and requires only a single input parameter, the subspace size. It is shown to consistently and significantly reduce the amount of computational work when applied to fixed-point iteration, and this combination of methods is shown to be more efficient than JFNK for our application.
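The fixed-point iteration that JFNK and the subspace method accelerate is, for k-eigenvalue problems, the classic power iteration on the fission source. A sketch on a hypothetical two-group matrix problem (the matrices M and F and every number in them are invented for illustration; real transport codes apply the same update to operators, not small matrices):

```python
import numpy as np

# Hypothetical two-group infinite-medium operators:
# M collects losses (removal, down-scatter), F fission production.
M = np.array([[0.10, 0.00],
              [-0.05, 0.12]])
F = np.array([[0.008, 0.15],
              [0.000, 0.00]])   # fission neutrons appear in group 1

def power_iteration(M, F, tol=1e-10, max_it=10_000):
    """Plain fixed-point (power) iteration for M*phi = (1/k) F*phi --
    the slowly converging iteration that JFNK or subspace acceleration
    is meant to speed up."""
    phi = np.ones(M.shape[0])
    k = 1.0
    Minv = np.linalg.inv(M)
    for it in range(max_it):
        src = F @ phi                     # current fission source
        phi_new = Minv @ (src / k)        # flux update
        k_new = k * (F @ phi_new).sum() / src.sum()   # eigenvalue update
        if abs(k_new - k) < tol:
            return k_new, phi_new / np.linalg.norm(phi_new), it
        k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    return k, phi, max_it

k_eff, phi, iters = power_iteration(M, F)
# Reference: dominant eigenvalue of M^{-1} F.
k_ref = max(abs(np.linalg.eigvals(np.linalg.inv(M) @ F)))
```

On realistic problems the convergence rate is governed by the dominance ratio, and each outer iteration is expensive, which is the motivation for wrapping an acceleration scheme around exactly this update.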
Fernandes, A C; Gonçalves, I C; Santos, J; Cardoso, J; Santos, L; Ferro Carvalho, A; Marques, J G; Kling, A; Ramalho, A J G; Osvay, M
2006-01-01
This work presents an extensive study of Monte Carlo radiation transport simulation and thermoluminescent (TL) dosimetry for characterising the mixed radiation fields (neutrons and photons) occurring in nuclear reactors. The feasibility of these methods is investigated for radiation fields at various locations of the Portuguese Research Reactor (RPI). The performance of the approaches developed in this work is compared with dosimetric techniques already in use at RPI. The Monte Carlo MCNP-4C code was used for detailed modelling of the reactor core, the fast neutron beam and the thermal column of RPI. Simulations using these models reproduce the energy and spatial distributions of the neutron field very well (agreement better than 80%). In the case of the photon field, the agreement improves with decreasing intensity of the component related to fission and activation products. (7)LiF:Mg,Ti, (7)LiF:Mg,Cu,P and Al(2)O(3):Mg,Y TL detectors (TLDs) with low neutron sensitivity are able to determine photon dose and dose profiles with high spatial resolution. On the other hand, (nat)LiF:Mg,Ti TLDs with increased neutron sensitivity show a remarkable loss of sensitivity and a high supralinearity in high-intensity fields, hampering their application at nuclear reactors.
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for calculating both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluations of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of the target materials, the cyclotron structure, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and by comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
Monte Carlo path sampling approach to modeling aeolian sediment transport
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains feeds back on itself and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders-of-magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single-grain models are mathematically simple but computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
Boltzmann transport calculation of collinear spin transport on short timescales
Nenno, Dennis M.; Kaltenborn, Steffen; Schneider, Hans Christian
2016-09-01
A spin-dependent Boltzmann transport equation is used to describe charge and spin dynamics resulting from the excitation of hot electrons in a ferromagnet/normal metal heterostructure. As the microscopic Boltzmann equation works with k-dependent distribution functions, it can describe far-from-equilibrium excitations, which are outside the scope of drift-diffusion theories. We study different scenarios for spin-dependent carrier injection into a nonmagnetic metal using an effectively two-dimensional phase space. While the charge signal is robust for various excitation schemes, the shape of the resulting spin current/density depends critically on the interplay between transport and scattering, and on the energetic distribution of the injected carriers. Our results imply that the energy dependence of the injected hot electrons has a decisive effect on the spin dynamics.
Doucet, R.; Olivares, M.; DeBlois, F.; Podgorsak, E. B.; Kawrakow, I.; Seuntjens, J.
2003-08-01
Calculations of dose distributions in heterogeneous phantoms in clinical electron beams, carried out using the fast voxel Monte Carlo (MC) system XVMC and the conventional MC code EGSnrc, were compared with measurements. Irradiations were performed using the 9 MeV and 15 MeV beams from a Varian Clinac-18 accelerator with a 10 × 10 cm2 applicator and an SSD of 100 cm. Depth doses were measured with thermoluminescent dosimetry techniques (TLD 700) in phantoms consisting of slabs of Solid WaterTM (SW) and bone and slabs of SW and lung tissue-equivalent materials. Lateral profiles in water were measured using an electron diode at different depths behind one and two immersed aluminium rods. The accelerator was modelled using the EGS4/BEAM system and optimized phase-space files were used as input to the EGSnrc and the XVMC calculations. Also, for the XVMC, an experiment-based beam model was used. All measurements were corrected by the EGSnrc-calculated stopping power ratios. Overall, there is excellent agreement between the corrected experimental and the two MC dose distributions. Small remaining discrepancies may be due to the non-equivalence between physical and simulated tissue-equivalent materials and to detector fluence perturbation effect correction factors that were calculated for the 9 MeV beam at selected depths in the heterogeneous phantoms.
Deep-penetration calculation for the ISIS target station shielding using the MARS Monte Carlo code
Nunomiya, T; Nakamura, T; Nakao, N
2002-01-01
A calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code, for comparison with the shielding experiment performed in 1998 at the ISIS spallation neutron source facility. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation with good statistics, the following three techniques were used. First, the geometry of the bulk shield was divided three-dimensionally into several layers of about 50 cm thickness, and a step-by-step calculation was carried out to multiply the number of penetrating particles at the boundaries between the layers. Second, the source particles in the layers were divided into two parts to maintain the statistical balance of the spatial flux distribution. Third, only high-energy particles above 20 MeV were trans...
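The statistical payoff of the layer-by-layer technique can be illustrated in a drastically simplified setting: a purely absorbing 1D shield with no scattering or angular redistribution, and invented cross-sections. Analog simulation of the whole stack would need on the order of e⁶ histories per transmitted count, whereas estimating each layer separately and multiplying the boundary-crossing probabilities keeps the statistics balanced in every layer:

```python
import numpy as np

rng = np.random.default_rng(7)

def layer_transmission(mu, thickness, n):
    """Estimate transmission through one purely absorbing layer by sampling
    exponential free paths and counting those exceeding the thickness."""
    return np.mean(rng.exponential(1.0 / mu, n) > thickness)

# Three layers, each 2 mean-free-paths thick (illustrative numbers only).
mu, t, n = 1.0, 2.0, 100_000
t_layers = [layer_transmission(mu, t, n) for _ in range(3)]
t_total = np.prod(t_layers)         # multiply boundary-crossing probabilities
t_exact = np.exp(-mu * 3 * t)       # analytic deep-penetration answer, exp(-6)
```

The real calculation described above tracks scattered particles in 3D and restarts them from recorded boundary crossings, but the principle, multiplying per-layer penetration statistics instead of demanding rare full-depth histories, is the same.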
Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation
Jia, Xun; Gu, Xuejun; Jiang, Steve B
2011-01-01
Recently, X-ray imaging dose from computed tomography (CT) or cone beam CT (CBCT) scans has become a serious concern. Patient-specific imaging dose calculation has been proposed for the purpose of dose management. While Monte Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers from low computational efficiency. In response to this problem, we have successfully developed a MC dose calculation package, gCTD, on GPU architecture under the NVIDIA CUDA platform for fast and accurate estimation of the x-ray imaging dose received by a patient during a CT or CBCT scan. Techniques have been developed particularly for the GPU architecture to achieve high computational efficiency. Dose calculations using CBCT scanning geometry in a homogeneous water phantom and a heterogeneous Zubal head phantom have shown good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In terms of improved efficiency, it is found that gCTD attains a speed-up of ~400 times in the homogeneous water ...
Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
2012-09-01
Introduction: CR-39 detectors are widely used for measurement of radon and its progeny in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and progeny in water is investigated. Materials and Methods: Assuming random positions and angles for the alpha particles emitted by radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable chemical etching depth of CR-39 in air and water were calculated. Alpha-particle range data for the simulation were obtained from the SRIM2008 software. Results: The calibration factor of CR-39 in water is calculated as 6.6 (kBq.d/m3)/(track/cm2), corresponding to the EPA standard level of radon concentration in water (10-11 kBq/m3). Replacing the CR-39 with skin, the volume affected by radon and progeny was determined to be 2.51 mm3 per m2 of skin area. The annual dose conversion factor for radon and progeny was calculated to be between 8.8 and 58.8 nSv/(Bq.h/m3). Conclusion: Using CR-39 for radon measurement in water can be beneficial.
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the method uses deformed Slater determinants as the basis; a projection operator is therefore applied to obtain states of good angular momentum. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated and the states selectively adopted. The symmetry is discussed, and a method of decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces is devised. The calculation process is illustrated with the example of {sup 50}Mn. The level structure of {sup 48}Cr, for which exact energies are known, can be calculated with absolute energies accurate to within 200 keV. {sup 56}Ni is a self-conjugate nucleus with Z=N=28; results of shell model calculations of its structure using the interactions of nuclear models are reported. (K.I.)
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, Murillo
2014-09-01
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which the photon ignores the existence of voxel borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by exploiting the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated into treatment planning systems for radiotherapy. (author)
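The Woodcock method mentioned in the abstract can be sketched in a few lines. The following is a minimal, illustrative Python sketch (not the CUBMC/CUDA implementation) for a 1-D stack of voxels with per-voxel attenuation coefficients; the function name and parameters are hypothetical:

```python
import math
import random

def woodcock_free_flight(mu_voxels, voxel_size, rng=random.random):
    """Sample the depth of the first real interaction of a photon entering
    a 1-D stack of voxels at x = 0, using Woodcock (delta) tracking.

    mu_voxels  -- attenuation coefficient of each voxel (1/cm)
    voxel_size -- voxel width (cm)
    Returns the interaction depth, or None if the photon escapes."""
    mu_max = max(mu_voxels)                  # majorant cross section
    depth = len(mu_voxels) * voxel_size
    x = 0.0
    while True:
        # free flight through the fictitious homogeneous medium
        x += -math.log(1.0 - rng()) / mu_max
        if x >= depth:
            return None                      # photon left the stack
        mu_local = mu_voxels[int(x / voxel_size)]
        # real interaction with probability mu/mu_max; otherwise the
        # collision is virtual and the flight simply continues
        if rng() < mu_local / mu_max:
            return x
```

In a homogeneous medium the virtual collisions are always accepted and the method reduces to ordinary exponential path-length sampling, which makes it easy to sanity-check.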
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two intersecting Gaussian curves, described by five parameters: the scission point of the two curves (R{sub c}), the mean of the left curve (μ{sub L}) and the mean of the right curve (μ{sub R}), and the deviation of the left curve (σ{sub L}) and the deviation of the right curve (σ{sub R}). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or µ can significantly shift the average frequency of asymmetric fission yields and vary the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yields using the toy model successfully reproduces the tendency of experimental results, where the average light fission yield is in the range of 90
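The double-Gaussian yield picture described above lends itself to direct sampling. The sketch below is illustrative only: it draws fragment masses from two Gaussian humps chosen with equal probability; the parameter values used in the example are made up, it is not the authors' toy-model code, and the scission-point parameter R_c is omitted:

```python
import random

def sample_fission_yields(mu_l, mu_r, sigma_l, sigma_r, n, seed=0):
    """Draw n fragment mass numbers from a double-Gaussian yield curve:
    each fission populates the left (light) or right (heavy) hump with
    equal probability."""
    rng = random.Random(seed)
    masses = []
    for _ in range(n):
        if rng.random() < 0.5:
            masses.append(rng.gauss(mu_l, sigma_l))   # light fragment hump
        else:
            masses.append(rng.gauss(mu_r, sigma_r))   # heavy fragment hump
    return masses
```

Histogramming the returned masses reproduces the two-humped asymmetric yield shape; shifting μ or σ moves and broadens the humps, as the abstract describes.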
Energy Technology Data Exchange (ETDEWEB)
Serena, P. A. [Instituto de Ciencias de Materiales de Madrid, Madrid (Spain); Costa-Kraemer, J. L. [Instituto de Microelectronica de Madrid, Madrid (Spain)
2001-03-01
A Monte Carlo algorithm suitable for studying systems described by an anisotropic Heisenberg Hamiltonian is presented. The technique has been tested successfully on 3D and 2D systems, illustrating how magnetic properties depend on the dimensionality and the coordination number. We have found that the magnetic properties of constrictions differ from those of the bulk. In particular, spin fluctuations are considerably larger than those calculated for bulk materials. In addition, domain walls are strongly modified when a constriction is present, with a decrease of the domain-wall width. This decrease is explained in terms of previous theoretical work.
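A minimal Metropolis sketch of this kind of calculation, assuming a 2-D classical Heisenberg model with a simple easy-axis anisotropy term H = -J Σ S_i·S_j - K Σ (S_i^z)²; the Hamiltonian details and parameter values here are illustrative, not the authors':

```python
import math
import random

def metropolis_heisenberg_2d(L, J, K, T, sweeps, seed=0):
    """Metropolis sampling of a 2-D classical Heisenberg model with
    easy-axis anisotropy on an L x L periodic lattice of unit 3-vector
    spins.  Returns the average z-magnetization per spin."""
    rng = random.Random(seed)

    def random_spin():
        # uniform point on the unit sphere (Marsaglia's method)
        while True:
            a, b = 2*rng.random() - 1, 2*rng.random() - 1
            s = a*a + b*b
            if s < 1.0:
                r = 2*math.sqrt(1 - s)
                return (a*r, b*r, 1 - 2*s)

    spins = [[random_spin() for _ in range(L)] for _ in range(L)]

    def local_energy(i, j, s):
        e = -K * s[2] * s[2]                       # anisotropy term
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = spins[(i + di) % L][(j + dj) % L]  # periodic neighbors
            e -= J * (s[0]*n[0] + s[1]*n[1] + s[2]*n[2])
        return e

    mz_acc = 0.0
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                old, new = spins[i][j], random_spin()
                dE = local_energy(i, j, new) - local_energy(i, j, old)
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    spins[i][j] = new              # Metropolis acceptance
        mz_acc += sum(s[2] for row in spins for s in row) / (L * L)
    return mz_acc / sweeps
```

A constriction could be modeled on top of this sketch by masking out lattice sites, which is how geometry-dependent fluctuations of the kind reported above become accessible.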
Detailed Monte Carlo Simulation of electron transport and electron energy loss spectra.
Attarian Shandiz, M; Salvat, F; Gauvin, R
2016-11-01
A computer program for detailed Monte Carlo simulation of the transport of electrons with kinetic energies in the range between about 0.1 and about 500 keV in bulk materials and in thin solid films is presented. Elastic scattering is described from differential cross sections calculated by the relativistic (Dirac) partial-wave expansion method with different models of the scattering potential. Inelastic interactions are simulated from an optical-data model based on an empirical optical oscillator strength that combines optical functions of the solid with atomic photoelectric data. The generalized oscillator strength is built from the adopted optical oscillator strength by using an extension algorithm derived from Lindhard's dielectric function for a free-electron gas. It is shown that simulated backscattering fractions of electron beams from bulk (semi-infinite) specimens are in good agreement with experimental data for beam energies from 0.1 keV up to about 100 keV. Simulations also yield transmitted and backscattered fractions of electron beams on thin solid films that agree closely with measurements for different film thicknesses and incidence angles. Simulated most probable deflection angles and depth-dose distributions also agree satisfactorily with measurements. Finally, electron energy loss spectra of several elemental solids are simulated and the effects of the beam energy and the foil thickness on the signal to background and signal to noise ratios are investigated. SCANNING 38:475-491, 2016. © 2015 Wiley Periodicals, Inc.
Monte Carlo calculation model for heat radiation of inclined cylindrical flames and its application
Chang, Zhangyu; Ji, Jingwei; Huang, Yuankai; Wang, Zhiyi; Li, Qingjie
2017-02-01
Based on the Monte Carlo method, a calculation model and its C++ program for radiant heat transfer from an inclined cylindrical flame are proposed. In this model, the total radiation energy of the inclined cylindrical flame is distributed equally among a certain number of energy beams, which are emitted randomly from the flame surface. The incident heat flux on a surface is calculated by counting the number of energy beams that reach the surface. The paper mainly studies the geometrical criterion for deciding whether an energy beam emitted by an inclined cylindrical flame is validly received by another surface. Compared to Mudan's formula results for a straight cylinder or a cylinder with a 30° tilt angle, the calculated view factors range from 0.0043 to 0.2742 and agree well with Mudan's results. The trend and values of the incident heat fluxes computed by the model are consistent with the experimental data measured by Rangwala et al. As a case study, the incident heat fluxes on both the side and the top surface of a gasoline tank are calculated with the model, the heat radiation coming from an inclined cylindrical flame generated by another 1000 m3 gasoline tank 4.6 m away. The cone angle of the flame toward the adjacent oil tank is 45° and the polar angle is 0°. The top and side surfaces of the tank are divided into 960 and 5760 grids, respectively, during the calculation. The maximum incident heat flux is 39.64 kW/m2 on the side surface and 51.31 kW/m2 on the top surface. Distributions of the incident heat flux on the surface of the oil tank and on the ground around the fire tank are also obtained.
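The counting principle used by the model (emit random diffuse energy beams, count arrivals) can be illustrated on a simpler configuration than an inclined cylinder. The sketch below estimates the view factor between two equal, directly opposed parallel squares; the geometry and names are hypothetical stand-ins for the flame-to-surface case:

```python
import math
import random

def view_factor_parallel_squares(side, gap, n_rays, seed=0):
    """Monte Carlo view factor from a square emitter to an equal,
    directly opposed parallel square a distance `gap` away: emit
    cosine-weighted (diffuse) rays from random points and return the
    fraction that hit the receiver."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # random emission point on the emitter plane z = 0
        x0, y0 = rng.random() * side, rng.random() * side
        # cosine-weighted polar angle: sin^2(theta) is uniform on [0,1)
        theta = math.asin(math.sqrt(rng.random()))
        phi = 2.0 * math.pi * rng.random()
        dx = math.sin(theta) * math.cos(phi)
        dy = math.sin(theta) * math.sin(phi)
        dz = math.cos(theta)                    # always > 0, leaves emitter
        # intersect the ray with the receiver plane z = gap
        t = gap / dz
        x, y = x0 + t * dx, y0 + t * dy
        if 0.0 <= x <= side and 0.0 <= y <= side:
            hits += 1
    return hits / n_rays
```

For unit squares one gap apart, the estimate lands near the tabulated analytic view factor (about 0.2), and it falls off as the gap grows, which is the same convergence check the flame model performs against Mudan's formulas.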
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.
Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar
2010-12-01
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were employed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were employed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.
Monte Carlo calculations of relativistic solar proton propagation in interplanetary space
Lumme, M.; Torsti, J. J.; Vainikka, E.; Peltonen, J.; Nieminen, M.; Valtonen, E.; Arvelta, H.
1985-01-01
Particle fluxes and pitch angle distributions of relativistic solar protons at 1 AU were determined by Monte Carlo calculations. The analysis covers two hours after the release of the particles from the Sun, and a total of eight sets of 100,000 particle trajectories were simulated. The pitch angle scattering was assumed to be isotropic, and the scattering mean free path was varied from 0.1 to 4 AU. As an application, the solar injection time and interplanetary scattering mean free path of the particles that gave rise to the ground-level event (GLE) of May 1978 were determined. Assuming an exponential form, the injection decay time was found to be about 11 minutes. The mean free path of pitch angle scattering during the event was about 1 AU.
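The kind of interplanetary transport simulated above can be caricatured by a one-dimensional random walk along the field line with isotropic pitch-angle re-scattering. The following sketch is a strongly simplified illustration (no magnetic focusing, no adiabatic deceleration) with hypothetical parameter names:

```python
import math
import random

def arrival_times(mfp, n, speed=1.0, target=1.0, t_max=50.0, seed=0):
    """Random-walk sketch of solar-proton transport along a field line:
    free paths are exponential with mean `mfp` (AU), and each scattering
    redraws the pitch-angle cosine uniformly on [-1, 1] (isotropic).
    Returns arrival times (in target/speed units) at distance `target`
    for particles that get there before `t_max`."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        s, mu, t = 0.0, 1.0, 0.0             # injected moving outward
        while t < t_max:
            step = -mfp * math.log(1.0 - rng.random())
            # does the particle cross the target during this free path?
            if mu > 0.0 and s + mu * step >= target:
                times.append(t + (target - s) / (mu * speed))
                break
            s += mu * step                   # advance along the field line
            t += step / speed
            mu = 2.0 * rng.random() - 1.0    # isotropic re-scattering
        # particles still inside at t_max are simply dropped
    return times
```

With a large mean free path (scatter-free limit) nearly everything arrives at the direct travel time; shrinking the mean free path delays and broadens the arrival-time profile, which is exactly the dependence exploited when fitting the observed GLE time profile.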
DSMC calculations for the double ellipse. [direct simulation Monte Carlo method
Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet
1990-01-01
The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.
Frozen-orbital and downfolding calculations with auxiliary-field quantum Monte Carlo
Purwanto, Wirawan; Krakauer, Henry
2013-01-01
We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings compared to fully correlating all the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also i...
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazière, C.; Jareteg, K.;
2015-01-01
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one...... equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real...... part of the neutron balance plays a significant role and for driving fluctuations leading to neutron sources having the same sign in the two equivalent sub-critical problems. A semi-analytical diffusion-based solution is used to verify the implementation of the method on a test case representative...
Monte Carlo Calculations for Neutron and Gamma Radiation Fields on a Fast Neutron Irradiation Device
Vieira, A.; Ramalho, A.; Gonçalves, I. C.; Fernandes, A.; Barradas, N.; Marques, J. G.; Prata, J.; Chaussy, Ch.
We used the Monte Carlo program MCNP to calculate the neutron and gamma fluxes in a fast neutron irradiation facility being installed at the Portuguese Research Reactor (RPI). The purpose of this facility is to provide a fast neutron beam for the irradiation of electronic circuits, while minimizing the gamma dose. This is achieved by placing a lead shield preceded by a thin layer of boral. A fast neutron flux of the order of 10^9 n/cm2s is expected at the exit of the tube, while the gamma radiation is kept below 20 Gy/h. We present results of the neutron and gamma doses for several locations along the tube and for different thicknesses of the lead shield. We found that the neutron beam is well collimated at the end of the tube, with a dominant component in the fast region.
{sup 33}S for Neutron Capture Therapy: Nuclear Data for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Porras, I., E-mail: porras@ugr.es [Departamento de Física Atómica, Molecular y Nuclear, Facultad de Ciencias, Universidad de Granada, E-18071 Granada (Spain); Sabaté-Gilarte, M.; Praena, J.; Quesada, J.M. [Departamento de Física Atómica, Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, E-41012 Sevilla (Spain); Esquinas, P.L. [Departament of Physics and Astronomy, University of British Columbia, Vancouver, BC (Canada)
2014-06-15
A study of the nuclear data required for the Monte Carlo simulation of boron neutron capture therapy including the {sup 33}S isotope as an enhancer of the dose at small depths has been performed. In particular, the controversy on the available data for the {sup 33}S(n, α) cross section will be shown, which motivates new measurements. In addition to this, kerma factors for the main components of tissue are calculated with the use of fitting functions. Finally, we have applied these data to a potential neutron capture treatment with boron and sulfur addition to tissue in which part of the hydrogen atoms are replaced by deuterium, which improves the procedure.
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' ' Carlos Haya' ' , Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between the Monte Carlo results and the corresponding measurements is within {approx}3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
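Two of the variance-reduction techniques named above, Russian roulette and splitting, reduce to simple weight bookkeeping. The sketch below applies an illustrative weight window to a list of weighted particles; the window bounds and the survival-weight rule are a common textbook choice, not necessarily the authors':

```python
import random

def apply_weight_window(particles, w_low, w_high, rng=random):
    """Russian roulette and splitting on a list of (weight, state) pairs.
    Particles below w_low play roulette (survivors get weight w_surv);
    particles above w_high are split into copies of equal weight.
    The expected total weight is preserved."""
    w_surv = 0.5 * (w_low + w_high)     # survival weight inside the window
    out = []
    for w, state in particles:
        if w < w_low:
            # Russian roulette: survive with probability w / w_surv,
            # so the expected weight w stays unchanged
            if rng.random() < w / w_surv:
                out.append((w_surv, state))
        elif w > w_high:
            # splitting: n copies each carrying weight w / n
            n = int(w / w_high) + 1
            out.extend((w / n, state) for _ in range(n))
        else:
            out.append((w, state))
    return out
```

Roulette is unbiased only on average, which is why the test below checks the mean surviving weight over many trials rather than a single outcome.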
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
VVER-440 Ex-Core Neutron Transport Calculations by MCNP-5 Code and Comparison with Experiment
Energy Technology Data Exchange (ETDEWEB)
Borodkin, Pavel; Khrennikov, Nikolay [Scientific and Engineering Centre for Nuclear and Radiation Safety (SEC NRS) Malaya Krasnoselskaya ul., 2/8, bld. 5, 107140 Moscow (Russian Federation)
2008-07-01
Ex-core neutron transport calculations are needed to evaluate the radiation loading parameters (neutron fluence, fluence rate and spectra) on the in-vessel equipment, reactor pressure vessel (RPV) and support structures of VVER type reactors. Since these parameters are used for reactor equipment lifetime assessment, neutron transport calculations should be carried out by precise and reliable calculation methods. In the case of RPVs, especially of first-generation VVER-440s, the neutron fluence plays a key role in the prediction of RPV lifetime. Most VVER ex-core neutron transport calculations are performed by deterministic and Monte Carlo methods. This paper deals with precise calculations for the Russian first-generation VVER-440 with the MCNP-5 code. The purpose of this work was the application of this code to expert calculations, verification of the results by comparison with deterministic calculations, and validation against neutron activation measurements. The deterministic discrete ordinates code DORT, widely used for RPV neutron dosimetry and tested many times against experiments, was used for the comparison analyses. Ex-vessel neutron activation measurements at the VVER-440 NPP have provided space (in azimuth and height directions) and neutron energy (different activation reactions) distribution data for experimental (E) validation of the calculated results. The calculational intercomparison (DORT vs. MCNP-5) and the comparison with measured values (MCNP-5 and DORT vs. E) have shown agreement within 10-15% for different space points and reaction rates. The paper presents a discussion of the results and draws conclusions about the practical use of the MCNP-5 code for ex-core neutron transport calculations in expert analysis. (authors)
Donovan, Timothy J.
A Monte Carlo algorithm is developed to estimate the ensemble-averaged behavior of neutral particles within a binary stochastic mixture. A special case stochastic mixture is examined, in which non-overlapping spheres of constant radius are uniformly mixed in a matrix material. Spheres are chosen to represent the stochastic volumes due to their geometric simplicity and because spheres are a common approximation to a large number of applications. The boundaries of the mixture are impenetrable, meaning that spheres in the stochastic mixture cannot be assumed to overlap the mixture boundaries. The algorithm employs a method called Limited Chord Length Sampling (LCLS). While in the matrix material, LCLS uses chord-length sampling to sample the distance to the next stochastic interface. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. This capability eliminates the need to explicitly model a representation of the random geometry of the mixture. The algorithm is first proposed and tested against benchmark results for a two-dimensional, fixed-source model using stand-alone Monte Carlo codes. The algorithm is then implemented and tested in a test version of the Los Alamos Monte Carlo N-Particle code MCNP. This prototype MCNP version has the capability to calculate LCLS results for both fixed source and multiplied source (i.e., eigenvalue) problems. Problems analyzed with MCNP range from simple binary mixtures, designed to test LCLS over a range of optical thicknesses, to a detailed High Temperature Gas Reactor fuel element, which tests the value of LCLS in a current problem of practical significance. Comparisons of LCLS and benchmark results include both accuracy and efficiency comparisons. To ensure conservative efficiency comparisons, the statistical basis for the benchmark technique is derived and a formal method for optimizing the benchmark calculations is developed.
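The chord-length-sampling idea, sampling the distance to the next stochastic interface instead of modeling the random sphere geometry explicitly, can be illustrated with a deliberately simplified 1-D transmission problem. The sketch below assumes pure absorbers and ignores sphere/slab boundary effects, so it does not reproduce the LCLS treatment of impenetrable boundaries described above; all names and parameters are illustrative:

```python
import math
import random

def cls_transmission(sig_matrix, sig_sphere, radius, frac, slab, n, seed=0):
    """Chord-length-sampling estimate of transmission through a slab of
    thickness `slab` containing absorbing spheres (radius, volume
    fraction frac) dispersed in an absorbing matrix.  The distance to
    the next sphere is exponential with the mean matrix chord
    4R(1-f)/(3f); a sphere traversal samples a random chord 2R*sqrt(u)."""
    rng = random.Random(seed)
    lam = 4.0 * radius * (1.0 - frac) / (3.0 * frac)   # mean matrix chord
    passed = 0
    for _ in range(n):
        x = 0.0
        while True:
            d_coll = -math.log(1.0 - rng.random()) / sig_matrix
            d_sph = -lam * math.log(1.0 - rng.random())
            if x + min(d_coll, d_sph) >= slab:
                passed += 1                            # escapes the slab
                break
            if d_coll < d_sph:
                break                                  # absorbed in matrix
            x += d_sph                                 # enters a sphere
            chord = 2.0 * radius * math.sqrt(rng.random())
            if -math.log(1.0 - rng.random()) / sig_sphere < chord:
                break                                  # absorbed in sphere
            x += chord                                 # back into the matrix
    return passed / n
```

The random sphere chord 2R·sqrt(u) has the correct mean chord length 4R/3 for rays crossing a sphere, which is the geometric fact chord-length sampling rests on.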
A Monte Carlo Resampling Approach for the Calculation of Hybrid Classical and Quantum Free Energies.
Cave-Ayland, Christopher; Skylaris, Chris-Kriton; Essex, Jonathan W
2017-02-14
Hybrid free energy methods allow estimation of free energy differences at the quantum mechanics (QM) level with high efficiency by performing sampling at the classical mechanics (MM) level. Various approaches to allow the calculation of QM corrections to classical free energies have been proposed. The single step free energy perturbation approach starts with a classically generated ensemble, a subset of structures of which are postprocessed to obtain QM energies for use with the Zwanzig equation. This gives an estimate of the free energy difference associated with the change from an MM to a QM Hamiltonian. Owing to the poor numerical properties of the Zwanzig equation, however, recent developments have produced alternative methods which aim to provide access to the properties of the true QM ensemble. Here we propose an approach based on the resampling of MM structural ensembles and application of a Monte Carlo acceptance test which, in principle, can generate the exact QM ensemble or intermediate ensembles between the MM and QM states. We carry out a detailed comparison against the Zwanzig equation and recently proposed non-Boltzmann methods. As a test system we use a set of small molecule hydration free energies for which hybrid free energy calculations are performed at the semiempirical Density Functional Tight Binding level. Equivalent ensembles at this level of theory have also been generated, allowing the reverse QM to MM perturbations to be performed along with a detailed analysis of the results. Additionally, a previously published nucleotide base pair data set simulated at the QM level using ab initio molecular dynamics is also considered. We provide a strong rationale for the use of the Monte Carlo Resampling and non-Boltzmann approaches by showing that configuration space overlaps can be estimated which provide useful diagnostic information regarding the accuracy of these hybrid approaches.
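The Zwanzig (exponential averaging) step and the Metropolis-style resampling idea can both be sketched compactly. The functions below are illustrative: the resampler is a plain independence-Metropolis walk over precomputed QM-minus-MM energy gaps, not the authors' exact scheme, and all names are hypothetical:

```python
import math
import random
import statistics

def zwanzig_delta_f(delta_e, kt=1.0):
    """Free energy difference from the Zwanzig equation,
    dF = -kT ln < exp(-dE/kT) >_MM, where delta_e holds the QM-minus-MM
    energy gaps evaluated on MM-sampled configurations."""
    avg = statistics.fmean(math.exp(-de / kt) for de in delta_e)
    return -kt * math.log(avg)

def mc_resample(delta_e, n_out, kt=1.0, seed=0):
    """Independence-Metropolis resampling of MM-sampled configurations
    toward the QM ensemble: a candidate configuration j replaces the
    current one i with probability min(1, exp(-(dE_j - dE_i)/kT)).
    Returns the indices of the resampled configurations."""
    rng = random.Random(seed)
    i = rng.randrange(len(delta_e))
    picks = []
    for _ in range(n_out):
        j = rng.randrange(len(delta_e))
        if rng.random() < math.exp(min(0.0, -(delta_e[j] - delta_e[i]) / kt)):
            i = j                    # accept the candidate configuration
        picks.append(i)
    return picks
```

For a Gaussian gap distribution N(m, s²) the exact answers are known (dF = m - s²/2kT, and the reweighted mean gap shifts to m - s²/kT), which gives a convenient self-check of both estimators.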
The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs
Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca
2008-09-01
In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.
PCXMC, a Monte Carlo program for calculating patient doses in medical x-ray examinations
Energy Technology Data Exchange (ETDEWEB)
Tapiovaara, M.; Siiskonen, T.
2008-11-15
PCXMC is a Monte Carlo program for calculating patients' organ doses and effective doses in medical x-ray examinations. The organs and tissues considered in the program are: active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), extrathoracic airways, gall bladder, heart, kidneys, liver, lungs, lymph nodes, muscle, oesophagus, oral mucosa, ovaries, pancreas, prostate, salivary glands, skeleton, skin, small intestine, spleen, stomach, testicles, thymus, thyroid, urinary bladder and uterus. The program calculates the effective dose with both the present tissue weighting factors of ICRP Publication 103 (2007) and the old tissue weighting factors of ICRP Publication 60 (1991). The anatomical data are based on the mathematical hermaphrodite phantom models of Cristy and Eckerman (1987), which describe patients of six different ages: new-born, 1, 5, 10, 15-year-old and adult patients. Some changes are made to these phantoms in order to make them more realistic for external irradiation conditions and to enable the calculation of the effective dose according to the new ICRP Publication 103 tissue weighting factors. The phantom sizes are adjustable to mimic patients of an arbitrary weight and height. PCXMC allows a free adjustment of the x-ray beam projection and other examination conditions of projection radiography and fluoroscopy.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
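The tracking-processor/tally-server split described above can be illustrated with a toy serial model (no MPI, no performance model), in which each server owns a disjoint slice of the tally space; class and method names are hypothetical:

```python
from collections import defaultdict

class TallyServer:
    """Toy model of the tally-server decomposition: tracking processors
    ship (tally_index, score) events to servers, each of which owns a
    disjoint slice of the tally space and accumulates what it receives.
    Real implementations exchange asynchronous messages; this sketch
    only illustrates the ownership and accumulation logic."""

    def __init__(self, n_servers):
        self.n_servers = n_servers
        self.store = [defaultdict(float) for _ in range(n_servers)]

    def owner(self, index):
        # cyclic assignment of tally bins to servers
        return index % self.n_servers

    def send(self, index, score):
        # a tracking processor would call this after each scoring event
        self.store[self.owner(index)][index] += score

    def total(self, index):
        return self.store[self.owner(index)][index]
```

Because no tracking processor ever holds the full tally array, the memory per compute node stays bounded no matter how finely the tally space is resolved, which is the point of the decomposition.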
Energy Technology Data Exchange (ETDEWEB)
Botta, F; Di Dia, A; Pedroli, G; Mairani, A; Battistoni, G; Fasso, A; Ferrari, A; Ferrari, M; Paganelli, G
2011-06-01
The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8
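The concentric-shell tallying used for DPKs reduces to a radial histogram scaled by shell volume. A minimal sketch, assuming unit density and a list of precomputed (radius, deposited energy) events; the function and the event list in the example are hypothetical:

```python
import math

def dose_point_kernel(events, r_max, n_shells):
    """Bin energy-deposition events (radius, energy) from a point
    isotropic source into concentric spherical shells and return the
    energy per unit volume (dose at unit density) in each shell."""
    dr = r_max / n_shells
    dose = [0.0] * n_shells
    for r, e in events:
        k = int(r / dr)
        if k < n_shells:
            dose[k] += e                 # accumulate deposited energy
    for k in range(n_shells):
        r0, r1 = k * dr, (k + 1) * dr
        vol = 4.0 / 3.0 * math.pi * (r1**3 - r0**3)
        dose[k] /= vol                   # normalize by shell volume
    return dose
```

In an actual DPK tabulation the shell radii are usually expressed in units of R_CSDA or X90, as in the comparison above, but the binning logic is the same.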
Energy Technology Data Exchange (ETDEWEB)
Botta, F.; Mairani, A.; Battistoni, G.; Cremonesi, M.; Di Dia, A.; Fasso, A.; Ferrari, A.; Ferrari, M.; Paganelli, G.; Pedroli, G.; Valente, M. [Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); Istituto Nazionale di Fisica Nucleare (I.N.F.N.), Via Celoria 16, 20133 Milan (Italy); Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); Jefferson Lab, 12000 Jefferson Avenue, Newport News, Virginia 23606 (United States); CERN, 1211 Geneva 23 (Switzerland); Medical Physics Department, European Institute of Oncology, Milan (Italy); Nuclear Medicine Department, European Institute of Oncology, Via Ripamonti 435, 2014 Milan (Italy); Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); FaMAF, Universidad Nacional de Cordoba and CONICET, Cordoba, Argentina C.P. 5000 (Argentina)
2011-07-15
Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV–3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons
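The shell-tallying step described in the abstract above (deposited energy scored in concentric shells around a point isotropic source) can be sketched in a few lines of NumPy. This is a minimal illustration only: the function name, the bin count, and the toy exponential deposition data are assumptions, not part of the cited fluka/penelope workflow.

```python
import numpy as np

def dpk_shell_tally(radii, energies, r_max, n_shells):
    """Tally deposited energy in concentric spherical shells around a
    point isotropic source, the raw scoring behind a dose point kernel.

    radii    : radial positions (cm) of individual energy depositions
    energies : energy deposited at each position (MeV)
    Returns the shell edges and the energy density per shell (MeV/cm^3).
    """
    edges = np.linspace(0.0, r_max, n_shells + 1)
    tally, _ = np.histogram(radii, bins=edges, weights=energies)
    # Shell volumes: (4/3)*pi*(r_out^3 - r_in^3)
    vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return edges, tally / vol

# Toy usage: exponentially attenuated deposition pattern (illustrative)
rng = np.random.default_rng(0)
r = rng.exponential(scale=0.05, size=100_000)   # cm
e = np.full_like(r, 1e-3)                       # MeV per deposition event
edges, dpk = dpk_shell_tally(r, e, r_max=0.5, n_shells=50)
```

Dividing the tallied energy by the shell volume is what turns the raw histogram into a kernel that can be compared across codes.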
Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J
2013-03-04
A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences of calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed
Žukauskaitė, A; Plukienė, R; Ridikas, D
2007-01-01
Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron, or graphite. The Monte Carlo method allows obtaining answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center of Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intensive spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, it takes a great deal of computing time to obtain reliable results in Monte Carlo simulations, especially in deep penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, exponential transform, and forced collision. Simultaneously, advanced computing hardware such as GPU (Graphics Processing Unit)-based parallel machines is used to obtain better Monte Carlo performance. A GPU is much easier to access and to manage than a CPU cluster system, and it has also become less expensive thanks to advances in computer technology. Therefore, many engineering areas have adopted GPU-based massively parallel computation techniques. In this work, a GPU-based photon transport Monte Carlo module was applied. It provides almost a 30-fold speedup without any optimization, and an almost 200-fold speedup is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.
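As a concrete illustration of two of the variance reduction techniques named above, a minimal weight-window routine (splitting heavy particles, playing Russian roulette with light ones) might look like the following sketch. The function, its thresholds, and the survival-weight choice are hypothetical, not taken from MCNP or any production code.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Minimal weight-window check for one particle.

    Particles above the window are split so each copy falls inside it;
    particles below it play Russian roulette (survive with probability
    weight/w_survive, with weight boosted to w_survive on survival, so the
    expected weight is preserved). Returns the list of surviving weights.
    """
    if weight > w_high:
        n = int(weight / w_high) + 1       # number of split copies
        return [weight / n] * n            # total weight is conserved
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0  # illustrative survival weight
        return [w_survive] if rng() < weight / w_survive else []
    return [weight]
```

Splitting conserves total weight exactly, while roulette conserves it only in expectation; that trade (a little bias-free randomness for far fewer low-weight histories) is the whole point of the technique.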
Yepes, Pablo; Randeniya, Sharmalee; Taddei, Phillip J; Newhauser, Wayne D
2009-01-07
The Monte Carlo method is used to provide accurate dose estimates in proton radiation therapy research. While it is more accurate than commonly used analytical dose calculations, it is computationally intense. The aim of this work was to characterize for a clinical setup the fast dose calculator (FDC), a Monte Carlo track-repeating algorithm based on GEANT4. FDC was developed to increase computation speed without diminishing dosimetric accuracy. The algorithm used a database of proton trajectories in water to calculate the dose of protons in heterogeneous media. The extrapolation from water to 41 materials was achieved by scaling the proton range and the scattering angles. The scaling parameters were obtained by comparing GEANT4 dose distributions with those calculated with FDC for homogeneous phantoms. The FDC algorithm was tested by comparing dose distributions in a voxelized prostate cancer patient as calculated with well-known Monte Carlo codes (GEANT4 and MCNPX). The track-repeating approach reduced the CPU time required for a complete dose calculation in a voxelized patient anatomy by more than two orders of magnitude, while on average reproducing the results from the Monte Carlo predictions within 2% in terms of dose and within 1 mm in terms of distance.
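The core idea of the track-repeating approach described above (replaying precomputed water-track steps with material-dependent scalings of step length and scattering angle) can be sketched in a few lines. The function and the factor values below are illustrative assumptions, not GEANT4 or FDC code; in the cited work the factors are obtained offline by fitting to full Monte Carlo results.

```python
import math

def rescale_track(water_steps, f_range, f_theta):
    """Replay a precomputed proton track from water in another material:
    each step length is scaled by a range factor and each angular
    deflection by a scattering factor (both material-dependent).

    water_steps : list of (step_length_cm, deflection_rad) pairs
    Returns the (x, z) trajectory points in the new material.
    """
    x, z, heading = 0.0, 0.0, 0.0
    points = [(x, z)]
    for ds, dtheta in water_steps:
        heading += dtheta * f_theta
        x += ds * f_range * math.sin(heading)
        z += ds * f_range * math.cos(heading)
        points.append((x, z))
    return points

# Hypothetical usage: a 1 cm water track replayed in a denser medium
trajectory = rescale_track([(0.5, 0.0), (0.5, 0.0)], f_range=0.55, f_theta=1.3)
```

Because only a lookup and two multiplications replace a full physics step, this is where the two-orders-of-magnitude speedup quoted in the abstract comes from.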
Shchurovskaya, M. V.; Alferov, V. P.; Geraskin, N. I.; Radaev, A. I.
2017-01-01
The results of the validation of a research reactor calculation using Monte Carlo and deterministic codes against experimental data and based on code-to-code comparison are presented. The continuous energy Monte Carlo code MCU-PTR and the nodal diffusion-based deterministic code TIGRIS were used for full 3-D calculation of the IRT MEPhI research reactor. The validation included the investigations for the reactor with existing high enriched uranium (HEU, 90 w/o) fuel and low enriched uranium (LEU, 19.7 w/o, U-9%Mo) fuel.
Hubber, D A; Dale, J
2015-01-01
Ionising feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies, however, on photoionisation/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionisation and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields onto Cartesian grids before performing radiative transfer/photoionisation calculations, results in significant ...
Energy Technology Data Exchange (ETDEWEB)
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut-down dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step method (D1S). Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate in five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply designers with the most reliable tools and data to calculate the dose rate on fusion machines. Results showed that there is good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
The use of Monte Carlo radiation transport codes in radiation physics and dosimetry
CERN. Geneva; Ferrari, Alfredo; Silari, Marco
2006-01-01
Transport and interaction of electromagnetic radiation. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated by using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is only limited by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...
DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1990-01-01
Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.
Laoues, M.; Khelifi, R.; Moussa, A. S.
2015-01-01
Strontium-90 eye applicators are beta-ray emitters with relatively high energy (maximum energy about 2.28 MeV and average energy about 0.9 MeV). These applicators come in different shapes and dimensions and are used for the treatment of eye diseases. Whenever radiation is used in treatment, dosimetry is essential, and knowledge of the exact dose distribution is critical to decisions about the outcome of the treatment. The main aim of our study is to simulate the dosimetry of the SIA.20 eye applicator with the Monte Carlo GATE 6.1 platform and to compare the calculated results with those measured with EBT2 films. GATE and EBT2 were thus used to quantify the surface and depth dose rates, the relative dose profile, and the dosimetric parameters in accordance with international recommendations. Calculated and measured results are in good agreement and are consistent with the ICRU and NCS recommendations.
Energy Technology Data Exchange (ETDEWEB)
Gomes B, W. O., E-mail: wilsonottobatista@gmail.com [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho 40301-015, Salvador de Bahia (Brazil)
2016-10-15
This study aimed to develop an irradiation geometry applicable to the PCXMC software and the consequent calculation of effective dose in Cone Beam Computed Tomography (CBCT) applications. We evaluated different CBCT units for dental applications: the Carestream CS 9000 3D tomograph, the Classical i-CAT, and the GENDEX GXCB-500. Initially, we characterized each protocol by measuring the entrance surface kerma and the air kerma-area product, PKA, with RADCAL solid-state detectors and a PTW transmission chamber. Then we introduced the technical parameters of each preset protocol and the geometric conditions into the PCXMC software to obtain the effective dose values. The calculated effective dose is within the range of 9.0 to 15.7 μSv for the CS 9000 3D unit, within the range of 44.5 to 89 μSv for the GXCB-500 unit, and in the range of 62 to 111 μSv for the Classical i-CAT unit. These values were compared with dosimetry results obtained using TLDs implanted in an anthropomorphic phantom and are considered consistent. The effective dose results are very sensitive to the irradiation geometry (beam position on the mathematical phantom), which is a fragility of the software; nevertheless, it is very useful for obtaining quick answers regarding protocol optimization. We conclude that the PCXMC Monte Carlo simulation software is a useful tool for assessing CBCT protocols in dental applications. (Author)
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting present in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
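The regression trick mentioned above (estimating conditional exposure without inner scenarios) can be sketched as follows under heavy simplifications: a flat discount rate, a constant hazard rate, a polynomial basis, and a generic lognormal state variable instead of a calibrated Hull-White swap model. All names and parameter values are illustrative.

```python
import numpy as np

def cva_lsmc(paths, times, disc_payoff, lam, recovery=0.4, deg=2):
    """Unilateral CVA by least-squares Monte Carlo: at each exposure date
    the conditional discounted portfolio value is estimated by regressing
    the discounted terminal payoff on a polynomial in the current state,
    so no nested ('inner') simulation is needed.

    paths       : (n_paths, n_dates) state variable at the exposure dates
    times       : strictly positive exposure dates (years)
    disc_payoff : (n_paths,) discounted payoff at maturity
    lam         : constant default intensity (hazard rate)
    """
    cva, t_prev = 0.0, 0.0
    for k, t in enumerate(times):
        coeffs = np.polyfit(paths[:, k], disc_payoff, deg)
        # Expected positive exposure from the regressed conditional value
        epe = np.maximum(np.polyval(coeffs, paths[:, k]), 0.0).mean()
        dpd = np.exp(-lam * t_prev) - np.exp(-lam * t)  # P(default in (t_prev, t])
        cva += (1.0 - recovery) * epe * dpd
        t_prev = t
    return cva

# Toy usage: lognormal driver, forward-style payoff S_T - K
rng = np.random.default_rng(42)
times = np.array([0.25, 0.5, 0.75, 1.0])
dw = 0.2 * np.sqrt(0.25) * rng.standard_normal((20_000, 4))
paths = 100.0 * np.exp(np.cumsum(-0.5 * 0.2**2 * 0.25 + dw, axis=1))
disc_payoff = np.exp(-0.01 * 1.0) * (paths[:, -1] - 100.0)
cva = cva_lsmc(paths, times, disc_payoff, lam=0.02)
```

Replacing the nested simulation with one regression per date is exactly what makes the estimator converge faster, as the abstract notes.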
A Monte Carlo Simulation for the Ion Transport in Glow Discharges with Dusts
Institute of Scientific and Technical Information of China (English)
SUN Ai-Ping; PU Wei; QIU Xiao-Ming
2001-01-01
We use the Monte Carlo method to simulate the ion transport in the rf parallel-plate glow discharge with a negative-voltage pulse connected to the electrode. It is found that the self-consistent field, dust charge, dust concentration, and dust size influence the energy distribution and the density of the ions arriving at the target; in particular, the latter two have a significant influence. As dust concentration or dust size increases, the number of ions arriving at the target is greatly reduced.
Shulenburger, Luke; Desjarlais, M P
2015-01-01
Motivated by the disagreement between recent diffusion Monte Carlo calculations and experiments on the phase transition pressure between the ambient and beta-Sn phases of silicon, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation. After removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
Directory of Open Access Journals (Sweden)
Claudio Amovilli
2016-02-01
Full Text Available In this work, we present a simple decomposition scheme of the Kohn-Sham optimized orbitals which is able to provide a reduced basis set, made of localized polycentric orbitals, specifically designed for Quantum Monte Carlo. The decomposition follows a standard density functional theory (DFT) calculation and is based on atomic connectivity and shell structure. The new orbitals are used to construct a compact correlated wave function of the Slater–Jastrow form, which is optimized at the Variational Monte Carlo level and then used as the trial wave function for a final Diffusion Monte Carlo accurate energy calculation. We are able, in this way, to capture the basic information on the real system brought by the Kohn-Sham orbitals and use it for the calculation of the ground state energy within a strictly variational method. Here, we show test calculations performed on some small selected systems to assess the validity of the proposed approach in molecular fragmentation, in the calculation of the barrier height of a chemical reaction, and in the determination of intermolecular potentials. The final Diffusion Monte Carlo energies are in very good agreement with the best literature data within chemical accuracy.
Kramer, R; Khoury, H J; Vieira, J W; Loureiro, E C M; Lima, V J M; Lima, F R A; Hoff, G
2004-12-01
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations, which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true to the natural representations of a human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), have been segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation the volumes of the organs and tissues have been adjusted to agree with the organ and tissue masses recommended by ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images has eventually been connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons, and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
TRIGA IPR-R1 reactor simulation using Monte Carlo transport methods
Hugo Moura Dalle
2005-01-01
Abstract: The use of the Monte Carlo method in the simulation of particle transport in nuclear reactors is growing and constitutes a worldwide trend. The main drawback of this technique, its heavy demand on processing capacity, has been overcome by the continuous development of ever faster processors. This context has allowed the development of reactor neutronics calculation methodologies in which the particle transport part, performed with a code...
Energy Technology Data Exchange (ETDEWEB)
Bauer, Thilo; Jäger, Christof M. [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Jordan, Meredith J. T. [School of Chemistry, University of Sydney, Sydney, NSW 2006 (Australia); Clark, Timothy, E-mail: tim.clark@fau.de [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Centre for Molecular Design, University of Portsmouth, Portsmouth PO1 2DY (United Kingdom)
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
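A Metropolis-like acceptance test of the kind mentioned above, with the source-gate bias entering as an energy shift, might be sketched as below. The energy model is a deliberately crude stand-in: the actual NDDO Hamiltonian and bias treatment of the cited work are not reproduced, and the parameter names are assumptions.

```python
import math
import random

def metropolis_accept(delta_e_ev, gate_bias_ev, kT_ev=0.025, rng=random.random):
    """Metropolis-like acceptance test for a proposed carrier move.

    The energy change of the move is shifted by a gate-bias term (a crude
    stand-in for the field effect): downhill moves are always accepted,
    uphill moves with Boltzmann probability exp(-delta/kT).
    """
    delta = delta_e_ev - gate_bias_ev
    if delta <= 0.0:
        return True
    return rng() < math.exp(-delta / kT_ev)

# Hypothetical usage: a slightly uphill hop at room temperature
accepted = metropolis_accept(delta_e_ev=0.05, gate_bias_ev=0.0)
```

Counting accepted moves rather than integrating a time axis is consistent with the abstract's use of Monte Carlo moves as a pseudo-time.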
Road Transport Congestion Costs Calculations-Adaptation to Engineering Approach
Directory of Open Access Journals (Sweden)
Marjan Lep
2008-01-01
Full Text Available The article presents the so-called engineering approach for computing the total road transport congestion costs. According to economic welfare theory, the total costs of transport congestion are defined as the dead weight loss (DWL) of infrastructure use. With a set of equations, the DWL can be formulated in a mathematical way. Because such a form of equation is not directly applicable to concrete road network calculations, it has to be transformed into an engineering form, which comprises transport-engineering-related data such as classified road links, traffic volumes, passenger unit costs, etc. The equation is well applicable to the interurban road network; adaptations are needed for urban road network cost calculations, where time losses are not so closely related to the link travel time. The final equation was derived for the purposes of national road congestion cost calculation.
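With linear demand and marginal-cost curves, the dead weight loss described above reduces to a triangle area, which the following sketch computes. The traffic volumes and the cost wedge are invented illustrative figures, not values from the cited article.

```python
def deadweight_loss(q_actual, q_optimal, cost_wedge):
    """Dead weight loss (DWL) of congestion as the triangle between the
    actual and the optimal traffic volume, where cost_wedge is the gap
    between marginal social cost and willingness to pay at the actual
    volume (cost units per vehicle). Assumes linear curves.

    DWL = 1/2 * (q_actual - q_optimal) * cost_wedge
    """
    return 0.5 * (q_actual - q_optimal) * cost_wedge

# Example: 1000 veh/h observed vs 800 veh/h optimal, 2.5 EUR/veh wedge
dwl = deadweight_loss(1000, 800, 2.5)  # 250.0 EUR per hour on this link
```

Summing this quantity over classified road links and time periods is, in essence, the engineering form of the welfare-theoretic equation the article describes.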
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Energy Technology Data Exchange (ETDEWEB)
Justin Pounders; Farzad Rahnema
2001-10-01
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.
Energy Technology Data Exchange (ETDEWEB)
Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1998-05-01
A method to calculate the response function of a spherical BF₃ proportional counter, which is commonly used as a neutron dose rate meter and as a neutron spectrometer with a multi-moderator system, is developed. As the calculation code for evaluating the response function, the existing code series NRESP, a Monte Carlo code for the calculation of response functions of neutron detectors, is selected. However, since the application scope of the existing NRESP is restricted, NRESP98 has been tuned as a generally applicable code, with expansion of the geometrical conditions, the applicable elements, etc. NRESP98 is tested against the response function of the spherical BF₃ proportional counter. Including the effect of the distribution of the amplification factor, the detailed evaluation of charged-particle transport, and the effect of the statistical distribution, the NRESP98 calculation results fit the experimental data within ±10%. (author)
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.
Nguyen van Ye, Romain; Del-Castillo-Negrete, Diego; Spong, D.; Hirshman, S.; Farge, M.
2008-11-01
A limitation of particle-based transport calculations is the noise due to limited statistical sampling. Thus, a key element for the success of these calculations is the development of efficient denoising methods. Here we discuss denoising techniques based on Proper Orthogonal Decomposition (POD) and Wavelet Decomposition (WD). The goal is the reconstruction of smooth (denoised) particle distribution functions from discrete particle data obtained from Monte Carlo simulations. In 2-D, the POD method is based on low-rank truncations of the singular value decomposition of the data. For 3-D we propose the use of a generalized low-rank approximation of matrices technique. The WD denoising is based on the thresholding of empirical wavelet coefficients [Donoho et al., 1996]. The methods are illustrated and tested with Monte Carlo particle simulation data of plasma collisional relaxation including pitch angle and energy scattering. As an application we consider guiding-center transport with collisions in a magnetically confined plasma in toroidal geometry. The proposed noise reduction methods make it possible to achieve high levels of smoothness in the particle distribution function using significantly fewer particles in the computations.
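The 2-D POD denoising step described above (a low-rank truncation of the SVD of binned particle data) is easy to sketch with NumPy. The rank-1 toy "signal" and the noise level below are illustrative, not data from the cited simulations.

```python
import numpy as np

def pod_denoise(f, rank):
    """Denoise a 2-D particle histogram by a low-rank truncation of its
    singular value decomposition (the 2-D POD approach).

    f    : 2-D array, e.g. a binned particle distribution function
    rank : number of singular modes kept
    """
    u, s, vt = np.linalg.svd(f, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

# Toy test case: smooth rank-1 field plus sampling noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 64)
smooth = np.outer(np.exp(-x), np.sin(np.pi * x))     # rank-1 "signal"
noisy = smooth + 0.05 * rng.standard_normal((64, 64))  # "sampling" noise
denoised = pod_denoise(noisy, rank=1)
```

Because sampling noise spreads its energy over all singular modes while a smooth distribution function concentrates in a few, keeping only the leading modes removes most of the noise at little cost to the signal.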
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory]
2012-07-25
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3-D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP and other Monte Carlo codes.
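The particle-tracing kernel that such a code parallelizes can be sketched in a few lines. The 1-D slab, cross sections, and tallies below are illustrative stand-ins for one row of a 3-D mesh, not MCMini's actual data structures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy analog Monte Carlo: neutrons stream through a 1-D slab [0, 10]
# with total cross section sigma_t, absorption probability p_abs per
# collision, and isotropic rescattering.  Collisions are tallied per cell.
sigma_t, p_abs = 1.0, 0.3
n_cells, length = 10, 10.0
tally = np.zeros(n_cells)  # collision density per mesh cell

for _ in range(5000):
    pos, direction = 0.0, 1.0
    while True:
        pos += direction * rng.exponential(1.0 / sigma_t)  # sample free path
        if pos < 0.0 or pos >= length:
            break  # neutron leaked out of the slab
        tally[int(pos / length * n_cells)] += 1.0
        if rng.random() < p_abs:
            break  # absorbed at this collision
        direction = 1.0 if rng.random() < 0.5 else -1.0  # isotropic in 1-D

print(tally.argmax())  # collisions concentrate nearest the source face
```

A GPU version would map each history (or each batch of histories) to one work item; the per-cell tallies then require atomic accumulation.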
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
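The final step described above, evaluating organ doses from the calculated dose distribution and segmented patient images, reduces to masking the dose grid and accumulating a dose-volume histogram. A toy sketch with made-up arrays (not the GMctdospp workflow itself):

```python
import numpy as np

# Hypothetical 3-D dose grid (mGy) from a CT simulation, plus a boolean
# organ mask derived from segmented images; all values are illustrative.
rng = np.random.default_rng(1)
dose = rng.uniform(10.0, 30.0, size=(8, 8, 8))   # mGy per voxel
organ_mask = np.zeros((8, 8, 8), dtype=bool)
organ_mask[2:5, 2:5, 2:5] = True                 # toy "organ" region

mean_organ_dose = dose[organ_mask].mean()

# Cumulative dose-volume histogram: fraction of organ receiving >= d.
d_axis = np.linspace(10.0, 30.0, 21)
dvh = [(dose[organ_mask] >= d).mean() for d in d_axis]

print(f"mean organ dose = {mean_organ_dose:.1f} mGy")
```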
Energy Technology Data Exchange (ETDEWEB)
Mangiarotti, A. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, Coimbra (Portugal); Departamento de Fisica, Faculdade de Ciencias e Tecnologia da Universidade de Coimbra, Coimbra (Portugal); Sona, P., E-mail: pietro.sona@fi.infn.it [Dipartimento di Fisica e Astronomia, Universita degli Studi di Firenze, Polo Scientifico, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); INFN, Sezione di Firenze (Italy); Ballestrero, S. [Department of Physics University of Johannesburg, Johannesburg (South Africa); CERN PH/ADT, Geneve (Switzerland); Uggerhoj, U.I.; Andersen, K.K. [Department of Physics and Astronomy, University of Aarhus, Aarhus (Denmark)
2012-10-15
Approximate analytical calculations of multi-photon effects in the spectrum of total radiated energy by high-energy electrons crossing thin targets are compared to the results of Monte Carlo type simulations. The limits of validity of the analytical expressions found in the literature are established. The separate contributions to spectral distortion of electromagnetic processes other than bremsstrahlung are also studied in detail.
DEFF Research Database (Denmark)
Mangiarotti, Alessio; Sona, Pietro; Ballestrero, Sergio
2012-01-01
Approximate analytical calculations of multi-photon effects in the spectrum of total radiated energy by high-energy electrons crossing thin targets are compared to the results of Monte Carlo type simulations. The limits of validity of the analytical expressions found in the literature are established.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Energy Technology Data Exchange (ETDEWEB)
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States)]; Nolen, S.D. [Texas A and M Univ., College Station, TX (United States); Los Alamos National Lab., NM (United States)]
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
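As a sketch of the generation-based k-eigenvalue estimation that codes like MC++ perform, the toy below iterates source cycles in a one-group infinite medium; the cross sections are invented for illustration and not from any real library:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy one-group, infinite-medium k-eigenvalue iteration: each cycle
# transports a batch of source neutrons, and the cycle k estimate is
# (fission neutrons produced) / (source neutrons started).
sigma_a, sigma_f, nu = 1.0, 0.35, 2.5  # absorption, fission, neutrons/fission
p_fission = sigma_f / sigma_a          # fission probability per absorption

def run_cycle(n_source):
    fissions = rng.random(n_source) < p_fission
    produced = rng.poisson(nu, size=int(fissions.sum())).sum()
    return produced / n_source

k_cycles = [run_cycle(20000) for _ in range(10)]
k_eff = float(np.mean(k_cycles))
print(f"k_eff ~ {k_eff:.3f}")  # analytic value: nu * sigma_f / sigma_a = 0.875
```

Real k-eigenvalue codes additionally resample the fission-site bank each cycle and discard initial cycles to let the source converge.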
Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.
2014-06-01
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. This work therefore focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, and (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that the Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Noticeable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially in the high neutron energy range, and for more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I
2014-06-07
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. This work therefore focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, and (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that the Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Noticeable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially in the high neutron energy range, and for more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
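The single-histogram reweighting that underlies the Ferrenberg-Swendsen multihistogram method can be demonstrated on a toy system with a known answer. The two-level model below is illustrative and unrelated to the Potts model studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Samples drawn at inverse temperature beta0 are reweighted by
# exp(-(beta - beta0) * E) to estimate canonical averages at a nearby
# beta.  Toy system: N independent two-level units (E = number of
# excited units), for which the exact mean energy is known in closed form.
N, beta0, beta = 50, 1.0, 1.2
p0 = np.exp(-beta0) / (1.0 + np.exp(-beta0))
E = rng.binomial(N, p0, size=200000).astype(float)  # canonical samples at beta0

w = np.exp(-(beta - beta0) * E)            # reweighting factors
E_reweighted = np.sum(w * E) / np.sum(w)   # <E> at beta from beta0 samples

E_exact = N * np.exp(-beta) / (1.0 + np.exp(-beta))
print(abs(E_reweighted - E_exact) < 0.1)
```

The multihistogram method combines samples from several temperatures with optimal weights; the single-temperature reweighting above is its one-run special case.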
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
Chang, Chia-Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-01
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
Comparison of polynomial approximations to speed up planewave-based quantum Monte Carlo calculations
Parker, William D; Alfè, Dario; Hennig, Richard G; Wilkins, John W
2013-01-01
The computational cost of quantum Monte Carlo (QMC) calculations of realistic periodic systems depends strongly on the method of storing and evaluating the many-particle wave function. Previous work [A. J. Williamson et al., Phys. Rev. Lett. 87, 246406 (2001); D. Alfè and M. J. Gillan, Phys. Rev. B 70, 161101 (2004)] has demonstrated the reduction of the O(N^3) cost of evaluating the Slater determinant with planewaves to O(N^2) using localized basis functions. We compare four polynomial approximations as basis functions: interpolating Lagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines, and basis-form (B-) splines (interpolating and smoothing). All these basis functions provide a similar speedup relative to the planewave basis. The pp-splines have eight times the memory requirement of the other methods. To test the accuracy of the basis functions, we apply them to the ground state structures of Si, Al, and MgO. The polynomial approximations differ in accuracy most strongly for MgO ...
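The idea of trading the O(M) planewave sum for an O(1) local polynomial evaluation can be sketched in one dimension; the function, grid size, and tolerance below are illustrative, not the actual orbitals of the paper:

```python
import numpy as np

# A 1-D "orbital" given as a sum of M planewaves is tabulated once on a
# uniform grid and then evaluated by local 4-point (cubic Lagrange)
# interpolation, so each evaluation costs O(1) instead of O(M).
M, n_grid = 50, 2048
k = np.arange(1, M + 1)
c = 1.0 / k**2                        # smooth, rapidly decaying coefficients
grid = np.linspace(0.0, 1.0, n_grid)
table = (c[:, None] * np.sin(2 * np.pi * np.outer(k, grid))).sum(axis=0)

def interp_cubic(x):
    """Cubic Lagrange interpolation on the uniform table."""
    h = 1.0 / (n_grid - 1)
    i = int(np.clip(int(x / h), 1, n_grid - 3))  # center the 4-point stencil
    t = x / h - i
    nodes = np.array([-1.0, 0.0, 1.0, 2.0])
    ys = table[i - 1:i + 3]
    val = 0.0
    for j in range(4):
        lj = 1.0
        for m in range(4):
            if m != j:
                lj *= (t - nodes[m]) / (nodes[j] - nodes[m])
        val += lj * ys[j]
    return val

x = 0.3141
direct = float((c * np.sin(2 * np.pi * k * x)).sum())  # O(M) planewave sum
print(abs(interp_cubic(x) - direct) < 1e-4)
```

B-splines achieve the same locality with smoother derivatives and a smaller memory footprint, which is why they are the common choice in production QMC codes.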
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.
Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel
2005-01-01
Recent work has used a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer in its use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop/s. When the whole system is employed on one problem, up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.
Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.
Edwards, Stephen; Schick, Daniel
2016-04-01
Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.
Institute of Scientific and Technical Information of China (English)
刘松芬; 胡北来
2003-01-01
The internal energy and pressure of dense hydrogen plasma are calculated by the direct path integral Monte Carlo approach. The Kelbg potential is used as the interaction potential both between electrons and between protons and electrons in the calculation. The complete formulae for internal energy and pressure in dense hydrogen plasma derived for the simulation are presented. The correctness of the derived formulae is validated by the obtained simulation results. The numerical results are discussed in detail.
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology as applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, H. B. Robinson benchmark and OECD/NEA Phase IVB are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established
Patient-specific Monte Carlo dose calculations for 103Pd breast brachytherapy
Miksys, N.; Cygler, J. E.; Caudrelier, J. M.; Thomson, R. M.
2016-04-01
This work retrospectively investigates patient-specific Monte Carlo (MC) dose calculations for 103Pd permanent implant breast brachytherapy, exploring various necessary assumptions for deriving virtual patient models: post-implant CT image metallic artifact reduction (MAR), tissue assignment schemes (TAS), and elemental tissue compositions. Three MAR methods (thresholding, 3D median filter, virtual sinogram) are applied to CT images; resulting images are compared to each other and to uncorrected images. Virtual patient models are then derived by application of different TAS ranging from TG-186 basic recommendations (mixed adipose and gland tissue at uniform literature-derived density) to detailed schemes (segmented adipose and gland with CT-derived densities). For detailed schemes, alternate mass density segmentation thresholds between adipose and gland are considered. Several literature-derived elemental compositions for adipose, gland and skin are compared. MC models derived from uncorrected CT images can yield large errors in dose calculations, especially when used with detailed TAS. Differences in MAR method result in large differences in local doses when variations in CT number cause differences in tissue assignment. Between different MAR models (same TAS), PTV D90 and skin D1cm3 each vary by up to 6%. Basic TAS (mixed adipose/gland tissue) generally yield higher dose metrics than detailed segmented schemes: PTV D90 and skin D1cm3 are higher by up to 13% and 9%, respectively. Employing alternate adipose, gland and skin elemental compositions can cause variations in PTV D90 of up to 11% and skin D1cm3 of up to 30%. Overall, AAPM TG-43 overestimates dose to the PTV (D90 on average 10% and up to 27%) and underestimates dose to the skin (D1cm3 on average 29% and up to 48%) compared to the various MC models derived using the post-MAR CT images studied
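A tissue assignment scheme of the detailed kind discussed above boils down to mapping CT numbers to mass densities and thresholding. The calibration line and the 1.00 g/cm3 adipose/gland threshold below are illustrative placeholders, not TG-186 values:

```python
import numpy as np

# Minimal tissue assignment sketch: voxels are mapped from CT numbers
# (HU) to mass density by a toy linear calibration, then labelled
# adipose or gland by a density threshold.  All numbers are assumptions.
hu = np.array([-120.0, -80.0, -30.0, 10.0, 40.0])  # sample breast voxels

density = 1.0 + hu / 1000.0     # toy HU -> g/cm^3 calibration
labels = np.where(density < 1.00, "adipose", "gland")

for h, d, t in zip(hu, density, labels):
    print(f"{h:6.0f} HU -> {d:.3f} g/cm^3 -> {t}")
```

The paper's observation that MAR choice changes local dose follows directly: a corrected-vs-uncorrected CT number that crosses such a threshold flips the voxel's tissue label and hence its cross sections.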
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence can be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Brons, S; Elsässer, T; Ferrari, A; Gadioli, E; Mairani, A; Parodi, K; Sala, P; Scholz, M; Sommerer, F
2010-01-01
Monte Carlo codes are rapidly spreading among hadron therapy community due to their sophisticated nuclear/electromagnetic models which allow an improved description of the complex mixed radiation field produced by nuclear reactions in therapeutic irradiation. In this contribution results obtained with the Monte Carlo code FLUKA are presented focusing on the production of secondary fragments in carbon ion interaction with water and on CT-based calculations of absorbed and biological effective dose for typical clinical situations. The results of the simulations are compared with the available experimental data and with the predictions of the GSI analytical treatment planning code TRiP.
Efficient wave-function matching approach for quantum transport calculations
DEFF Research Database (Denmark)
Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, Dan Erik;
2009-01-01
The wave-function matching (WFM) technique has recently been developed for the calculation of electronic transport in quantum two-probe systems. In terms of efficiency it is comparable to the widely used Green's function approach. The WFM formalism presented so far requires the evaluation of all ...
Energy Technology Data Exchange (ETDEWEB)
Li, JS; Fan, J; Ma, C-M [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: To improve the treatment efficiency and capabilities for full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in the Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to consider the effects of the source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. MLC leaf shape, leaf end design and leaf tilt for minimizing the interleaf leakage, and their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to consider the beam hardening effects. The results calculated with the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors were compared with the measured ones and the difference is within 1% for different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements using diode detectors and films. The dose difference is within 2% inside the field and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2 mm criteria for all the test cases. Conclusion: Implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.
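The 2%/2 mm gamma criterion quoted above can be evaluated with a short routine. The Gaussian profiles below are synthetic stand-ins for the measured and calculated dose curves, not data from this work:

```python
import numpy as np

# Minimal 1-D gamma-index evaluation (2% dose difference / 2 mm distance
# to agreement).  Profiles are illustrative Gaussians.
x = np.arange(0.0, 50.0, 1.0)                   # detector positions, mm
measured = np.exp(-((x - 25.0) / 10.0) ** 2)
calculated = np.exp(-((x - 25.3) / 10.0) ** 2)  # 0.3 mm systematic shift

dd = 0.02 * measured.max()   # 2% dose-difference criterion
dta = 2.0                    # 2 mm distance-to-agreement criterion

# Resample the calculated profile finely so the distance search is smooth.
x_fine = np.arange(0.0, 49.0 + 1e-9, 0.1)
calc_fine = np.interp(x_fine, x, calculated)

# Gamma at each measured point: minimum combined dose/distance metric
# over all evaluated positions.
gamma = np.array([
    np.sqrt((((calc_fine - dm) / dd) ** 2 + ((x_fine - xm) / dta) ** 2).min())
    for xm, dm in zip(x, measured)
])
passing_rate = float((gamma <= 1.0).mean())
print(passing_rate >= 0.95)
```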
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome
2016-05-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.
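The essence of pMC, reusing one set of baseline histories to estimate results for perturbed optical properties via likelihood-ratio weights, can be shown for the simplest possible case. The pure-absorption 1-D slab and all coefficients below are illustrative, far simpler than the polarization-sensitive scattering model of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Histories are generated once for a baseline absorption coefficient mu0;
# the transmission for a perturbed mu1 is recovered by reweighting each
# history with its likelihood ratio -- no re-simulation needed.
mu0, mu1, d = 1.0, 1.2, 2.0   # baseline mu, perturbed mu (1/cm), slab depth (cm)
s = rng.exponential(1.0 / mu0, size=500000)   # baseline free paths

transmitted = s > d
# Likelihood ratio of "survive the full slab" under mu1 vs mu0:
weight = np.exp(-(mu1 - mu0) * d)
T_pmc = transmitted.mean() * weight

T_exact = np.exp(-mu1 * d)    # Beer-Lambert answer for the perturbed slab
print(abs(T_pmc - T_exact) < 0.005)
```

With scattering included, the weight becomes a product of per-collision ratios of the perturbed and baseline interaction kernels along each stored path, which is what makes pMC efficient across a continuum of perturbations.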
Numerical Study of Light Transport in Apple Models Based on Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
Mohamed Lamine Askoura
2015-12-01
This paper reports on the quantification of light transport in apple models using Monte Carlo simulations. To this end, the apple was modeled as a two-layer spherical model including skin and flesh bulk tissues. The optical properties of both tissue types used to generate Monte Carlo data were collected from the literature and selected to cover a range of values related to three apple varieties. Two different imaging-tissue setups were simulated in order to show the role of the skin on steady-state backscattering images, spatially-resolved reflectance profiles, and assessment of flesh optical properties using an inverse nonlinear least-squares fitting algorithm. Simulation results suggest that apple skin cannot be ignored when a Visible/Near-Infrared (Vis/NIR) steady-state imaging setup is used for investigating quality attributes of apples. They also help to improve optical inspection techniques for horticultural products.
Attenuation properties of cement composites: Experimental measurements and Monte Carlo calculations
Florez Meza, Raul Fernando
Developing new cement based materials with excellent mechanical and attenuation properties is critically important for both medical and nuclear power industries. Concrete continues to be the primary choice of material for the shielding of gamma and neutron radiation in facilities such as nuclear reactors, nuclear waste repositories, spent nuclear fuel pools, heavy particle radiotherapy rooms, and particle accelerators, among others. The purpose of this research was to manufacture cement pastes modified with magnetite and samarium oxide and evaluate the feasibility of utilizing them for shielding of gamma and neutron radiation. Two different experiments were conducted to accomplish these goals. In the first, Portland cement pastes modified with different loadings of fine magnetite were fabricated and investigated for application in gamma radiation shielding. The experimental results were verified theoretically through XCOM and the Monte Carlo N-Particle (MCNP) transport code. Scanning electron microscopy and x-ray diffraction tests were used to investigate the microstructure of the samples. Mechanical characterization was also performed by compression testing. The results suggest that fine magnetite is a suitable aggregate for increasing the compressive and flexural strength of white Portland cement pastes; however, there is no improvement of the attenuation at intermediate energy (662 keV). For the second experiment, cement pastes with different concentrations of samarium oxide were fabricated and tested for shielding against thermal neutrons. MCNP simulations were used to validate the experimental work. The results show that samarium oxide increases the effective thermal cross section of Portland cement and has the potential to replace boron bearing compounds currently used in neutron shielding.
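The narrow-beam attenuation law behind the XCOM/MCNP comparisons above is simple to evaluate directly. The mass attenuation coefficient and density below are assumed order-of-magnitude values for a magnetite-loaded paste at 662 keV, not measurements from this work:

```python
import math

# Narrow-beam attenuation: I/I0 = exp(-mu_m * rho * t), where mu_m is the
# mass attenuation coefficient.  All numbers are illustrative assumptions.
mu_m = 0.077   # mass attenuation coefficient, cm^2/g (assumed, ~662 keV)
rho = 2.8      # density of magnetite-loaded paste, g/cm^3 (assumed)
t = 10.0       # shield thickness, cm

transmission = math.exp(-mu_m * rho * t)
hvl = math.log(2.0) / (mu_m * rho)   # half-value layer, cm

print(f"transmission = {transmission:.4f}, HVL = {hvl:.2f} cm")
```

Because mu_m at 662 keV is dominated by Compton scattering and nearly composition-independent, a denser aggregate raises the linear coefficient mu_m * rho, which is the mechanism the magnetite loading exploits.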
Energy Technology Data Exchange (ETDEWEB)
Benson, Chris; Joyce, Malcolm J.; Winsby, Andrew [Lancaster University, Engineering Department, Bailrigg, Lancaster LA1 4YR (United Kingdom); Silvie, Jon [BAE SYSTEMS, Barrow-in-Furness, LA14 1AF (United Kingdom)
2002-08-01
The response functions for two cosmic neutron detection systems have been calculated using Monte-Carlo computational methods. The detection systems that form the focus of this research are modified Leake detector designs in which a central thermal neutron detector is surrounded by a sphere of high-density polyethylene. In this arrangement, the surrounding polyethylene moderates the incident fast neutrons that are then detected by the central detector; in this case a {sup 3}He-filled proportional counter. In order to extend the response of these detector systems to cater for cosmic neutron environments, a shell of high-Z material has been included in each to promote (n, xn) reactions in the polyethylene moderator. We have used shells of lead and copper for this purpose to bring the high-energy component of the cosmic field, extending up to several GeV, within the capability of the detector systems. In particular, copper has been used in comparison with lead since the former is easier and safer to machine and handle. The overall diameter of the instruments studied in this work is 208 mm. Calculations of the neutron response have been performed with MCNP4C, for the thermal-20 MeV energy range, and with MCNPX 2.1.5/LA150N neutron libraries for the higher-energy cosmic region of the spectrum beyond 20 MeV. The results of these calculations are compared with experimental data that have been recorded with the instruments at the CERN Cosmic Reference Field Facility (CERF), Geneva, Switzerland. This comparison is discussed in respect of the likely applications of these detector systems to high-energy neutron field measurement on-board aircraft and in the vicinity of high-energy particle accelerators. The former application is gaining considerable research attention following the revised estimates of relative biological effectiveness of cosmic neutron fields and the related recommendation that aircrew be regarded as occupationally-exposed radiation workers, on behalf of the
Efficient calculation of dissipative quantum transport properties in semiconductor nanostructures
Energy Technology Data Exchange (ETDEWEB)
Greck, Peter
2012-11-26
We present a novel quantum transport method that follows the non-equilibrium Green's function (NEGF) framework but sidesteps any self-consistent calculation of lesser self-energies by replacing them with a quasi-equilibrium expression. We term this method the multi-scattering Buettiker-Probe (MSB) method. It generalizes the so-called Buettiker-Probe model but takes into account all relevant individual scattering mechanisms. It is orders of magnitude more efficient than a fully self-consistent non-equilibrium Green's function calculation for realistic devices, yet accurately reproduces the results of the latter method as well as experimental data. This method is fairly easy to implement and opens the path towards realistic three-dimensional quantum transport calculations. In this work, we review the fundamentals of the non-equilibrium Green's function formalism for quantum transport calculations. Then, we introduce our novel MSB method after briefly reviewing the original Buettiker-Probe model. Finally, we compare the results of the MSB method to NEGF calculations as well as to experimental data. In particular, we calculate quantum transport properties of quantum cascade lasers in the terahertz (THz) and the mid-infrared (MIR) spectral domain. With a device optimization algorithm based upon the MSB method, we propose a novel THz quantum cascade laser design. It uses a two-well period with alternating barrier heights and complete carrier thermalization for the majority of the carriers within each period. We predict THz laser operation for temperatures up to 250 K, implying a new temperature record.
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) from the dose bin grid, which has micrometer dimensions in the transverse direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Three-dimensional transport calculations for a PWR core; Calcul de coeur R.E.P. en transport 3D
Energy Technology Data Exchange (ETDEWEB)
Richebois, E
2000-07-01
The objective of this work is to define improved 3-D core calculation methods based on transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interfaces and boundaries and in regions of high heterogeneity (bundles with absorber rods). In order to apply transport theory, a new method for calculating reflector constants has been developed, since traditional methods were only suited to 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis work, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's power reactor Saint Laurent B1 with MOX loading. The advantages of a 3-D core transport calculation scheme over diffusion methods have been highlighted; there are a considerable number of significant effects and potential advantages to be gained in rod worth calculations, for instance. These preliminary results, obtained with one particular cycle, will have to be confirmed by more systematic analysis. Accidents like MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated and constitute challenging situations where anisotropy is high and/or flux gradients are steep. This method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)
Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2013-03-21
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate both theoretically and experimentally the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To the first order approximation, we theoretically demonstrated in a simplified model that the statistical fluctuation tends to overestimate γ-index values when existing in the reference dose distribution and underestimate γ-index values when existing in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with the increase of the statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when they are within the clinically relevant range and the gamma passing rate increases with the increase of the statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
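The γ-index test under study can be illustrated with a minimal 1-D global implementation (the function name, toy dose profiles, and the 3%/3 mm-style tolerances below are illustrative assumptions, not values from the paper):

```python
import math

def gamma_index_1d(ref, eva, dx, dose_tol, dist_tol):
    """Minimal 1-D global gamma-index: for each reference point, take the
    minimum combined dose-difference / distance-to-agreement metric over
    all evaluation points."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_eva in enumerate(eva):
            dist = (i - j) * dx
            g2 = (dist / dist_tol) ** 2 + ((d_eva - d_ref) / dose_tol) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# Identical noise-free distributions pass trivially: gamma = 0 everywhere.
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
print(gamma_index_1d(ref, ref, dx=1.0, dose_tol=0.03, dist_tol=3.0))
```

Adding random perturbations to `ref` or `eva` in such a sketch reproduces the qualitative asymmetry the paper describes: noise in the reference tends to raise γ values, while noise in the evaluation can lower them.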
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
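The multilevel Monte Carlo telescoping idea used above can be sketched on a toy problem (the integrand, the level definitions, and the sample counts are illustrative assumptions, not the paper's mixed multiscale finite element setting):

```python
import random

def level_estimator(l, u):
    """Toy level-l 'solver': midpoint-rule approximation of f(x)=x**2 on
    2**l cells, randomized by a common uniform shift u (the coupling)."""
    n = 2 ** l
    return sum(((k + u) / n) ** 2 for k in range(n)) / n

def mlmc(levels, samples_per_level):
    """Telescoping MLMC estimate: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    The same random input u is used for P_l and P_{l-1} on each level, so
    the correction terms have small variance and need few fine samples."""
    total = 0.0
    for l, n_samp in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n_samp):
            u = random.random()
            fine = level_estimator(l, u)
            coarse = level_estimator(l - 1, u) if l > 0 else 0.0
            acc += fine - coarse
        total += acc / n_samp
    return total

random.seed(0)
# Many cheap coarse samples, few expensive fine samples.
est = mlmc(levels=[0, 1, 2, 3], samples_per_level=[4000, 1000, 250, 60])
print(est)   # close to the exact value 1/3
```

In the paper's setting, the "levels" would instead be ensemble-level multiscale solves of different accuracy, but the variance-reduction mechanism is the same.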
Energy Technology Data Exchange (ETDEWEB)
Ono, Shizuca; Vieira, Wilson J.; Garcia, Roberto D.M. [Centro Tecnico Aeroespacial (CTA-IEAv), Sao Jose dos Campos, SP (Brazil). Inst. de Estudos Avancados
2000-07-01
In this work, the use of multigroup albedo coefficients in Monte Carlo calculations of particle reflection and transmission by ducts is investigated. The procedure consists in modifying the MCNP code so that an albedo matrix computed previously by deterministic methods or Monte Carlo is introduced into the program to describe particle reflection by a surface. This way it becomes possible to avoid considering particle transport in the duct wall explicitly, reducing the problem to one of transport in the duct interior only and significantly reducing its difficulty. The probability of particle reflection at the duct wall is given, for each group, as the sum of the albedo coefficients over the final groups. The calculation is started by sampling a source particle and simulating its reflection on the duct wall by sampling a group for the emerging particle. The particle weight is then reduced by the reflection probability. Next, a new direction and trajectory for the particle are selected. Numerical results obtained for the model are compared with results from a discrete ordinates code and results from Monte Carlo simulations that take particle transport in the wall into account. (author)
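The wall-reflection step described above can be sketched as follows (the 2-group albedo matrix values and the function names are hypothetical, not taken from the report):

```python
import random

# Hypothetical 2-group albedo matrix: ALBEDO[g_in][g_out] is the probability
# that a particle incident in group g_in re-emerges in group g_out.
ALBEDO = [[0.45, 0.15],
          [0.00, 0.55]]

def reflect(group, weight, rng):
    """One wall interaction: reduce the particle weight by the total
    reflection probability for the incident group (sum of the albedo
    coefficients over final groups), then sample the emerging group."""
    probs = ALBEDO[group]
    total = sum(probs)          # group-wise reflection probability
    weight *= total
    # Sample the outgoing group proportionally to the albedo coefficients.
    xi = rng.random() * total
    cum = 0.0
    for g_out, p in enumerate(probs):
        cum += p
        if xi <= cum:
            return g_out, weight
    return len(probs) - 1, weight

rng = random.Random(1)
g, w = reflect(group=0, weight=1.0, rng=rng)
print(g, w)   # weight reduced by the total reflection probability 0.60
```

A new direction and trajectory would then be sampled for the emerging particle, exactly as the abstract describes, without ever tracking it inside the wall material.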
Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport
Dureau, David; Poëtte, Gaël
2014-06-01
This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, using both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores on the petascale supercomputer Tera100.
Energy Technology Data Exchange (ETDEWEB)
Berggren, L.
1995-10-01
This report describes GRAMCS, a program that calculates the dose rates in a fallout area using Monte Carlo simulation. GRAMCS processes gamma radiation from a mixture of nuclides, where photons interact through the photoelectric effect, Thomson scattering, Compton scattering and pair production. The contaminated field may be vertically inhomogeneous, and the surrounding structure consists of different types of environment with horizontal limits. The detector can be placed at any height or inside a human body. Results are visualized in a graph of dose rate versus photon energy. Total dose rate with error interval and primary dose-rate percentage are also shown. Input parameters used for calculations and data describing the graph are written in separate files. 13 refs, 6 figs.
Direct method for calculating temperature-dependent transport properties
Liu, Yi; Yuan, Zhe; Wesselink, R. J. H.; Starikov, Anton A.; van Schilfgaarde, Mark; Kelly, Paul J.
2015-06-01
We show how temperature-induced disorder can be combined in a direct way with first-principles scattering theory to study diffusive transport in real materials. Excellent (good) agreement with experiment is found for the resistivity of Cu, Pd, Pt (and Fe) when lattice (and spin) disorder are calculated from first principles. For Fe, the agreement with experiment is limited by how well the magnetization (of itinerant ferromagnets) can be calculated as a function of temperature. By introducing a simple Debye-like model of spin disorder parameterized to reproduce the experimental magnetization, the temperature dependence of the average resistivity, the anisotropic magnetoresistance, and the spin polarization of a Ni80Fe20 alloy are calculated and found to be in good agreement with existing data. Extension of the method to complex, inhomogeneous materials as well as to the calculation of other finite-temperature physical properties within the adiabatic approximation is straightforward.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo
2016-03-01
Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy, compared with the pencil beam algorithm (PBA). However, since the FMC methods require long calculation time, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphic processing unit (GPU) boosts the calculation speed by 13 times for the treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of the complex dose distribution more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.
Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André
2012-04-01
This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport has been used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for (131)I, (132)I, (133)I, and (135)I content in the thyroid gland for six age groups of population: newborns; children aged 1 yr, 5 yr, 10 yr, 15 yr; and adults. A realistic scenario of direct thyroid measurements with an "extended" neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass, and statistical uncertainty of the Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to be from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. Positioning errors of the detector during measurements cause deviations mainly in one direction from the estimated calibration factors. Deviations of the device position from the proper geometry of measurements were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of (131)I thyroidal content and, consequently, thyroid dose estimates that are derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident.
Accounting for chemical kinetics in field scale transport calculations
Energy Technology Data Exchange (ETDEWEB)
Bryan, N.D. [Manchester Univ. (United Kingdom). Dept. of Chemistry
2005-04-01
The modelling of column experiments has shown that the humic acid mediated transport of metal ions is dominated by the non-exchangeable fraction. Metal ions enter this fraction via the exchangeable fraction, and may transfer back again. However, in both directions these chemical reactions are slow. Whether or not a kinetic description of these processes is required during transport calculations, or an assumption of local equilibrium will suffice, will depend upon the ratio of the reaction half-time to the residence time of species within the groundwater column. If the flow rate is sufficiently slow or the reaction sufficiently fast, then the assumption of local equilibrium is acceptable. Alternatively, if the reaction is sufficiently slow (or the flow rate fast), then the reaction may be 'decoupled', i.e. removed from the calculation. These distinctions are important, because calculations involving chemical kinetics are computationally very expensive, and should be avoided wherever possible. In addition, column experiments have shown that the sorption of humic substances and metal-humate complexes may be significant, and that these reactions may also be slow. In this work, a set of rules is presented that dictate when the local equilibrium and decoupled assumptions may be used. In addition, it is shown that in all cases to a first approximation, the behaviour of a kinetically controlled species, and in particular its final distribution against distance at the end of a calculation, depends only upon the ratio of the reaction first-order rate to the residence time, and hence, even in the region where the simplifications may not be used, the behaviour is predictable. In this way, it is possible to obtain an estimate of the migration of these species, without the need for a complex transport calculation. (orig.)
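The regime selection described above can be sketched as a simple classifier on the half-time/residence-time ratio (the threshold values below are illustrative placeholders; the actual criteria are the rules derived in the work):

```python
def transport_regime(half_time_s, residence_time_s, fast=0.01, slow=100.0):
    """Classify a reaction for a transport calculation by the ratio of its
    half-time to the groundwater residence time. The 'fast' and 'slow'
    thresholds are illustrative assumptions, not values from the text."""
    ratio = half_time_s / residence_time_s
    if ratio < fast:
        return "local equilibrium"   # reaction much faster than transport
    if ratio > slow:
        return "decoupled"           # reaction too slow to matter
    return "full kinetics"           # must solve the kinetic equations

# A 1-hour half-time against a ~1-year residence time: equilibrium holds.
print(transport_regime(half_time_s=3600.0, residence_time_s=3.15e7))
# A ~100-year half-time against a ~1-year residence time: decouple it.
print(transport_regime(half_time_s=3.2e9, residence_time_s=3.15e7))
```

Only reactions falling in the middle band would incur the expensive kinetic treatment, which is exactly the saving the rules are meant to deliver.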
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
Gu, Jianwei
There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and to accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology of validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically-realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest scan, abdomen-pelvis scan and CAP scan, respectively, using 120 kVp protocols. The dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month patient phantom and the 9 month patient phantom, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling the CT scanner, an additional MDCT scanner was modeled and validated by using the measured CTDI values. These results demonstrated that the
Song, Linze; Shi, Qiang
2015-05-07
We present a new non-perturbative method to calculate the charge carrier mobility using the imaginary time path integral approach, which is based on the Kubo formula for the conductivity, and a saddle point approximation to perform the analytic continuation. The new method is first tested using a benchmark calculation from the numerical exact hierarchical equations of motion method. Imaginary time path integral Monte Carlo simulations are then performed to explore the temperature dependence of charge carrier delocalization and mobility in organic molecular crystals (OMCs) within the Holstein and Holstein-Peierls models. The effects of nonlocal electron-phonon interaction on mobility in different charge transport regimes are also investigated.
Energy Technology Data Exchange (ETDEWEB)
Moriarty, K.J.M. (Royal Holloway Coll., Englefield Green (UK). Dept. of Mathematics); Blackshaw, J.E. (Floating Point Systems UK Ltd., Bracknell)
1983-04-01
The computer program calculates the average action per plaquette for SU(6)/Z/sub 6/ lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600.
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P{sub I} by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P{sub I} with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P{sub I}. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P{sub I} might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, 'Fiducial limits for the Poisson distribution', Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield the probability as a function of the mean {mu} and an actual value n): the lower and upper limits of the mean {mu} with the two-tailed probability 1-{alpha} are {mu}{sub L} = {chi}{sup 2}({alpha}/2; 2n)/2 and {mu}{sub U} = {chi}{sup 2}(1-{alpha}/2; 2n+2)/2. This formula depends on the LOC, through {alpha}, and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
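The one-sided upper limit discussed here can be computed without a chi-squared routine by solving the equivalent Poisson tail equation directly; the sketch below is a stdlib-only illustration (the function names and bisection bounds are assumptions), numerically equivalent to Garwood's form {chi}{sup 2}(1-{alpha}; 2n+2)/2:

```python
import math

def poisson_cdf(n, mu):
    """P(X <= n) for X ~ Poisson(mu), accumulated term by term."""
    term = total = math.exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def poisson_upper_limit(n, conf=0.95):
    """One-sided upper confidence limit on the Poisson mean given n observed
    hits: the mu solving P(X <= n; mu) = 1 - conf, found by bisection.
    Equivalent to Garwood's chi-squared expression chi2(conf; 2n+2)/2."""
    alpha = 1.0 - conf
    lo, hi = 0.0, 10.0 * (n + 5)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid) > alpha:
            lo = mid      # tail probability too high: true bound is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero hits at 95% confidence gives the familiar "rule of three" value.
print(round(poisson_upper_limit(0, 0.95), 3))
```

With zero observed hits the limit is -ln(0.05), roughly 3, which is why MC planetary-protection analyses must budget extra histories for the hits they merely might observe.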
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, Th.; Nimal, J.C.; Chiron, M
2001-07-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or having a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PC using the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated using the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85, IRDF/90 and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)
An accurate {delta}f method for neoclassical transport calculation
Energy Technology Data Exchange (ETDEWEB)
Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)
1999-03-01
A {delta}f method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed without using an assumed g in the weight equation for advancing particle weights, unlike previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of the {delta}f method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined for the finite-banana case as well as the zero-banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvement in both the like-particle collision scheme and the weighting scheme, the {delta}f simulation shows a significantly upgraded performance for neoclassical transport study. (author)
Directory of Open Access Journals (Sweden)
P Shokrani
2009-10-01
Introduction & Objective: Brachytherapy using I-125 radioactive seeds in removable episcleral plaques (EP) is often used in the treatment of ocular malignant melanoma. The radioactive seeds are fixed in a gold bowl-shaped plaque, which is sutured to the scleral surface over the base of the intraocular tumor, allowing localized radiation dose delivery to the tumor. Minimum target doses as high as 85 Gy are directed at the malignant tumor. The aim of this study was to develop a Monte Carlo simulation of an ocular plaque in order to calculate the resulting isodose distributions. Materials & Methods: The MCNP-4C Monte Carlo code was used to simulate an episcleral plaque treatment plan. A 20-mm Collaborative Ocular Melanoma Study (COMS) plaque with 3 I-125 seeds of model 6711 was used. The resulting dose distributions, including the central axis dose and off-axis dose profiles, were calculated in a water phantom of 12 mm radius. The calculated dose distributions were compared with the corresponding doses measured by Knuten et al., 2001. Results: The central axis dose calculations show a rapid dose fall-off, an important factor in selecting an appropriate eye plaque for managing tumors of known dimensions. The calculated off-axis dose profiles show decreased dose uniformity at distances close to the plaque; dose uniformity increased with distance from the plaque. Conclusion: Monte Carlo simulation of eye plaques is a useful tool in the design, development and treatment planning of ocular radioactive plaques.
Energy Technology Data Exchange (ETDEWEB)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Hydraulic calculation of gravity transportation pipeline system for backfill slurry
Institute of Scientific and Technical Information of China (English)
ZHANG Qin-li; HU Guan-yu; WANG Xin-min
2008-01-01
Taking the cemented coal gangue pipeline transportation system in Suncun Coal Mine, Xinwen Mining Group, Shandong Province, China, as an example, the hydraulic calculation approach and process for gravity pipeline transportation of backfill slurry were investigated. The results show that the backfill capacity of the system should be higher than 74.4 m{sup 3}/h according to the mining production and backfill times in the mine; the minimum velocity (critical velocity) and practical working velocity of the backfill slurry are 1.44 and 3.82 m/s, respectively. Various formulae give a maximum ratio of total pipeline length to vertical height (L/H ratio) of 5.4 for the backfill system, from which the reliability and capacity of the system can be evaluated.
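The critical (deposition) velocity quoted above is typically estimated with a Durand-type correlation. A minimal sketch follows; the Durand factor F{sub L} = 1.1 and the pipe data used in the example are assumed illustrative values, not the Suncun system's actual parameters.

```python
import math

def durand_critical_velocity(pipe_diameter_m, solids_sg, f_l=1.1, g=9.81):
    """Durand-type critical (deposition) velocity for a settling slurry:
    V_c = F_L * sqrt(2 g D (S - 1)), where D is the pipe diameter, S the
    specific gravity of the solids and F_L an empirical factor (the 1.1
    used here is an assumed, illustrative value)."""
    return f_l * math.sqrt(2.0 * g * pipe_diameter_m * (solids_sg - 1.0))
```

With an assumed 0.15 m pipe and solids specific gravity of 2.2 this gives roughly 2 m/s, the same order as the 1.44 m/s critical velocity reported above; the working velocity must stay above this value to keep the slurry from settling.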
Ward Identity Constraints on Ladder Kernels in Transport Coefficient Calculations
Energy Technology Data Exchange (ETDEWEB)
Gagnon, J.-S. [Department of Physics, McGill University, 3600 University Street, Montreal, H3A 2T8 (Canada); Jeon, S. [Department of Physics, McGill University, 3600 University Street, Montreal, H3A 2T8 (Canada)
2007-03-15
Using diagrammatic methods, we show how the Ward identity can be used to constrain the ladder kernel in transport coefficient calculations. More specifically, we use the Ward identity to determine the necessary diagrams that must be resummed (using the usual integral equation). Our main result is an equation relating the kernel of the integral equation to functional derivatives of the full (imaginary) self-energy; it is similar to what is obtained with 2PI effective action methods. However, since we use the Ward identity as our starting point, gauge invariance is preserved. Using power counting arguments, we also show which self-energies must be included in the resummation at leading order, including 2-to-2 scatterings and 1-to-2 collinear scatterings with the Landau-Pomeranchuk-Migdal (LPM) effect. In this study we restrict our discussion to the electrical conductivity and shear viscosity in QED, but our method can in principle be generalized to other transport coefficients and other theories.
Lee, Boram; Lee, Jungseok; Kang, Sangwon; Cho, Hyelim; Shin, Gwisoon; Lee, Jeong-Woo; Choi, Jonghak
2013-01-01
The objective of this study was to evaluate the patient effective dose and scattered dose from recently developed dental mobile equipment in Korea. MCNPX 2.6 (Los Alamos National Laboratory, USA) was used in a Monte Carlo simulation to calculate both the effective and scattered doses. The MCNPX model was constructed to match the equipment as used in practice, and the effective and scattered doses were calculated using the KTMAN-2 digital phantom. The effective dose was calculated as 906 μSv. The equivalent doses per organ, calculated via the MCNPX code, were 32 174 and 19 μSv in the salivary gland and oesophagus, respectively. The scattered dose on the tube side (22.5-32.6 μSv at 25 cm from the centre in the anterior and posterior planes) was measured to be 1.4-3 times higher than that on the detector side (10.5-16.0 μSv).
Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W
1986-12-01
A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
DEFF Research Database (Denmark)
Leth, Henriette Astrup; Madsen, Lars Bojer; Mølmer, Klaus
2010-01-01
Theoretical calculations on dissociative double ionization of H2 and D2 in short intense laser pulses using the Monte Carlo wave packet technique are presented for several different field intensities, wavelengths, and pulse durations. We find convincing agreement between theory and experimental results for the kinetic energy release spectra of the nuclei. Besides the correctly predicted spectra, the Monte Carlo wave packet method offers insight into the nuclear dynamics during the pulse and makes it possible to address the origin of different structures observed in the spectra. Three-photon resonances in the singly ionized molecule and charge-resonance-enhanced ionization are shown to be the main processes responsible for the observed nuclear energy distributions.
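The Monte Carlo wave-packet (quantum-jump) technique used above can be sketched for the simplest possible case, a single decaying level. This toy example is not the paper's molecular model; the decay rate, step size and trajectory count are arbitrary illustrative choices. It shows the essential mechanic: averaging stochastic jump trajectories reproduces the deterministic master-equation population.

```python
import random

def mcwf_excited_population(t, gamma=1.0, dt=0.001, ntraj=4000, seed=2):
    """Monte Carlo wave-function (quantum-jump) toy model of a two-level
    system decaying at rate gamma.  In each time step a trajectory jumps
    to the ground state with probability gamma*|c_e|^2*dt; because the
    renormalized no-jump state here remains |e>, |c_e|^2 = 1 until the
    jump occurs.  Averaging over trajectories recovers the
    master-equation result P_e(t) = exp(-gamma*t)."""
    rng = random.Random(seed)
    nsteps = int(round(t / dt))
    survived = 0
    for _ in range(ntraj):
        excited = True
        for _ in range(nsteps):
            if rng.random() < gamma * dt:   # stochastic quantum jump
                excited = False             # photon emitted -> ground state
                break
        survived += excited
    return survived / ntraj
```

In the full method each trajectory also evolves under a non-Hermitian effective Hamiltonian between jumps, which is what gives access to the nuclear dynamics discussed in the abstract.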
Directory of Open Access Journals (Sweden)
Kępisty Grzegorz
2015-09-01
In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations were performed with a simplified high temperature gas-cooled reactor (HTGR) system, with and without modeling of control rod withdrawal. Useful conclusions are formulated on the basis of the results.
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate the potential of correlated sampling Monte Carlo (CMC) simulation to improve calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post-lumpectomy breast PSB implant planned on a dedicated breast cone-beam CT screening exam. CMC tallies the dose difference, {Delta}D, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous-geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium, with particle weights altered to correct for the bias. The prostate case consisted of 78 Model-6711 {sup 125}I seeds; the breast case consisted of 87 Model-200 {sup 103}Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains relative to UMC were computed for all voxels, and mean gains were evaluated in regions that received minimum doses greater than 20%, 50%, and 90% of D{sub 90}, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels in the prostate and breast cases, respectively. For a 1 x 1 x 1 mm{sup 3} dose grid, efficiency gains were realized in all structures, with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, efficiency losses were confined to low-dose regions, while the largest gains were located where little difference exists between the homogeneous and heterogeneous dose distributions.
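The correlated-sampling principle described above, reusing the same random histories in the homogeneous and heterogeneous geometries so that their difference is estimated with strongly reduced variance, can be sketched in a toy one-dimensional attenuation problem. The coefficients and geometry below are invented for illustration and are not taken from PTRAN.

```python
import math
import random

def dose_difference(n, correlated=True, seed=0):
    """Toy correlated-sampling estimate of the dose difference between a
    'heterogeneous' and a 'homogeneous' medium, modelled here as two
    exponential attenuation kernels with invented coefficients.  With
    correlated=True the same sampled depth is reused in both geometries,
    so the per-history difference is small and smooth and the estimator
    of the mean difference has far lower variance than subtracting two
    independent Monte Carlo runs."""
    rng_hom = random.Random(seed)
    rng_het = rng_hom if correlated else random.Random(seed + 1)
    mu_hom, mu_het = 1.0, 1.1            # assumed attenuation coefficients
    total = 0.0
    for _ in range(n):
        x_hom = rng_hom.random()          # depth sampled, homogeneous run
        x_het = x_hom if correlated else rng_het.random()
        total += math.exp(-mu_het * x_het) - math.exp(-mu_hom * x_hom)
    return total / n
```

Because each pair of samples shares the same depth, the per-history difference nearly cancels, which is exactly why tallying {Delta}D directly converges so much faster than differencing two independent runs.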
Energy Technology Data Exchange (ETDEWEB)
Verde Velasco, J. M.; Garcia Repiso, S.; Martin rincon, C.; Ramos Pacho, J. A.; Delgado Aparicio, J. M.; Perez alvarez, M. E.; Saez Beltran, M.; Gomez Gonzalez, N.; Cons Perez, N.; Sena Espinel, E.
2013-07-01
The implementation of a Monte Carlo algorithm requires not only a careful series of steps but also the adjustment of various calculation parameters, which influence both the accuracy of the dose calculation and the time it requires; a compromise must therefore be reached that achieves acceptable calculation accuracy within an acceptable calculation time. In this paper we present our experience with this tuning. (Author)
Gomà, Carles; Andreo, Pedro; Sempau, Josep
2016-03-01
This work calculates beam quality correction factors (k{sub Q}) in monoenergetic proton beams using detailed Monte Carlo simulation of ionization chambers. It uses the Monte Carlo code PENH and the electronic stopping powers resulting from the adoption of two different sets of mean excitation energy values for water and graphite: (i) the currently recommended ICRU 37 and ICRU 49 values I{sub w} = 75 eV and I{sub g} = 78 eV, and (ii) the recently proposed I{sub w} = 78 eV and I{sub g} = 81.1 eV. Twelve different ionization chambers were studied. The k{sub Q} factors calculated using the two different sets of I-values were found to agree with each other within 1.6% or better. k{sub Q} factors calculated using the current ICRU I-values were found to agree within 2.3% or better with the k{sub Q} factors tabulated in IAEA TRS-398, and within 1% or better with experimental values published in the literature. k{sub Q} factors calculated using the new I-values were also found to agree within 1.1% or better with the experimental values. This work concludes that perturbation correction factors in proton beams, currently assumed to be equal to unity, are in fact significantly different from unity for some of the ionization chambers studied.
GPU-based Parallel Monte Carlo Simulation for Radiotherapy Dose Calculation
Institute of Scientific and Technical Information of China (English)
甘旸谷; 黄斐增
2012-01-01
Objective: Monte Carlo simulation is widely regarded as the most accurate dose calculation method in radiotherapy, but its efficiency still requires substantial improvement for routine clinical applications. Methods: This paper presents recent progress in GPU-based Monte Carlo dose calculation: the main stages of the photon dose-deposition calculation are parallelized in CUDA on the GPU (Graphics Processing Unit) while the particle transport physics of the original Monte Carlo simulation code is maintained, so the same level of simulation accuracy is obtained at greatly increased speed. Results: On an NVIDIA GTX460 (1 GB DDR5, 336 processor cores) paired with an INTEL i5 2300, the speed-up factor over the CPU reached 116.6 for one million photon histories and 127.5 for ten million. Conclusions: Running the Monte Carlo simulation in CUDA on a GPU is an effective way to greatly improve the efficiency of radiotherapy dose calculation.
Neutron and photon transport calculations in fusion system. 2
Energy Technology Data Exchange (ETDEWEB)
Sato, Satoshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
1998-03-01
MCNP has been applied to neutron and {gamma}-ray transport calculations for fusion reactor systems over a wide range of design calculations in the engineering design activities for the International Thermonuclear Experimental Reactor (ITER), being developed jointly by Japan, the USA, the EU and Russia. Shielding calculations for fusion reactors address both the dose equivalent rate for biological shielding and the nuclear responses governing the soundness of in-core structures. When detailed analysis of complicated three-dimensional shapes is required, the assessment has been carried out with MCNP; likewise, MCNP has been used when the nuclear response of peripheral equipment due to gap streaming between blanket modules must be evaluated with good accuracy. The shielding analyses for the blanket modules and the NBI port are explained, and examples of the analysis results are shown. The blanket modules contain penetrating holes and continuous gaps, and no shielding plug can be installed in the NBI port; these features necessitate high-accuracy MCNP analysis. (K.I.)
Energy Technology Data Exchange (ETDEWEB)
Mariotti, F., E-mail: francesca.mariotti@bologna.enea.i [ENEA-BAS-ION IRP Radiation Protection Institute, Via dei Colli 16, 40136, Bologna (Italy); Gualdrini, G. [ENEA-BAS-ION IRP Radiation Protection Institute, Via dei Colli 16, 40136, Bologna (Italy)
2011-04-15
The ORAMED (Optimization of RAdiation protection for MEDical staff) Working Task 4 (WP4) addresses the evaluation of extremity doses (and dose distributions across the hands) of medical staff working in nuclear medicine departments, the study of the influence of protective devices such as syringe and vial shields, the improvement of such devices where possible, and the proposal of 'levels of reference doses' for each standard nuclear medicine procedure. In particular, task 4 concerns extremity dosimetry for the hands of operators during the preparation and administration of, for example, {sup 99m}Tc, {sup 18}F and {sup 90}Y (Zevalin) radionuclides. The aim of this report is to study photon-electron equilibrium conditions at 0.07 mm depth in the skin, in order to justify a simplified 'kerma approximation' approach in the planned complex Monte Carlo voxel hand modeling. Furthermore, a detailed investigation of primary electron and secondary bremsstrahlung photon transport from {sup 90}Y was performed to speed up the calculations. The results obtained under the simplified conditions investigated could help the production calculations by introducing, if necessary, suitable correction factors applicable to the results for the complex configuration.
Energy Technology Data Exchange (ETDEWEB)
Tittelbach, S. [Wissenschaftlich-Technische Ingenieurberatung GmbH (WTI), Juelich (Germany); Biedermann, R. [GNS Gesellschaft fuer Nuklear-Service mbH, Essen (Germany); Schmidt-Wohlfarth, Y.; Louia, A. [EnBW Kernkraft GmbH, Philippsburg (Germany)
2011-07-01
The transport rack for the internal transfer of loaded CASTOR {sup registered} casks prior to emplacement in the intermediate storage facility at the site of the NPP Philippsburg is exposed to neutron irradiation from the cask inventory. Using the Monte Carlo code MCNP, the activation rates of the transport rack materials were calculated for typical residence times of the casks in the rack. The long-term activation was also calculated for continuous use of the transport rack over 10 years. Further topics were the dose rate in the near surroundings of the transport rack after long-term activation and, finally, the disposability of rack components according to the legal regulations. The maximum contact dose rate after 10 years of use was calculated to be below 1 μSv/h. The transport rack can be disposed of with large safety margins to the radiation protection limits.
Monte Carlo Simulations of Charge Transport in 2D Organic Photovoltaics.
Gagorik, Adam G; Mohin, Jacob W; Kowalewski, Tomasz; Hutchison, Geoffrey R
2013-01-01
The effect of morphology on charge transport in organic photovoltaics is assessed using Monte Carlo simulations. In isotropic two-phase morphologies, increasing the domain size from 6.3 to 18.3 nm improves the fill factor by 11.6%, a result of decreased tortuosity and relaxation of Coulombic barriers. Additionally, when small aggregates of electron acceptors are interdispersed into the electron donor phase, charged defects form in the system, reducing fill factors by 23.3% on average compared with systems without aggregates. In contrast, systems with idealized connectivity show a 3.31% decrease in fill factor when the domain size is increased from 4 to 64 nm; we attribute this to a decreased rate of exciton separation at donor-acceptor interfaces. Finally, we note that the presence of Coulomb interactions increases device performance as devices become smaller. The results suggest that for the commonly found isotropic morphologies, the Coulomb interactions between charge carriers dominate exciton separation effects.
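The tortuosity effect described above can be mimicked with a minimal lattice Monte Carlo: a field-biased hopping walk across a two-phase film in which sites of the opposite phase reject hops. Everything here (grid size, hop bias, the single nearly closed wall standing in for a morphology) is an illustrative assumption, not the authors' model.

```python
import random

def mean_transit_hops(width, length, blocked, ntrials=300, seed=3):
    """Toy lattice Monte Carlo of a charge carrier crossing a two-phase
    film: a field-biased random walk from x=0 to x=length on a
    width-by-length grid (periodic in y); hops into `blocked` sites (the
    opposite phase) are rejected.  The mean hop count needed to cross is
    a crude proxy for tortuosity."""
    rng = random.Random(seed)
    # right-biased hop set mimicking the applied field (net drift 1/3)
    hops = [(1, 0), (1, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(ntrials):
        x, y, steps = 0, width // 2, 0
        while x < length:
            dx, dy = rng.choice(hops)
            nx, ny = x + dx, (y + dy) % width
            if (nx, ny) not in blocked:
                x, y = nx, ny
            steps += 1
        total += steps
    return total / ntrials
```

Adding a nearly closed wall of opposite-phase sites forces detours, and the resulting higher tortuosity shows up directly as a larger mean transit hop count.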
Monte Carlo Simulations of Spin Transport in Nanoscale InGaAs Field Effect Transistors
Thorpe, B; Langbein, F; Schirmer, S
2016-01-01
By augmenting an in-house developed, experimentally verified Monte Carlo device simulator with a Bloch equation model with a spin-orbit interaction Hamiltonian accounting for Dresselhaus and Rashba couplings, we simulate electron spin transport in a 25 nm gate length InGaAs MOSFET. We observe non-uniform decay of the net magnetization between the source and gate electrodes and an interesting magnetization recovery effect due to spin refocusing induced by the high electric field between the gate and drain electrodes. We demonstrate coherent control of the polarization vector of the drain current via the source-drain and gate voltages, and show that the magnetization of the drain current is sensitive to strain in the channel, suggesting that the device could act as a room-temperature nanoscale strain sensor.
GPU-based high performance Monte Carlo simulation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br
2009-07-01
Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application was extended to fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single- and multi-core) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
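The decomposition that gives the GPU its advantage, independent neutron histories distributed across many workers, can be sketched on the CPU with the standard library. The toy problem below (a pencil beam through a purely absorbing slab, with invented parameters) is not the paper's benchmark; it is chosen because it has the analytic answer exp(-{Sigma}{sub t}T) for checking.

```python
import random
from multiprocessing import Pool

def transmitted(args):
    """One worker's share of histories: neutrons in a 1D pencil beam cross
    a purely absorbing slab if their sampled exponential free path exceeds
    the slab thickness (toy problem; exact answer is exp(-sigma_t*T))."""
    n, sigma_t, thickness, seed = args
    rng = random.Random(seed)
    return sum(rng.expovariate(sigma_t) > thickness for _ in range(n))

def parallel_transmission(n=100_000, sigma_t=2.0, thickness=1.0, workers=4):
    """Split the histories across worker processes with independent seeds,
    the same history-level decomposition a GPU applies per thread."""
    chunk = n // workers
    jobs = [(chunk, sigma_t, thickness, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        counts = pool.map(transmitted, jobs)
    return sum(counts) / (chunk * workers)
```

Histories are independent, so the only coordination needed is summing the per-worker tallies at the end, which is why history-parallel Monte Carlo transport scales so well on many-core hardware.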
Space applications of the MITS electron-photon Monte Carlo transport code system
Energy Technology Data Exchange (ETDEWEB)
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; James, Scott C
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
Monte Carlo simulation of phonon transport in variable cross-section nanowires
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
A dedicated Monte Carlo (MC) model is proposed to investigate the mechanism of phonon transport in variable cross-section silicon nanowires (NWs). Emphasis is placed on understanding the thermal rectification effect and thermal conduction in tapered cross-section and incremental cross-section NWs. In the simulations, both equal and unequal heat input conditions are discussed. Under the latter condition, the tapered cross-section NW has a more prominent thermal rectification effect. Additionally, the heat conduction capacity of the tapered cross-section NW is always higher than that of the incremental one. Two factors may account for these behaviors: their different boundary conditions and their different volume distributions. Although boundary scattering plays an important role in nanoscale structures, the results suggest that its influence on heat conduction is less pronounced than that of the volume distribution in NWs with variable cross-sections.
A Monte Carlo transport code study of the space radiation environment using FLUKA and ROOT
Wilson, T; Carminati, F; Brun, R; Ferrari, A; Sala, P; Empl, A; MacGibbon, J
2001-01-01
We report on the progress of a current study aimed at developing a state-of-the-art Monte-Carlo computer simulation of the space radiation environment using advanced computer software techniques recently available at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. By taking the next-generation computer software appearing at CERN and adapting it to known problems in the implementation of space exploration strategies, this research is identifying changes necessary to bring these two advanced technologies together. The radiation transport tool being developed is tailored to the problem of taking measured space radiation fluxes impinging on the geometry of any particular spacecraft or planetary habitat and simulating the evolution of that flux through an accurate model of the spacecraft material. The simulation uses the latest known results in low-energy and high-energy physics. The output is a prediction of the detailed nature of the radiation environment experienced in space as well a...
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Hand calculations for transport of radioactive aerosols through sampling systems.
Hogue, Mark; Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis
2014-05-01
Workplace air monitoring programs for sampling radioactive aerosols in nuclear facilities sometimes must rely on sampling systems to move the air to a sample filter in a safe and convenient location. These systems may consist of probes, straight tubing, bends, contractions and other components. Evaluating these systems for potential loss of radioactive aerosols is important because significant losses can occur. However, it can be very difficult to find fully described equations to model a system manually for a single particle size, and even more difficult to evaluate total system efficiency for a polydispersed particle distribution. Some software methods are available, but they may not be directly applicable to the components being evaluated and may not be completely documented or validated per current software quality assurance requirements. This paper offers a method of modeling radioactive aerosol transport in sampling systems that is transparent, easily verified, and easily updated with the most applicable models. Calculations are shown in the R programming language, but the method is adaptable to other scripting languages. This paper shows how a set of equations from published aerosol science models may be applied to the aspiration and transport efficiency of aerosols in common air sampling system components. An example application using R calculation scripts is demonstrated, and the R scripts are provided as electronic attachments.
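As a flavor of the kind of component model the paper assembles (shown there in R, sketched here in Python), the snippet below combines a Stokes terminal settling velocity with the standard well-mixed approximation for gravitational deposition in a horizontal tube. The particle and tube parameters in the usage example are illustrative assumptions, not values from the paper.

```python
import math

def settling_velocity(dp_m, rho_p=1000.0, mu=1.81e-5, g=9.81, slip=1.0):
    """Stokes terminal settling velocity (m/s) of a spherical particle of
    diameter dp_m and density rho_p in air of viscosity mu; `slip` is the
    Cunningham slip correction (1.0 ignores slip, adequate above ~1 um)."""
    return rho_p * dp_m ** 2 * g * slip / (18.0 * mu)

def gravitational_penetration(dp_m, tube_d_m, tube_len_m, flow_m3_s):
    """Fraction of particles surviving gravitational settling in a
    horizontal tube of diameter tube_d_m and length tube_len_m carrying
    volumetric flow flow_m3_s, using the well-mixed (turbulent-flow)
    approximation P = exp(-v_ts * d * L / Q)."""
    v_ts = settling_velocity(dp_m)
    return math.exp(-v_ts * tube_d_m * tube_len_m / flow_m3_s)
```

For a 5 μm unit-density particle in an assumed 25 mm tube carrying about 28 L/min over 5 m, roughly 80% of the aerosol survives this one mechanism; a full system evaluation multiplies such penetrations across every component and integrates over the particle size distribution.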
Using the Chebychev expansion in quantum transport calculations
Energy Technology Data Exchange (ETDEWEB)
Popescu, Bogdan; Rahman, Hasan; Kleinekathöfer, Ulrich, E-mail: u.kleinekathoefer@jacobs-university.de [Department of Physics and Earth Sciences, Jacobs University Bremen, Campus Ring 1, 28759 Bremen (Germany)
2015-04-21
Irradiation by laser pulses and a fluctuating surrounding liquid environment can, for example, lead to time-dependent effects in the transport through molecular junctions. From the theoretical point of view, time-dependent theories of quantum transport are still challenging. In one of these existing transport theories, the energy-dependent coupling between molecule and leads is decomposed into Lorentzian functions. This trick has successfully been combined with quantum master approaches, hierarchical formalisms, and non-equilibrium Green’s functions. The drawback of this approach is, however, its serious limitation to certain forms of the molecule-lead coupling and to higher temperatures. Tian and Chen [J. Chem. Phys. 137, 204114 (2012)] recently employed a Chebychev expansion to circumvent some of these latter problems. Here, we report on a similar approach also based on the Chebychev expansion but leading to a different set of coupled differential equations using the fact that a derivative of a zeroth-order Bessel function can again be given in terms of Bessel functions. Test calculations show the excellent numerical accuracy and stability of the presented formalism. The time span for which this Chebychev expansion scheme is valid without any restrictions on the form of the spectral density or temperature can be determined a priori.
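The Bessel-function fact the scheme rests on, that the derivative of a zeroth-order Bessel function is again a Bessel function (J{sub 0}'(x) = -J{sub 1}(x)), which is what closes the coupled set of differential equations for the Chebychev expansion coefficients, is easy to verify numerically from the integral representation. This is a self-contained check, not code from the paper.

```python
import math

def bessel_j(n, x, m=4000):
    """Bessel function of the first kind from its integral representation
    J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt  (midpoint rule)."""
    h = math.pi / m
    return h / math.pi * sum(
        math.cos(n * ((k + 0.5) * h) - x * math.sin((k + 0.5) * h))
        for k in range(m))

def dj0(x, eps=1e-5):
    """Central-difference numerical derivative of J_0."""
    return (bessel_j(0, x + eps) - bessel_j(0, x - eps)) / (2.0 * eps)
```

In Chebychev propagation schemes the expansion coefficients of the time-evolution operator are themselves Bessel functions of the elapsed time, so this closure is what turns differentiation in time into algebra on the coefficient set.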
Applying Advanced Neutron Transport Calculations for Improving Fuel Performance Codes
Energy Technology Data Exchange (ETDEWEB)
Botazzoli, P.; Luzzi, L. [Politecnico di Milano, Department of Energy, Nuclear Engineering Division - CeSNEF, Milano (Italy); Schubert, A.; Van Uffelen, P. [European Commission, Joint Research Centre, Institute for Transuranium Elements, Karlsruhe (Germany); Haeck, W. [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France)
2009-06-15
TRANSURANUS is a computer code for the thermal and mechanical analysis of fuel rods in nuclear reactors. As part of the code, the TUBRNP model calculates the local concentration of the actinides (U, Pu, Am, Cm), the main fission products (Xe, Kr, Cs and Nd) and {sup 4}He produced during the irradiation as a function of the radial position across a fuel pellet (radial profiles). These local quantities are required for the determination of the local power density, the local burn-up, and the source term of fission products and other inert gases. In previous works the neutronic code ALEPH has been used to validate the models for the actinides and fission products concentrations in UO{sub 2} fuels. A similar approach has been adopted in the present work for verifying the Helium production. The present paper focuses on the modelling of the Helium production in PWR oxide fuels (MOX and UO{sub 2}). A reliable prediction of the Helium production and release in LWR oxide fuels is of great interest in case of increasing burn-up, linear heat generation rates and Plutonium content. The contribution of the Helium released plays a fundamental role in the gap pressure and subsequently in the mechanical behaviour of the fuel rod, in particular during the storage of the high burn-up spent fuel. Helium is produced in oxide fuels by three main paths: (i) alpha decay of the actinides (the main contribution is due to {sup 242}Cm, {sup 238}Pu and {sup 244}Cm); (ii) (n,{alpha}) reactions; and (iii) ternary fission. In the present work, the contributions due to ternary fission and the (n,{alpha}) reaction on {sup 16}O as well as some refinements in the {sup 241}Am burn-up chain have been included in TUBRNP. The VESTA neutronic code has been used for the validation of the He production model. The generic VESTA Monte Carlo depletion interface developed at IRSN allows us to couple different Monte Carlo codes with a depletion module. It currently allows for combining the ORIGEN 2.2 isotope
Toulouse, Julien; Reinhardt, Peter; Hoggan, Philip E; Umrigar, C J
2010-01-01
We report state-of-the-art quantum Monte Carlo calculations of the singlet $n \to \pi^*$ (CO) vertical excitation energy in the acrolein molecule, extending the recent study of Bouabça et al. [J. Chem. Phys. 130, 114107 (2009)]. We investigate the effect of using a Slater basis set instead of a Gaussian basis set, and of using state-average versus state-specific complete-active-space (CAS) wave functions, with or without reoptimization of the coefficients of the configuration state functions (CSFs) and of the orbitals in variational Monte Carlo (VMC). It is found that, with the Slater basis set used here, both state-average and state-specific CAS(6,5) wave functions give an accurate excitation energy in diffusion Monte Carlo (DMC), with or without reoptimization of the CSF and orbital coefficients in the presence of the Jastrow factor. In contrast, the CAS(2,2) wave functions require reoptimization of the CSF and orbital coefficients to give a good DMC excitation energy. Our best estimates of ...
TARTNP: a coupled neutron-photon Monte Carlo transport code. [10/sup -9/ to 20 MeV; in LLL FORTRAN]
Energy Technology Data Exchange (ETDEWEB)
Plechaty, E.F.; Kimlinger, J.R.
1976-07-04
A Monte Carlo code was written that calculates the transport of neutrons, photons, and neutron-induced photons. The cross sections of these particles are derived from TARTNP's data base, the Evaluated Nuclear Data Library. The energy range of the neutron data in the Library is 10/sup -9/ MeV to 20 MeV; the photon energy range is 1 keV to 20 MeV. One of the chief advantages of the code is its flexibility: it allows up to 17 different kinds of output to be evaluated in the same problem.
A general method to derive tissue parameters for Monte Carlo dose calculation with multi-energy CT.
Lalonde, Arthur; Bouchard, Hugo
2016-11-21
To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and quantities relevant to radiotherapy Monte Carlo dose calculation. Ideal materials to describe human tissue are obtained applying principal component analysis on elemental weight and density data available in literature. The theory is adapted to elemental composition for solving tissue information from CT data. A novel stoichiometric calibration method is integrated to the technique to make it suitable for a clinical environment. The performance of the method is compared with two techniques known in literature using theoretical CT data. In determining elemental weights with dual-energy CT, the method is shown to be systematically superior to the water-lipid-protein material decomposition and comparable to the parameterization technique. In determining proton stopping powers and energy absorption coefficients with dual-energy CT, the method generally shows better accuracy and unbiased results. The generality of the method is demonstrated simulating multi-energy CT data to show the potential to extract more information with multiple energies. The method proposed in this paper shows good performance to determine elemental compositions from dual-energy CT data and physical quantities relevant to radiotherapy dose calculation. The method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
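The principal-component idea can be sketched in a few lines: stack reference compositions into a matrix, apply SVD to the mean-centered data, and reconstruct each tissue from a small number of components. The elemental weight fractions below are rough illustrative numbers, not the literature data used in the paper.

```python
import numpy as np

# Hypothetical elemental weight fractions (H, C, N, O) for four reference tissues
W = np.array([
    [0.102, 0.143, 0.034, 0.710],  # muscle-like
    [0.112, 0.000, 0.000, 0.888],  # water
    [0.114, 0.598, 0.007, 0.278],  # adipose-like
    [0.064, 0.278, 0.027, 0.410],  # bone-like (remaining elements lumped here)
])

# Principal component analysis via SVD of the mean-centered data
mean = W.mean(axis=0)
U, s, Vt = np.linalg.svd(W - mean, full_matrices=False)

# Reconstruct every tissue from the first k principal "eigen-tissues"
k = 2
W_approx = mean + (U[:, :k] * s[:k]) @ Vt[:k]
rms = np.sqrt(np.mean((W - W_approx) ** 2))
```

In the paper's setting, the CT data constrain the few principal-component amplitudes, from which full elemental compositions (and hence stopping powers or absorption coefficients) are recovered.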
Monte Carlo calculations of the energy deposited in biological samples and shielding materials
Akar Tarim, U.; Gurler, O.; Ozmutlu, E. N.; Yalcin, S.
2014-03-01
The energy deposited by gamma radiation from the Cs-137 isotope into body tissues (bone and muscle), tissue-like medium (water), and radiation shielding materials (concrete, lead, and water), which is of interest for radiation dosimetry, was obtained using a simple Monte Carlo algorithm. The algorithm also provides a realistic picture of the distribution of backscattered photons from the target and the distribution of photons scattered forward after several scatterings in the scatterer, which is useful in studying radiation shielding. The method presented in this work constitutes an attempt to evaluate the amount of energy absorbed by body tissues and shielding materials.
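A minimal 1-D analogue of such a photon-transport algorithm can be written in a few lines: sample exponential free paths, and at each interaction either absorb the photon or crudely scatter it, tallying absorbed, transmitted, and backscattered histories. The cross-section parameters are invented for illustration and bear no relation to Cs-137 in the materials studied.

```python
import math
import random

random.seed(1)

mu = 0.2          # total attenuation coefficient, 1/cm (illustrative)
p_abs = 0.3       # probability an interaction absorbs the photon (illustrative)
thickness = 5.0   # slab thickness, cm
n_photons = 20000

absorbed = transmitted = backscattered = 0
for _ in range(n_photons):
    x, direction = 0.0, 1.0
    while True:
        # sample the free path from the exponential attenuation law
        x += direction * (-math.log(1.0 - random.random()) / mu)
        if x >= thickness:
            transmitted += 1
            break
        if x <= 0.0:
            backscattered += 1
            break
        if random.random() < p_abs:
            absorbed += 1
            break
        direction = random.choice([1.0, -1.0])  # crude isotropic 1-D scatter

frac_absorbed = absorbed / n_photons
```

A real calculation would additionally track photon energy, use energy-dependent cross sections, and sample 3-D scattering angles, but the tally structure is the same.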
Assaraf, Roland; Domin, Dominik
2014-03-01
We study the efficiency of quantum Monte Carlo (QMC) methods in computing space localized ground state properties (properties which do not depend on distant degrees of freedom) as a function of the system size N. We prove that for the commonly used correlated sampling with reweighting method, the statistical fluctuations σ2(N) do not obey the locality property. σ2(N) grow at least linearly with N and with a slope that is related to the fluctuations of the reweighting factors. We provide numerical illustrations of these tendencies in the form of QMC calculations on linear chains of hydrogen atoms.
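The growth of reweighting fluctuations with system size can be demonstrated numerically: model the reweighting factor as a product of N independent per-site factors close to one and watch the variance of the normalized weights grow with N. The sketch below is a generic illustration of this mechanism, not the QMC estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_variance(N, n_samples=20000, eps=0.05):
    """Variance of normalized reweighting factors built from N independent
    per-site log-factors of spread eps (illustrative model)."""
    logw = rng.normal(0.0, eps, size=(n_samples, N)).sum(axis=1)
    w = np.exp(logw - logw.max())  # shift for numerical stability
    w /= w.mean()                  # normalize to unit mean
    return w.var()

# variance grows (here roughly linearly, ~ N * eps**2) with system size
v = [weight_variance(N) for N in (1, 10, 100)]
```

For this lognormal toy model the exact normalized variance is exp(N eps^2) - 1, i.e. approximately N eps^2 for small arguments, mirroring the at-least-linear growth proved in the paper.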
Energy Technology Data Exchange (ETDEWEB)
Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)
2016-11-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
Calculated characteristics of subcritical assembly with anisotropic transport of neutrons
Energy Technology Data Exchange (ETDEWEB)
Gorin, N.V.; Lipilina, E.N.; Lyutov, V.D.; Saukov, A.I. [Zababakhin Russian Federal Nuclear Center - All-Russian Scientific Researching Institute of Technical Physics (Russian Federation)
2003-07-01
The possibility of creating a sufficiently sub-critical system that multiplies the neutron fluence from a primary source by many orders of magnitude was considered. For assemblies with strong neutron coupling between their parts this is impossible, so a construction was developed consisting of many units (cascades) with weak feedback to the preceding cascades. The feedback attenuation was obtained by placing layers of slow-neutron absorbers and moderators between the cascades of fissile material, exploiting the anisotropy of fast-neutron transport through the layers. The system consists of many identical cascades aligned one after another, each comprising layers of moderator, fissile material, and slow-neutron absorber. The calculations were carried out with the code MCNP.4a and the nuclear data library ENDF/B5. In this construction, neutrons spread predominantly in one direction, multiplying in each successive fissile layer, while being strongly attenuated in the opposite direction. In the calculated construction, the multiplication factor of one cascade is about 1.5, and the multiplication factor of a whole construction composed of n cascades is 1.5{sup n}. The calculated keff value is 0.9 for one cascade and does not exceed 0.98 for a system containing any number of cascades; the assembly is therefore always sub-critical and thus safe with respect to criticality. The use of such a sub-critical assembly to create a powerful neutron fluence for boron neutron capture therapy was considered, and the merits and demerits of the system were discussed. (authors)
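The compounding quoted in the abstract is simple to verify: with a per-cascade multiplication of about 1.5, the forward gain of n cascades in series is 1.5{sup n}, growing by orders of magnitude while each cascade stays well sub-critical.

```python
def total_multiplication(n, m=1.5):
    """Forward neutron multiplication of n identical cascades in series: m**n.
    m = 1.5 is the per-cascade value quoted in the abstract."""
    return m ** n

gains = {n: total_multiplication(n) for n in (1, 5, 10, 20)}
```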
First-principles calculations of mass transport in magnesium borohydride
Yu, Chao; Ozolins, Vidvuds
2013-03-01
Mg(BH4)2 is a hydrogen storage material which can decompose to release hydrogen in the following reaction: Mg(BH4)2(solid) --> 1/6 MgB12H12(solid) + 5/6 MgH2(solid) + 13/6 H2(gas) --> MgH2(solid) + 2B(solid) + 4H2(gas). However, experiments show that hydrogen release only occurs at temperatures above 300 °C, which severely limits applications in mobile storage. Using density-functional theory calculations, we systematically study bulk diffusion of defects in the reactant Mg(BH4)2 and the products MgB12H12 and MgH2 during the first step of the solid-state dehydrogenation reaction. The defect concentrations and concentration gradients are calculated for a variety of defects, including charged vacancies and interstitials. We find that neutral [BH3] vacancies have the highest bulk concentration and concentration gradient in Mg(BH4)2. The diffusion mechanism of the [BH3] vacancy in Mg(BH4)2 is studied using the nudged elastic band method. Our results show that the calculated diffusion barrier for [BH3] vacancies is ~.2 eV, suggesting that slow mass transport limits the kinetics of hydrogen desorption.
Betz, G
2002-01-01
To extend the time scale in molecular dynamics (MD) calculations of sputtering and ion-assisted deposition we have coupled our MD calculations to a kinetic Monte Carlo (KMC) calculation. In this way we have studied surface erosion of Cu(1 0 0) under 200-600 eV Cu ion bombardment and growth of Cu on Cu(1 0 0) for deposition at thermal energies up to 100 eV per atom. Target temperatures were varied from 100 to 400 K. The coupling of the MD calculation to a KMC calculation allows us to extend our calculations from a few ps, a time scale typical for MD, to times of up to seconds until the next Cu particle impinges on or is deposited on the crystal surface of about 100 nm{sup 2} in size. The latter value of 1 s is quite realistic for a typical experimental sputter erosion or deposition experiment. In such a calculation thermal diffusion processes at the surface and annealing of the surface after energetic ion bombardment can be taken into account. To achieve homo-epitaxial growth of a film the results cle...
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect of the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their Kernel Density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies. The code is available at github.com/nyusngroup/pyMCZ.
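The core Monte Carlo step of such codes — propagating line-flux uncertainties into an abundance distribution — can be sketched compactly. The example below uses the PP04 N2 calibration, 12 + log(O/H) = 8.90 + 0.57 log10([N II]/Hα), with invented fluxes; pyMCZ's actual interface and calibrator set are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical measured fluxes (arbitrary units) and 1-sigma uncertainties
f_nii, df_nii = 0.35, 0.04   # [N II] 6583
f_ha,  df_ha  = 1.00, 0.05   # H-alpha
n_mc = 50000

# Monte Carlo: resample the fluxes within their Gaussian uncertainties
nii = rng.normal(f_nii, df_nii, n_mc)
ha = rng.normal(f_ha, df_ha, n_mc)
good = (nii > 0) & (ha > 0)          # discard unphysical negative draws

n2 = np.log10(nii[good] / ha[good])
oh = 8.90 + 0.57 * n2                 # PP04 N2 strong-line calibration

# median and 68% confidence region of the synthetic abundance distribution
lo, med, hi = np.percentile(oh, [16, 50, 84])
```

pyMCZ applies the same resampling simultaneously across many calibrators and additionally corrects for reddening via E(B-V).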
Farr, W M; Mandel, I; Stevens, D
2015-06-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently.
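The kD-tree proposal idea — draw a stored posterior sample and jitter it on a scale set by its local neighbour distance — can be sketched as follows; the target density and tuning constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)

# stand-in for "single-model MCMC samples" from a 2-D posterior
samples = rng.normal([0.0, 3.0], [1.0, 0.5], size=(5000, 2))
tree = cKDTree(samples)

def propose(n_draws, k=10):
    """Draw jump proposals: pick a stored sample at random, then perturb it
    on a scale set by the distance to its k-th nearest neighbour (self
    included), so the proposal width adapts to the local sample density."""
    idx = rng.integers(len(samples), size=n_draws)
    d, _ = tree.query(samples[idx], k=k)
    scale = d[:, -1][:, None]
    return samples[idx] + rng.normal(size=(n_draws, 2)) * scale

props = propose(2000)
```

Because the proposals track the single-model posterior, an intermodel jump proposed this way lands in a high-probability region of the target space far more often than a naive draw would.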
Calculating potential energy curves with fixed-node diffusion Monte Carlo: CO and N2
Powell, Andrew D.; Dawes, Richard
2016-12-01
This study reports on the prospect for the routine use of Quantum Monte Carlo (QMC) for the electronic structure problem, applying fixed-node Diffusion Monte Carlo (DMC) to generate highly accurate Born-Oppenheimer potential energy curves (PECs) for small molecular systems. The singlet ground electronic states of CO and N2 were used as test cases. The PECs obtained by DMC employing multiconfigurational trial wavefunctions were compared with those obtained by conventional high-accuracy electronic structure methods such as multireference configuration interaction and/or the best available empirical spectroscopic curves. The goal was to test whether a straightforward procedure using available QMC codes could be applied robustly and reliably. Results obtained with DMC codes were found to be in close agreement with the benchmark PECs, and the n^3 scaling with the number of electrons n (compared with n^7 or worse for conventional high-accuracy quantum chemistry) could be advantageous depending on the system size. Due to a large pre-factor in the scaling, for the small systems tested here, it is currently still much more computationally intensive to compute PECs with QMC. Nevertheless, QMC algorithms are particularly well-suited to large-scale parallelization and are therefore likely to become more relevant for future massively parallel hardware architectures.
Dujko, S.; Ebert, U.; White, R.D.; Petrović, Z.L.
2010-01-01
A comprehensive investigation of electron transport in N$_{2}$-O$_{2}$ mixtures has been carried out using a multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique, instead of the conventional two-term theory often employed in the plasma modeling community. We focus on the
Energy Technology Data Exchange (ETDEWEB)
Zwermann, W.; Aures, A.; Bernnat, W.; and others
2013-06-15
This report documents the status of the research and development goals reached within the reactor safety research project RS1503 ''Development and Application of Neutron Transport Methods and Uncertainty Analyses for Reactor Core Calculations'' as of the 1{sup st} quarter of 2013. The superordinate goal of the project is the development, validation, and application of neutron transport methods and uncertainty analyses for reactor core calculations. These calculation methods will mainly be applied to problems related to the core behaviour of light water reactors and innovative reactor concepts. The contributions of this project towards achieving this goal are the further development, validation, and application of deterministic and stochastic calculation programmes and of methods for uncertainty and sensitivity analyses, as well as the assessment of artificial neural networks, for providing a complete nuclear calculation chain. This comprises processing nuclear basis data, creating multi-group data for diffusion and transport codes, obtaining reference solutions for stationary states with Monte Carlo codes, performing coupled 3D full core analyses in diffusion approximation and with other deterministic and also Monte Carlo transport codes, and implementing uncertainty and sensitivity analyses with the aim of propagating uncertainties through the whole calculation chain from fuel assembly, spectral and depletion calculations to coupled transient analyses. This calculation chain shall be applicable to light water reactors and also to innovative reactor concepts, and therefore has to be extensively validated with the help of benchmarks and critical experiments.
Energy Technology Data Exchange (ETDEWEB)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation)
2016-09-15
Burn-up calculation of large systems with a Monte Carlo code (MCU) is a complex process that requires large computational costs. Isotopic compositions prepared in advance are therefore proposed for use in Monte Carlo calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and on reference isotopic compositions calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). In this work, the multiplication factors and power distributions of FAs from a 3-D reactor core are calculated by the Monte Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are considered. The results of the MCU calculations were compared with those obtained by the engineering codes.
Energy Technology Data Exchange (ETDEWEB)
Zuca Aparicio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrila, J.; Minambres Moro, A.
2013-07-01
At present it is not common to find commercial planning systems that incorporate dose calculation algorithms based on Monte Carlo [1,2]. This paper summarizes the process followed in the evaluation of a Monte Carlo dose calculation algorithm for 6 MV photon beams from an accelerator dedicated to radiosurgery (SRS), cranial stereotactic radiotherapy (SRT) and extracranial stereotactic radiotherapy (SBRT). (Author)
A Monte Carlo model for out-of-field dose calculation from high-energy photon therapy.
Kry, Stephen F; Titt, Uwe; Followill, David; Pönisch, Falk; Vassiliev, Oleg N; White, R Allen; Stovall, Marilyn; Salehpour, Mohammad
2007-09-01
As cancer therapy becomes more efficacious and patients survive longer, the potential for late effects increases, including effects induced by radiation dose delivered away from the treatment site. This out-of-field radiation is of particular concern with high-energy radiotherapy, as neutrons are produced in the accelerator head. We recently developed an accurate Monte Carlo model of a Varian 2100 accelerator using MCNPX for calculating the dose away from the treatment field resulting from low-energy therapy. In this study, we expanded and validated our Monte Carlo model for high-energy (18 MV) photon therapy, including both photons and neutrons. Simulated out-of-field photon doses were compared with measurements made with thermoluminescent dosimeters in an acrylic phantom up to 55 cm from the central axis. Simulated neutron fluences and energy spectra were compared with measurements using moderated gold foil activation in moderators and data from the literature. The average local difference between the calculated and measured photon dose was 17%, including doses as low as 0.01% of the central axis dose. The out-of-field photon dose varied substantially with field size and distance from the edge of the field but varied little with depth in the phantom, except at depths shallower than 3 cm, where the dose sharply increased. On average, the difference between the simulated and measured neutron fluences was 19% and good agreement was observed with the neutron spectra. The neutron dose equivalent varied little with field size or distance from the central axis but decreased with depth in the phantom. Neutrons were the dominant component of the out-of-field dose equivalent for shallow depths and large distances from the edge of the treatment field. This Monte Carlo model is useful to both physicists and clinicians when evaluating out-of-field doses and associated potential risks.
Energy Technology Data Exchange (ETDEWEB)
Fotina, Irina; Kragl, Gabriele; Kroupa, Bernhard; Trausmuth, Robert; Georg, Dietmar [Medical Univ. Vienna (Austria). Division of Medical Radiation Physics, Dept. of Radiotherapy
2011-07-15
Comparison of the dosimetric accuracy of the enhanced collapsed cone (eCC) algorithm with the commercially available Monte Carlo (MC) dose calculation for complex treatment techniques. A total of 8 intensity-modulated radiotherapy (IMRT) and 2 stereotactic body radiotherapy (SBRT) lung cases were calculated with the eCC and MC algorithms using the treatment planning systems (TPS) Oncentra MasterPlan 3.2 (Nucletron) and Monaco 2.01 (Elekta/CMS). Fluence optimization as well as sequencing of IMRT plans was primarily performed using Monaco. Dose prediction errors were calculated using MC as reference. The dose-volume histogram (DVH) analysis was complemented with 2D and 3D gamma evaluation. Both algorithms were compared to measurements using the Delta4 system (Scandidos). IMRT plans recalculated with eCC resulted in lower planning target volume (PTV) coverage, as well as in lower organs-at-risk (OAR) doses of up to 8%. Small deviations between MC and eCC in PTV dose (1-2%) were detected for the IMRT cases, while larger deviations were observed for SBRT (up to 5%). Conformity indices of both calculations were similar; however, the homogeneity of the eCC-calculated plans was slightly better. Delta4 measurements confirmed the high dosimetric accuracy of both TPS. Mean dose prediction errors < 3% for the PTV suggest that both algorithms enable highly accurate dose calculations under clinical conditions. However, users should be aware of slightly underestimated OAR doses when using the eCC algorithm. (orig.)
Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective in accounting for beam rotational symmetry, but unsuitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. In dose calculation, the different types of sub-sources were then sampled separately, and source sampling and particle transport were iterated so that the particles being sampled and transported simultaneously are of the same type and close in energy, which alleviates GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of the PSRs, determined by solving a quadratic minimization ...
Monte Carlo calculation of energy deposition in ionization chambers for tritium measurements
Zhilin, Chen; Shuming, Peng; Dan, Meng; Yuehong, He; Heyi, Wang
2014-10-01
Energy deposition in ionization chambers for tritium measurements has been theoretically studied using the Monte Carlo code MCNP 5. The influence of several factors, including the carrier gas, chamber size, wall materials and gas pressure, has been evaluated in the simulations. It is found that β rays emitted by tritium deposit much more energy in chambers through which argon flows than in chambers with deuterium, as much as 2.7 times more at a pressure of 100 Pa. As the chamber size gets smaller, the energy deposition decreases sharply: for an ionization chamber of 1 mL, β rays deposit less than 1% of their energy at a pressure of 100 Pa and only 84% even if the gas pressure is as high as 100 kPa. The results also indicate that a gold-plated ionization chamber yields the highest deposition ratio while an aluminum one leads to the lowest. In addition, the simulations were validated by comparison with experimental data, with which they agree well.
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
Wetting of polymer liquids: Monte Carlo simulations and self-consistent field calculations
Müller, M
2003-01-01
Using Monte Carlo simulations and self-consistent field (SCF) theory we study the surface and interface properties of a coarse-grained off-lattice model. In the simulations we employ the grand canonical ensemble together with a reweighting scheme in order to measure surface and interface free energies and discuss various methods for accurately locating the wetting transition. In the SCF theory, we use a partial enumeration scheme to incorporate single-chain properties on all length scales and use a weighted density functional for the excess free energy. The results of various forms of the density functional are compared quantitatively to the simulation results. For the theory to be accurate, it is important to decompose the free energy functional into a repulsive and an attractive part, with different approximations for the two parts. Measuring the effective interface potential for our coarse-grained model we explore routes for controlling the equilibrium wetting properties. (i) Coating of the substrate by an...
Ramilowski, Jordan A; Farrelly, David
2010-10-21
The fixed-node diffusion Monte Carlo (DMC) algorithm is a powerful way of computing excited state energies in a remarkably diverse number of contexts in quantum chemistry and physics. The main difficulty in implementing the procedure lies in obtaining a good estimate of the nodal surface of the excited state in question. Although the nodal surface can sometimes be obtained from symmetry or by making approximations, this is not always the case. In any event, nodal surfaces are usually obtained in an ad hoc way. In fact, the search for nodal surfaces can be formulated as an optimization problem within the DMC procedure itself. Here we investigate the use of a genetic algorithm to systematically and automatically compute nodal surfaces. Application is made to the computation of excited states of the HCN-(4)He complex and to the computation of tunneling splittings in the hydrogen bonded HCl-HCl complex.
Calculation of the free energy of NiFe2O4 nanoparticles by Monte Carlo simulation
Zhou, Chenggang; Landau, D. P.
2005-03-01
Magnetic properties of nanoparticles are of great current interest in light of possible applications to high density magnetic storage media. Finite-size and surface effects are important for magnetic nanoparticles and differentiate them from their bulk counterparts. We use Monte Carlo simulation to study a model of NiFe2O4 nanoparticles proposed by Kodama and Berkowitz [1]. The Hamiltonian of the nanoparticle contains superexchanges between magnetic ions modeled by Heisenberg spins, and surface/bulk anisotropy terms. A continuous version of the Wang-Landau algorithm [2] is used to calculate the joint density of states ρ(M, E) efficiently. From ρ(M, E), we can directly evaluate the free energy of the particle, and many other physical quantities. A hysteresis loop for particles with surface disorder and surface anisotropy is observed, in agreement with previous studies [1]. We found that such a hysteresis loop is the result of interplay between surface disorder and surface anisotropy. Compared with micromagnetic modeling, our Monte Carlo simulation treats the thermodynamic effects properly and is capable of calculating physical quantities at all temperatures and magnetic fields with very limited CPU time. [1] R. H. Kodama, et al., Phys. Rev. Lett. 77, 394 (1996); Phys. Rev. B 59, 6321 (1999). [2] C. Zhou, et al., in preparation.
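The continuous ρ(M, E) scheme of [2] builds on the flat-histogram idea, which can be illustrated with the standard discrete Wang-Landau algorithm on a small Ising model. This is a minimal sketch, not the NiFe2O4 Hamiltonian of the paper; lattice size, flatness criterion and modification-factor schedule are illustrative choices:

```python
import math
import random

def wang_landau_ising(L=4, lnf_final=1e-3, flat=0.7, chunk=5000, seed=1):
    """Wang-Landau estimate of the log density of states ln g(E) for a
    periodic LxL Ising model.  Each visit to an energy bin raises its
    ln g by ln f; ln f is halved whenever the visit histogram is flat."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    N = L * L
    offs = lambda e: (e + 2 * N) // 4          # energy -> bin index
    nbins = N + 1                              # energies -2N..2N, spacing 4
    lng = [0.0] * nbins
    hist = [0] * nbins
    E = -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
             for i in range(L) for j in range(L))
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(chunk):
            i, j = rng.randrange(L), rng.randrange(L)
            dE = 2 * s[i][j] * (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                                + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            a, b = offs(E), offs(E + dE)
            # accept with probability min(1, g(E_old)/g(E_new))
            if lng[a] >= lng[b] or rng.random() < math.exp(lng[a] - lng[b]):
                s[i][j] = -s[i][j]
                E += dE
            k = offs(E)
            lng[k] += lnf
            hist[k] += 1
        visited = [h for h in hist if h > 0]
        if visited and min(visited) > flat * (sum(visited) / len(visited)):
            hist = [0] * nbins                 # flat: reset and refine
            lnf *= 0.5
    g0 = lng[offs(-2 * N)]
    return [x - g0 for x in lng]               # normalize to ground state
```

For the 4x4 lattice the exact ratio g(E=0)/g(ground) is 20524/2, so the returned value at E = 0 should approach ln(10262) ≈ 9.24. Once ln g is known, thermodynamic averages at any temperature follow by reweighting, which is the efficiency argument made in the abstract.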
Energy Technology Data Exchange (ETDEWEB)
Schach von Wittenau, A.E.; Cox, L.J.; Bergstrom, P.M. Jr.; Hornstein, S.M. [Lawrence Livermore National Lab., CA (United States); Mohan, R.; Libby, B.; Wu, Q. [Medical Coll. of Virginia, Richmond, VA (United States); Lovelock, D.M.J. [Memorial Sloan-Kettering Cancer Center, New York, NY (United States)
1997-03-01
The goal of the PEREGRINE Monte Carlo Dose Calculation Project is to deliver a Monte Carlo package that is both accurate and sufficiently fast for routine clinical use. One of the operational requirements for photon-treatment plans is a fast, accurate method of describing the photon phase-space distribution at the surface of the patient. The open-field case is computationally the most tractable; we know, a priori, for a given machine and energy, the locations and compositions of the relevant accelerator components (i.e., target, primary collimator, flattening filter, and monitor chamber). Therefore, we can precalculate and store the expected photon distributions. For any open-field treatment plan, we then evaluate these existing photon phase-space distributions at the patient's surface, and pass the obtained photons to the dose calculation routines within PEREGRINE. We neglect any effect of the intervening air column, including attenuation of the photons and production of contaminant electrons. In principle, for treatment plans requiring jaws, blocks, and wedges, we could precalculate and store photon phase-space distributions for various combinations of field sizes and wedges. This has the disadvantage that we would have to anticipate those combinations and that subsequently PEREGRINE would not be able to treat other plans. Therefore, PEREGRINE tracks photons through the patient-dependent beam modifiers. The geometric and physics methods used to do this are described here. 4 refs., 8 figs.
Energy Technology Data Exchange (ETDEWEB)
Faught, A [UT MD Anderson Cancer Center, Houston, TX (United States); University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, TX (United States); Davidson, S [University of Texas Medical Branch of Galveston, Galveston, TX (United States); Kry, S; Ibbott, G; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States); Fontenot, J [Mary Bird Perkins Cancer Center, Baton Rouge, LA (United States); Etzel, C [Consortium of Rheumatology Researchers of North America (CORRONA), Inc., Southborough, MA (United States)
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. Energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions so as to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10×10 cm2 field. Off-axis effects were modeled by a 3rd-degree polynomial describing the off-axis half-value layer as a function of off-axis angle, and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm2 field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm2 to 30×30 cm2 to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of the data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head-and-neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10 MV models, respectively. Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10 MV therapeutic x-ray beams has been developed as a
A STUDY OF MODELING AND MONTE CARLO TRANSPORT CALCULATION IN A PEBBLE BED HTR CORE
Directory of Open Access Journals (Sweden)
Zuhair .
2013-01-01
The VHTR energy system concept, whether fueled with pebbles (pebble bed VHTR) or prismatic blocks (prismatic VHTR), has attracted the attention of nuclear reactor physicists. One advantage of spherical fuel technology is that it offers on-line refueling without shutting down electricity production. In addition, pebble fuel particles with uranium oxide (UO2) or uranium oxycarbide (UCO) kernels coated in TRISO layers including silicon carbide (SiC) are considered the main option, given their high performance at high fuel burn-up and high temperature. This paper discusses modeling and Monte Carlo transport calculations in a pebble bed HTR core. The pebble bed HTR is a high-temperature gas-cooled, graphite-moderated reactor with cogeneration capability. Calculations were performed with the MCNP5 code at a temperature of 1200 K. The ENDF/B-V and ENDF/B-VI continuous-energy nuclear data libraries were used to complete the analysis. Overall, the results show consistency, with nearly identical keff values for the nuclear data libraries used. The ENDF/B-VI (66c) library always produces a larger keff than ENDF/B-V (50c) or ENDF/B-VI (60c), with a bias of less than 0.25%. The BCC lattice almost always predicts a smaller keff than the other lattices, particularly FCC. The keff of the BCC lattice is closer to that of the FCC lattice, with a bias of less than 0.19%, while relative to the SH lattice the calculational bias is less than 0.22%. The slightly different packing fractions (BCC = 61%, SH = 60.459%) do not make the calculational biases differ greatly. The keff estimates for the three lattice models suggest that the BCC model is better suited to pebble bed HTR calculations than the FCC and SH models. These estimates should be verified with other Monte Carlo simulations, or even deterministic codes, to optimize high-temperature reactor core calculations. Keywords: kernel, TRISO, pebble fuel, pebble bed HTR
Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water
Papamichael, Georgios Ioannis
A Monte Carlo simulation is presented for radiation transport in water. This process is of utmost importance, with applications in oncology and cancer therapy, protection of people and the environment, waste management, radiation chemistry, and some solid-state detectors. It is also a phenomenon of interest for microelectronics on satellites in orbit, which are subject to solar radiation, and in spacecraft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: elastic scattering, impact excitation and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle, we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical interactions, thus determining stochastically the energy and direction of outgoing particles following an event. Monte Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interactions. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
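The event-by-event scheme described above, in which the primary is followed collision by collision, each event type is chosen in proportion to its cross section, and every liberated secondary is pushed onto a stack and followed in turn, can be sketched as follows. The cross sections and thresholds here are toy placeholders, not the semi-empirical water cross sections of the thesis, and spatial tracking and angular sampling are omitted:

```python
import random

# Toy, illustrative cross sections (arbitrary units) as functions of
# electron energy E (eV-like scale) -- placeholders only.
def sigma_elastic(E):  return 2.0 / (1.0 + E)
def sigma_excite(E):   return 1.0 if E > 8.0 else 0.0
def sigma_ionize(E):   return 1.5 if E > 13.0 else 0.0

W_EXC, W_ION, E_CUT = 8.0, 13.0, 13.0   # excitation/ionization losses, cutoff

def transport(E0, seed=42):
    """Follow a primary electron and every generation of secondaries,
    event by event, until all particles fall below the cutoff energy.
    Returns (total deposited energy, number of ionizations)."""
    rng = random.Random(seed)
    stack = [E0]                         # primary plus all secondaries
    deposited, n_ion = 0.0, 0
    while stack:
        E = stack.pop()
        while E > E_CUT:
            s_el, s_ex, s_io = sigma_elastic(E), sigma_excite(E), sigma_ionize(E)
            u = rng.random() * (s_el + s_ex + s_io)
            if u < s_el:
                pass                     # elastic: direction change only
            elif u < s_el + s_ex:
                E -= W_EXC; deposited += W_EXC
            else:                        # impact ionization
                n_ion += 1
                E_sec = rng.uniform(0.0, (E - W_ION) / 2)  # secondary's share
                E -= W_ION + E_sec
                deposited += W_ION
                stack.append(E_sec)      # follow the secondary later
        deposited += E                   # sub-cutoff remainder absorbed locally
    return deposited, n_ion
```

Because every loss is either scored or handed to a stacked secondary, the deposited total equals the primary's initial energy, a useful conservation check on any such event-by-event code.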
Wu, D; Yu, W; Fritzsche, S
2016-01-01
A physical model based on a Monte Carlo approach is proposed to calculate the ionization dynamics of warm dense matter within particle-in-cell simulations, where impact ionization, electron-ion recombination and ionization potential depression (IPD) by surrounding plasmas are taken into consideration self-consistently. Compared with other models applied in the literature for plasmas near thermal equilibrium, the proposed model can also simulate the temporal relaxation of ionization, with the final thermal equilibrium determined by the competition between impact ionization and its inverse process, i.e., electron-ion recombination. Our model is general and can be applied to both single elements and alloys with quite different compositions. The proposed model is implemented into a particle-in-cell (PIC) simulation code, and the average ionization degree of bulk aluminium as a function of temperature is calculated, showing good agreement with the data provided by the FLYCHK code.
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
Institute of Scientific and Technical Information of China (English)
SONG Ming-zhe; WEI Ke-xin; HOU Jin-bing; WANG Hong-yu; GAO Fei; NI Ning
2015-01-01
The Bragg-Gray cavity theory (B-G theory) provided a theoretical basis for the analytical calculation of the energy response of an ionization chamber. It was widely used in theoretical calculations for ionization chamber detectors and tissue-equivalent detectors. However, the B-G
Energy Technology Data Exchange (ETDEWEB)
Procassini, R.J. [Lawrence Livermore National Lab., CA (United States)]
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Coccia, Emanuele; Guidoni, Leonardo
2014-01-01
In this letter we report the singlet ground state structure of the full carotenoid peridinin by means of variational Monte Carlo (VMC) calculations. The VMC relaxed geometry has an average bond length alternation of 0.1165(10) Å, larger than the values obtained by DFT (PBE, B3LYP and CAM-B3LYP) and shorter than that calculated at the Hartree-Fock (HF) level. TDDFT and EOM-CCSD calculations on a reduced peridinin model confirm the HOMO-LUMO major contribution of the Bu+-like (S2) bright excited state. Many Body Green's Function Theory (MBGFT) calculations of the vertical excitation energy of the Bu+-like state for the VMC structure (VMC/MBGFT) provide an excitation energy of 2.62 eV, in agreement with experimental results in n-hexane (2.72 eV). The dependence of the excitation energy on the bond length alternation in the MBGFT and TDDFT calculations with different functionals is discussed.
Energy Technology Data Exchange (ETDEWEB)
Blazy-Aubignac, L
2007-09-15
Treatment planning systems (T.P.S.) occupy a key position in a radiotherapy service: they compute the projected dose distribution and the treatment duration. Traditionally, quality control of the calculated dose distributions relies on comparing them with dose distributions measured on the treatment device. This thesis proposes to replace these dosimetric measurements with reference dose calculations obtained with the PENELOPE Monte Carlo code. Monte Carlo simulations offer a broad choice of test configurations and make it possible to envisage quality control of the dosimetric aspects of a T.P.S. without monopolizing the treatment devices. This quality control, based on Monte Carlo simulations, has been tested on a clinical T.P.S. and has made it possible to simplify the T.P.S. quality procedures. Such quality control, more thorough, more precise and simpler to implement, could be generalized to every radiotherapy centre. (N.C.)
Energy Technology Data Exchange (ETDEWEB)
Zankl, M. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Drexler, G. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Petoussi-Henss, N. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Saito, K. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan)
1997-03-01
This report presents a tabulation of organ and tissue equivalent dose as well as effective dose conversion coefficients, normalised to air kerma free in air, for occupational exposures and environmental exposures of the public to external photon radiation. For occupational exposures, whole-body irradiation with idealised geometries, i.e. broad parallel beams and fully isotropic radiation incidence, is considered. The directions of incidence for the parallel beams are anterior-posterior, posterior-anterior, left lateral, right lateral and a full 360° rotation around the body's longitudinal axis. The influence of beam divergence on the body doses is also considered, as well as the dependence of effective dose on the angle of radiation incidence. Regarding exposure of the public to environmental sources, three source geometries are considered: exposure from a radioactive cloud, from ground contamination and from the natural radionuclides distributed homogeneously in the ground. The precise angular and energy distributions of the gamma rays incident on the human body were taken into account. The organ dose conversion coefficients given in this catalogue were calculated using a Monte Carlo code simulating the photon transport in mathematical models of an adult male and an adult female, respectively. Conversion coefficients are given for the equivalent dose of 23 organs and tissues as well as for effective dose and the equivalent dose of the so-called 'remainder'. The organ equivalent dose conversion coefficients are given separately for the adult male and female models and, as the arithmetic mean of the conversion coefficients of both, for an average adult. Fitted data of the coefficients are presented in tables; the primary raw data as resulting from the Monte Carlo calculation are shown in figures together with the fitted data. (orig.)
Farr, Will M
2011-01-01
Selection among alternative theoretical models given an observed data set is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot retain a memory of the favored locations in more than one parameter space at a time. Thus, a naive jump between parameter spaces is unlikely to be accepted in the MCMC algorithm and convergence is correspondingly slow. Here we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose inter-model jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in arbitrary dimensions. We show that our technique leads to dramatically improved convergence over naive jumps in an RJMCMC, and compare it ...
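The tree-based proposal idea sketched in the abstract, reusing single-model MCMC samples to propose plausible inter-model jumps, can be caricatured as follows. This simplified stand-in draws uniformly from the bounding box of a stored sample's nearest neighbours; the class name and parameters are invented, and the paper's adaptive kD-tree construction also supplies the proposal density needed for the Metropolis-Hastings correction, which is omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

class KDTreeProposal:
    """Sketch of a kD-tree interpolated jump proposal: pick a stored
    posterior sample at random, find its k nearest neighbours, and
    propose a new point uniformly inside their bounding box, so jumps
    land near previously favored regions of the target parameter space."""

    def __init__(self, samples, k=8, seed=0):
        self.x = np.asarray(samples)       # samples from a single-model MCMC
        self.tree = cKDTree(self.x)
        self.k = k
        self.rng = np.random.default_rng(seed)

    def draw(self):
        centre = self.x[self.rng.integers(len(self.x))]
        _, idx = self.tree.query(centre, k=self.k)
        lo = self.x[idx].min(axis=0)       # local bounding box
        hi = self.x[idx].max(axis=0)
        return self.rng.uniform(lo, hi)
```

By construction every proposal lies inside the convex hull of the stored samples, which is why such jumps are accepted far more often than naive draws from a broad prior.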
Saritas, Kayahan; Grossman, Jeffrey C.
2015-03-01
Molecules that undergo pericyclic isomerization reactions find interesting optical and energy storage applications, because of their usually high quantum yields, large spectral shifts and small structural changes upon light absorption. These reactions induce a drastic change in the conjugated structure such that substituents that become a part of the conjugated system upon isomerization can play an important role in determining properties such as enthalpy of isomerization and HOMO-LUMO gap. Therefore, theoretical investigations dealing with such systems should be capable of accurately capturing the interplay between electron correlation and exchange effects. In this work, we examine the dihydroazulene isomerization as an example conjugated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict thermochemical properties and to benchmark results from density functional theory (DFT) methods. Although DFT provides sufficient accuracy for similar systems, in this particular system, DFT predictions of ground state and reaction paths are inconsistent and non-systematic errors arise. We present a comparison between QMC and DFT results for enthalpy of isomerization, HOMO-LUMO gap and charge densities with a range of DFT functionals.
Querlioz, Damien
2013-01-01
This book gives an overview of the quantum transport approaches for nanodevices and focuses on the Wigner formalism. It details the implementation of a particle-based Monte Carlo solution of the Wigner transport equation and how the technique is applied to typical devices exhibiting quantum phenomena, such as the resonant tunnelling diode, the ultra-short silicon MOSFET and the carbon nanotube transistor. In the final part, decoherence theory is used to explain the emergence of the semi-classical transport in nanodevices.
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequent radiation risk has attracted increasing attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations.
Tritium transport calculations for the IFMIF Tritium Release Test Module
Energy Technology Data Exchange (ETDEWEB)
Freund, Jana, E-mail: jana.freund@kit.edu; Arbeiter, Frederik; Abou-Sena, Ali; Franza, Fabrizio; Kondo, Keitaro
2014-10-15
Highlights: • Delivery of material data for the tritium balance in the IFMIF Tritium Release Test Module. • Description of the topological models in TMAP and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). • Computation of release of tritium from the breeder solid material into the purge gas. • Computation of the loss of tritium over the capsule wall, rig hull, container wall and purge gas return line. - Abstract: The IFMIF Tritium Release Test Module (TRTM) is projected to measure online the tritium release from breeder ceramics and beryllium pebble beds under high energy neutron irradiation. Tritium produced in the pebble bed of TRTM is swept out continuously by a purge gas flow, but can also permeate into the module's metal structures, and can be lost by permeation to the environment. Corresponding analyses of the tritium inventory are performed to support IFMIF plant safety studies and the experiment planning. This paper describes the necessary elements for calculation of the tritium transport in the Tritium Release Test Module as follows: (i) applied equations for the tritium balance, (ii) material data from the literature and (iii) the topological models and the computation of five different cases, namely release of tritium from the breeder solid material into the purge gas, and loss of tritium over the capsule wall, rig hull, container wall and purge gas return line in detail. The problem of tritium transport in the TRTM has been studied and analyzed by the Tritium Migration Analysis Program (TMAP) and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). TMAP has been developed at INEEL and now exists in Version 7. The FUS-TPC code was written in MATLAB with the original purpose of studying tritium transport in the Helium Cooled Lead Lithium (HCLL) blanket and, in a later version, the Helium Cooled Pebble Bed (HCPB) blanket [6] (Franza, 2012). This code has been further modified to be applicable to the TRTM. Results from the
Iftimie, R; Schofield, J P; Iftimie, Radu; Salahub, Dennis; Schofield, Jeremy
2003-01-01
In this article, we propose an efficient method for sampling the relevant state space in condensed phase reactions. In the present method, the reaction is described by solving the electronic Schrödinger equation for the solute atoms in the presence of explicit solvent molecules. The sampling algorithm uses a molecular mechanics guiding potential in combination with simulated tempering ideas and allows thorough exploration of the solvent state space in the context of an ab initio calculation even when the dielectric relaxation time of the solvent is long. The method is applied to the study of the double proton transfer reaction that takes place between a molecule of acetic acid and a molecule of methanol in tetrahydrofuran. It is demonstrated that calculations of rates of chemical transformations occurring in solvents of medium polarity can be performed with an increase in the cpu time of factors ranging from 4 to 15 with respect to gas-phase calculations.
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Mairani, A; Kraemer, M; Sommerer, F; Parodi, K; Scholz, M; Cerutti, F; Ferrari, A; Fasso, A
2010-01-01
Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum für Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed C-12 ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-d...
Statistical fluctuations in Monte Carlo calculations. [for solution of rarefied flow problems
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The time counter and modified Nanbu simulation techniques are analyzed, with emphasis placed on the convergence of the calculations to a steady macroscopic state. Such variables as translational and rotational temperature, and flow velocity, sampled at several points in the flowfield, are considered. Both macroscopic averages and molecular distribution functions are analyzed. The calculation of inelastic collisions, in which transfer of energy between translational and internal energy modes is performed, is achieved through the use of the Larsen-Borgnakke phenomenological model. It is noted that, with reference to translational temperature, the time counter method shows less statistical scatter than that found with the modified Nanbu simulation technique.
Energy Technology Data Exchange (ETDEWEB)
Jin, L; Wang, L; Li, J; Luo, W; Feigenberg, S J; Ma, C-M [Department of Radiation Oncology, Fox Chase Cancer Center, Philadelphia, PA 19111 (United States)
2007-07-21
This work investigated the selection of beam margins in lung-cancer stereotactic body radiotherapy (SBRT) with 6 MV photon beams. Monte Carlo dose calculations were used to systematically and quantitatively study the dosimetric effects of beam margins for different lung densities (0.1, 0.15, 0.25, 0.35 and 0.5 g cm⁻³), planning target volumes (PTVs) (14.4, 22.1 and 55.3 cm³) and numbers of beam angles (three, six and seven) in lung-cancer SBRT in order to search for optimal beam margins for various clinical situations. First, a large number of treatment plans were generated in a commercial treatment planning system, and then recalculated using Monte Carlo simulations. All the plans were normalized to ensure that 95% of the PTV at least receives the prescription dose and compared quantitatively. Based on these plans, the relationships between the beam margin and quantities such as the lung toxicity (quantified by V₂₀, the percentage volume of the two lungs receiving at least 20 Gy) and the maximum target (PTV) dose were established for different PTVs and lung densities. The impact of the number of beam angles on the relationship between V₂₀ and the beam margin was assessed. Quantitative information about optimal beam margins for lung-cancer SBRT was obtained for clinical applications.
Wako, H
1989-12-01
Monte Carlo simulations of a small protein, crambin, were carried out with and without hydration energy. The methodology presented here is characterized, as compared with other similar simulations of proteins in solution, by two points: (1) protein conformations are treated in fixed geometry, so that dihedral angles are the independent variables rather than the Cartesian coordinates of atoms; and (2) instead of treating water molecules explicitly in the calculation, hydration energy is incorporated in the conformational energy function in the form {Sigma}g{sub i}A{sub i}, where A{sub i} is the accessible surface area of an atomic group i in a given conformation, and g{sub i} is the free energy of hydration per unit surface area of the atomic group (i.e., the hydration-shell model). The validity of this model was tested by carrying out Monte Carlo simulations from two kinds of starting conformations, native and unfolded, and in two kinds of systems, in vacuo and in solution. In the simulations starting from the native conformation, the differences between the mean properties of the in vacuo and solution simulations are not very large, but the fluctuations around the mean conformation during the simulation are somewhat smaller in solution than in vacuo. On the other hand, in the simulations starting from the unfolded conformation, the molecule fluctuates much more in solution than in vacuo, and the effects of taking the hydration energy into account are much more pronounced. The results suggest that the method presented in this paper is useful for simulations of proteins in solution.
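The hydration-shell model described above adds a term of the form Σ g{sub i}A{sub i} to the conformational energy. A minimal sketch in Python; the atomic-group types and g values below are hypothetical placeholders for illustration, not Wako's actual parameters:

```python
def hydration_energy(asa, g):
    """Hydration-shell model: hydration free energy as the sum over
    atomic groups i of g_i * A_i, where A_i is the solvent-accessible
    surface area of group i and g_i the free energy of hydration per
    unit surface area.  Both arguments are dicts keyed by group type."""
    return sum(g[group] * area for group, area in asa.items())

# Illustrative (hypothetical) parameters, kcal/mol/A^2 and A^2:
g = {"carbon": 0.012, "amide_N": -0.060, "carbonyl_O": -0.090}
asa = {"carbon": 150.0, "amide_N": 40.0, "carbonyl_O": 55.0}
E_h = hydration_energy(asa, g)   # term added to the conformational energy
```

In the actual method the A{sub i} values are recomputed for each trial conformation, so this term couples the Monte Carlo moves in dihedral-angle space to the solvent model.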
Energy Technology Data Exchange (ETDEWEB)
Martinez Ovalle, S. A.; Olaya Davila, H.; Reyes Caballero, F.
2013-07-01
The main objective of this work is to verify, through Monte Carlo simulation, the most appropriate shielding dimensions for an installation designed for industrial radiography with a Co-60 irradiator. (Author)
Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann
2009-02-01
Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.
Comparison of some popular Monte Carlo solutions for proton transport within the pCT problem
Energy Technology Data Exchange (ETDEWEB)
Evseev, Ivan; Assis, Joaquim T. de; Yevseyeva, Olga [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico], E-mail: evseev@iprj.uerj.br, E-mail: joaquim@iprj.uerj.br, E-mail: yevseyeva@iprj.uerj.br; Lopes, Ricardo T.; Cardoso, Jose J.B.; Silva, Ademir X. da [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear], E-mail: ricardo@lin.ufrj.br, E-mail: jjbrum@oi.com.br, E-mail: ademir@con.ufrj.br; Vinagre Filho, Ubirajara M. [Instituto de Engenharia Nuclear IEN/CNEN-RJ, Rio de Janeiro, RJ (Brazil)], E-mail: bira@ien.gov.br; Hormaza, Joel M. [UNESP, Botucatu, SP (Brazil). Inst. de Biociencias], E-mail: jmesa@ibb.unesp.br; Schelin, Hugo R.; Paschuk, Sergei A.; Setti, Joao A.P.; Milhoretto, Edney [Universidade Tecnologica Federal do Parana, Curitiba, PR (Brazil)], E-mail: schelin@cpgei.cefetpr.br, E-mail: sergei@utfpr.edu.br, E-mail: jsetti@gmail.com, E-mail: edneymilhoretto@yahoo.com
2007-07-01
The proton transport in matter is described by the Boltzmann kinetic equation for the proton flux density. This equation, however, does not have a general analytical solution. Some approximate analytical solutions have been developed within a number of significant simplifications. Alternatively, Monte Carlo simulations are widely used. The current work is devoted to a discussion of the proton energy spectra obtained by simulation with the SRIM2006, GEANT4 and MCNPX packages. The simulations have been performed considering some further applications of the obtained results in computed tomography with proton beams (pCT). Thus the initial and outgoing proton energies (3-300 MeV), as well as the thickness of the irradiated target (water and aluminum phantoms within 90% of the full range for a given proton beam energy), were considered in the interval of values typical for pCT applications. One of the most interesting results of this comparison is that, while the MCNPX spectra are in good agreement with the analytical description within the Fokker-Planck approximation and the GEANT4 simulated spectra are slightly shifted from them, the SRIM2006 simulations predict a notably higher mean energy loss for protons. (author)
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.
2013-06-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with the conservative tracers tritium ({sup 3}H) and sodium-22 ({sup 22}Na), and the retarding solute uranium-232 ({sup 232}U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models, although the Culebra Dolomite is known to possess multiple types and scales of porosity and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of model structural error. The analysis clearly shows the single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
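The late-time t{sup -5/2} diagnostic mentioned above amounts to checking the slope of the breakthrough tail on a log-log plot. A sketch with synthetic (not experimental) tail data:

```python
import numpy as np

# Multirate mass transfer predicts a breakthrough tail C(t) ~ t^(-5/2),
# i.e. a slope of -5/2 in log-log coordinates.  Synthetic power-law
# tail for illustration (amplitude is an arbitrary placeholder):
t = np.logspace(1, 3, 50)          # late times
C = 2.0e3 * t ** (-2.5)            # hypothetical tail concentrations
slope = np.polyfit(np.log10(t), np.log10(C), 1)[0]
# for this exact power law the fitted slope is -2.5
```

Applied to measured breakthrough data, a fitted late-time slope near -5/2 supports the multirate model, while the single- and double-porosity models decay differently at late time.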
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
Institute of Scientific and Technical Information of China (English)
范钦敏; 刘亚雯; et al.
1995-01-01
The simulation approach includes such processes as photon emission from the X-ray tube with a spectral distribution, total reflection on the sample support, the photoelectric effect in the thin-layer sample, and characteristic line absorption and detection. The calculated results are in agreement with experimental ones.
KOELINK, MH; DEMUL, FFM; GREVE, J; GRAAFF, R; DASSEL, ACM; AARNOUDSE, JG
1992-01-01
In addition to the static cubic lattice model for photon migration in turbid biological media by Bonner et al. [J. Opt. Soc. Am. A 4, 423-432 (1987)], a dynamic method is presented to calculate the average absolute Doppler shift as a function of the distance between the point of injection of photons
Considerations of beta and electron transport in internal dose calculations
Energy Technology Data Exchange (ETDEWEB)
Bolch, W.E.; Poston, J.W. Sr.
1990-12-01
Ionizing radiation has broad uses in modern science and medicine. These uses often require the calculation of energy deposition in the irradiated media and, usually, the medium of interest is the human body. Energy deposition from radioactive sources within the human body and the effects of such deposition are considered in the field of internal dosimetry. In July of 1988, a three-year research project was initiated by the Nuclear Engineering Department at Texas A&M University under the sponsorship of the US Department of Energy. The main thrust of the research was to consider, for the first time, the detailed spatial transport of electrons and beta particles in the estimation of average organ doses under the Medical Internal Radiation Dose (MIRD) schema. At the present time (December of 1990), research activities are continuing within five areas. Several are new initiatives begun within the second or third year of the current contract period. They include: (1) development of small-scale dosimetry; (2) development of a differential volume phantom; (3) development of a dosimetric bone model; (4) assessment of the new ICRP lung model; and (5) studies into the mechanisms of DNA damage. A progress report is given for each of these tasks within the Comprehensive Report. In each case, preliminary results are very encouraging and plans for further research are detailed within this document.
Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.
2014-06-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10{sup -5} eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 are provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each
Benacka, Jan
2016-08-01
This paper reports on lessons in which 18-19 years old high school students modelled random processes with Excel. In the first lesson, 26 students formulated a hypothesis on the area of ellipse by using the analogy between the areas of circle, square and rectangle. They verified the hypothesis by the Monte Carlo method with a spreadsheet model developed in the lesson. In the second lesson, 27 students analysed the dice poker game. First, they calculated the probability of the hands by combinatorial formulae. Then, they verified the result with a spreadsheet model developed in the lesson. The students were given a questionnaire to find out if they found the lesson interesting and contributing to their mathematical and technological knowledge.
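The area-of-ellipse experiment from the first lesson translates directly into a hit-or-miss Monte Carlo estimate. A spreadsheet-free sketch in Python:

```python
import random

def mc_ellipse_area(a, b, n, seed=1):
    """Hit-or-miss Monte Carlo: sample the bounding rectangle
    [-a, a] x [-b, b]; the fraction of points satisfying
    (x/a)^2 + (y/b)^2 <= 1, times the rectangle area 4ab,
    estimates the ellipse area (exact answer: pi*a*b)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(-a, a), rng.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
            hits += 1
    return 4.0 * a * b * hits / n

area = mc_ellipse_area(3.0, 2.0, 200_000)   # exact value: pi*3*2 = 18.85...
```

The students' spreadsheet model performs the same sampling with `RAND()` cells; the estimate confirms the hypothesis that the ellipse area scales the circle's area by the ratio of the semi-axes, just as the rectangle scales the square.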
Energy Technology Data Exchange (ETDEWEB)
Kanai, Y; Takeuchi, N
2009-10-14
We revisit the molecular line growth mechanism of styrene on the hydrogenated Si(001) 2x1 surface. In particular, we investigate the energetics of the radical chain reaction mechanism by means of diffusion quantum Monte Carlo (QMC) and density functional theory (DFT) calculations. For the exchange correlation (XC) functional we use the non-empirical generalized-gradient approximation (GGA) and meta-GGA. We find that the QMC result also predicts the intra dimer-row growth of the molecular line over the inter dimer-row growth, supporting the conclusion based on DFT results. However, the absolute magnitudes of the adsorption and reaction energies, and the heights of the energy barriers differ considerably between the QMC and DFT with the GGA/meta-GGA XC functionals.
Atamas, Alexander A; Cuppen, Herma M; Koudriachova, Marina V; de Leeuw, Simon W
2013-01-31
The thermodynamics of binary sII hydrogen clathrates with secondary guest molecules is studied with Monte Carlo simulations. The small cages of the sII unit cell are occupied by one H(2) guest molecule. Different promoter molecules entrapped in the large cages are considered. Simulations are conducted at a pressure of 1000 atm in a temperature range of 233-293 K. To determine the stabilizing effect of different promoter molecules on the clathrate, the Gibbs free energy of fully and partially occupied sII hydrogen clathrates are calculated. Our aim is to predict what would be an efficient promoter molecule using properties such as size, dipole moment, and hydrogen bonding capability. The gas clathrate configurational and free energies are compared. The entropy makes a considerable contribution to the free energy and should be taken into account in determining stability conditions of binary sII hydrogen clathrates.
Bensadiq, A.; Zaari, H.; Benyoussef, A.; El Kenz, A.
2016-09-01
Using density functional theory, the electronic structure (density of states and band structure) and the exchange couplings of the TbNi{sub 4}Si compound have been investigated. Magnetic and magnetocaloric properties of this material have been studied using Monte Carlo Simulation (MCS) and the Mean Field Approximation (MFA) within a three-dimensional Ising model. We calculated the isothermal magnetic entropy change, the adiabatic temperature change and the relative cooling power (RCP) for different external magnetic fields and temperatures. The highest obtained isothermal magnetic entropy change is -14.52 J kg{sup -1} K{sup -1} for a magnetic field of H=4 T. The adiabatic temperature change reaches a maximum value of 3.7 K, and the maximum RCP value is found to be 125.12 J kg{sup -1} for a magnetic field of 14 T.
Sarkadi, L.
2016-09-01
The ionization of the uracil molecule induced by heavy-ion impact has been investigated using the classical trajectory Monte Carlo (CTMC) method. Assuming the validity of the independent-particle model approximation, the collision problem is solved by considering the three-body dynamics of the projectile, an active electron and the molecule core. The interaction of the molecule core with the other two particles is described by a multi-center potential built from screened atomic potentials. The cross section differential with respect to the energy and angle of the electrons ejected in the ionization process has been calculated for an impact of 3.5 MeV u{sup -1} C{sup 6+} ions. Total electron emission cross sections (TCS) are presented for C{sup q+} (q=0-6) and O{sup 6+} projectiles as a function of the impact energy in the range from 10 keV u{sup -1} to 10 MeV u{sup -1}. The dependence of the TCS on the charge state of the projectile has been investigated for 2.5 MeV u{sup -1} O{sup q+} (q=4-8) and F{sup q+} (q=5-9) ions. The results of the calculations are compared with available experimental data and the predictions of other theoretical models: the first Born approximation with correct boundary conditions (CB1), the continuum-distorted-wave-eikonal-initial-state approach (CDW-EIS), and the combined classical-trajectory Monte Carlo-classical over-the-barrier model (CTMC-COB).
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
CONDENSED MONTE-CARLO SIMULATIONS FOR THE DESCRIPTION OF LIGHT TRANSPORT
GRAAFF, R; KOELINK, MH; DEMUL, FFM; ZIJLSTRA, WG; DASSEL, ACM; AARNOUDSE, JG
1993-01-01
A novel method, condensed Monte Carlo simulation, is presented that applies the results of a single Monte Carlo simulation for a given albedo mu(s)/(mu(a) + mu(s)) to obtaining results for other albedos; mu(s) and mu(a) are the scattering and absorption coefficients, respectively. The method require
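The albedo-rescaling idea behind condensed Monte Carlo can be illustrated with a toy random walk: run one reference simulation, record the number of scattering events k for each detected photon, and reweight each photon by (a{sub new}/a{sub ref}){sup k} to obtain results at another albedo. The sketch below makes strong simplifying assumptions (a single escape probability per scatter, no geometry) and is not the authors' actual transport model:

```python
import random

rng = random.Random(42)

def simulate_collision_counts(a_ref, n_photons, p_escape=0.1):
    """Toy reference simulation: at each interaction the photon scatters
    with probability a_ref (the albedo) and is absorbed otherwise; after
    each scatter it escapes and is detected with probability p_escape.
    Returns the scattering-event counts k of the detected photons."""
    counts = []
    for _ in range(n_photons):
        k = 0
        while True:
            if rng.random() >= a_ref:        # absorbed
                break
            k += 1                           # scattered once more
            if rng.random() < p_escape:      # escapes and is detected
                counts.append(k)
                break
    return counts

def rescale_reflectance(counts, n_photons, a_ref, a_new):
    """Condensed MC rescaling: a photon detected after k scattering
    events is reweighted by (a_new/a_ref)**k, so one reference run
    yields the detected fraction at any other albedo."""
    return sum((a_new / a_ref) ** k for k in counts) / n_photons

counts = simulate_collision_counts(a_ref=0.95, n_photons=100_000)
est = rescale_reflectance(counts, 100_000, a_ref=0.95, a_new=0.90)
```

For this toy model the detected fraction at albedo a has the closed form a·p/(1 - a(1-p)), so the rescaled estimate can be checked against 0.09/0.19 at a = 0.90; the same reweighting principle is what lets a single full transport run cover a range of μ{sub a} values.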
Energy Technology Data Exchange (ETDEWEB)
Habib, B.; Poumarede, B.; Tola, F.; Barthe, J. [CEA, LIST, Dept Technol Capteur et Signal, F-91191 Gif Sur Yvette, (France)
2010-07-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within {+-} 1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. (authors)
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multi-group S{sub N} particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group S{sub N} particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
Energy Technology Data Exchange (ETDEWEB)
Jo, Yu Gwon; Cho, Nam Zin [KAIST, Daejeon (Korea, Republic of)
2014-10-15
The OLG iteration scheme uses overlapping regions for each local problem solved by continuous-energy MC calculation to reduce errors in inaccurate boundary conditions (BCs) that are caused by discretization in space, energy, and angle. However, the overlapping region increases computational burdens and the discretized BCs for continuous-energy MC calculation result in an inaccurate global p-CMFD solution. On the other hand, there also have been several studies on the direct domain decomposed MC calculation where each processor simulates particles within its own domain and exchanges the particles crossing the domain boundary between processors with certain frequency. The efficiency of this method depends on the message checking frequency and the buffer size. Furthermore, it should overcome the load-imbalance problem for better parallel efficiency. Recently, fission and surface source (FSS) iteration method based on banking both fission and surface sources for the next iteration (i.e., cycle) was proposed to give exact BCs for non overlapping local problems in domain decomposition and tested in one-dimensional continuous-energy reactor problems. In this paper, the FSS iteration method is combined with a source splitting scheme to reduce the load imbalance problem and achieve global variance reduction. The performances are tested on a two dimensional continuous-energy reactor problem with domain-based parallelism and compared with the FSS iteration without source splitting. Numerical results show the improvements of the FSS iteration with source splitting. This paper describes the FSS iteration scheme in the domain decomposition method and proposes the FSS iteration combined with the source splitting based on the number of sampled sources, reducing the load-imbalance problem in domain-based parallelism and achieving global variance reduction.
Energy Technology Data Exchange (ETDEWEB)
Hofmann, H.M.; Mertelmeier, T. (Erlangen-Nuernberg Univ., Erlangen (Germany, F.R.). Inst. fuer Theoretische Physik); Mello, P.A. (Instituto Nacional de Investigaciones Nucleares, Mexico City. Lab. del Acelerador); Seligman, T.H. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-12-14
A comparison is presented between predictions of the entropy approach to statistical nuclear reactions, and numerical calculations performed by generating an ensemble of S-matrices in terms of K-matrices with specified statistical distributions for their parameters. The comparison is done for: (a) the 2nd, 3rd and 4th moments of S in a 4-channel case and (b) the actual distribution of the S-matrix elements in a 2-channel case. In both cases the agreement is found to be very good in the domain of strong absorption.
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, T.; Nimal, J.C. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))
1990-01-01
The three-dimensional polykinetic Monte Carlo particle transport code TRIPOLI has been under development in the French Shielding Laboratory at Saclay since 1965. TRIPOLI-1 began to run in 1970 and became TRIPOLI-2 in 1978; since then its capabilities have been improved and many studies have been performed. TRIPOLI can treat stationary or time-dependent problems in shielding and in neutronics. Some examples of solved problems are presented to demonstrate the many possibilities of the system. (author).
Juste, Belén; Miró, Rafael; Monasor, Paula; Verdú, Gumersindo
2015-11-01
Phosphor screens are commonly used in many X-ray imaging applications. The design and optimization of these detectors can be achieved using Monte Carlo codes to simulate radiation transport in scintillation materials and to improve the spatial response. This work presents an exhaustive procedure to measure the spatial resolution of a scintillation flat panel image and to evaluate the agreement with data obtained by simulation. To evaluate the spatial response we have used the Modulation Transfer Function (MTF) parameter. Accordingly, we have obtained the Line Spread Function (LSF) of the system, since the Fourier Transform (FT) of the LSF gives the MTF. The experimental images were acquired using a medical X-ray tube (Toshiba E7299X) and a flat panel (Hamamatsu C9312SK). Measurements were based on the slit method, which measures the response of the system to a line. LSF measurements have been performed using a 0.2 mm wide lead slit superimposed over the flat panel. The detector screen was modelled with the MCNP (version 6) Monte Carlo simulation code in order to analyze the effect of the acquisition setup configuration and to compare the response of scintillator screens with the experimental results. MCNP6 offers the possibility of studying the optical physics parameters (optical scattering and absorption coefficients) of the phosphor screen. The study has been tested for different X-ray tube voltages, from 100 to 140 kV. An acceptable convergence between the MTF results obtained with MCNP6 and the experimental measurements has been obtained.
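The LSF-to-MTF relationship used above (MTF = |Fourier transform of the LSF|, normalized at zero frequency) can be sketched with NumPy; the Gaussian LSF below is a synthetic stand-in for the measured slit profile:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """Modulation Transfer Function as the magnitude of the Fourier
    transform of the line spread function, normalized to 1 at zero
    frequency.  dx is the sampling pitch; returns (frequencies, MTF)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                 # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))        # |FT(LSF)|
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    return freqs, mtf / mtf[0]

# Synthetic example: Gaussian LSF of sigma = 0.2 mm at 0.05 mm pitch.
x = np.arange(-5, 5, 0.05)
lsf = np.exp(-x**2 / (2 * 0.2**2))
freqs, mtf = mtf_from_lsf(lsf, dx=0.05)
```

The same routine applies to both the measured slit profile and the simulated one, so the experimental and MCNP6 MTF curves can be compared point by point in spatial frequency.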
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
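Two items in the outline above, estimating π and inverse transform sampling, fit in a few lines of Python; a minimal sketch:

```python
import math
import random

rng = random.Random(0)

def estimate_pi(n):
    """Hit-or-miss estimate of pi: 4 x the fraction of uniform points
    in the unit square that land inside the quarter circle."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

def sample_free_path(mu):
    """Inverse transform sampling of an exponential free-path length:
    inverting the CDF F(s) = 1 - exp(-mu*s) gives s = -ln(1-U)/mu."""
    return -math.log(1.0 - rng.random()) / mu

pi_est = estimate_pi(200_000)
mean_path = sum(sample_free_path(2.0) for _ in range(100_000)) / 100_000
```

By the Law of Large Numbers both estimates converge to their true values (π and 1/μ), and the Central Limit Theorem gives the familiar 1/√n shrinkage of the statistical error.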
Nagaya, Yasunobu
2014-06-01
The methods to calculate the kinetics parameters β{sub eff} and Λ with the differential operator sampling have been reviewed. The comparison of the results obtained with the differential operator sampling and iterated fission probability approaches has been performed. It is shown that the differential operator sampling approach gives the same results as the iterated fission probability approach within the statistical uncertainty. In addition, the prediction accuracy of the evaluated nuclear data library JENDL-4.0 for the measured β{sub eff}/Λ and β{sub eff} values is also examined. It is shown that JENDL-4.0 gives a good prediction except for the uranium-233 systems. The present results imply the need for revisiting the uranium-233 nuclear data evaluation and performing the detailed sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Zucca Aparicio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrilla, J.; Minambres Moro, A.
2013-07-01
The objective of this work is to evaluate the clinical parameters described in the RTOG 0813 and RTOG 0915 protocols applicable to the PTV and to normal, healthy lung tissue, for patients treated in our institution since April 2008, calculated initially with Pencil Beam and currently recalculated using Monte Carlo. It is interesting to remark that the RTOG 0813 protocol replaces the previous RTOG 0236, which expressly stated that no heterogeneity corrections should be made in the calculation of dose in lung lesions. (Author)
Žukauskaite, A; Plukiene, R; Plukis, A
2007-01-01
Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows obtaining answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (γ-ray beams, 1-10 MeV), and HIMAC and ISIS-800 (high-energy neutron transport, 20-800 MeV, in iron and concrete). The results were then compared with experimental data.
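For uncollided flux, the photon part of such a shielding calculation reduces to exponential attenuation, which an analog Monte Carlo reproduces by sampling free paths. A sketch, where the attenuation coefficient and slab thickness are illustrative values, not parameters from the cited experiments:

```python
import math
import random

rng = random.Random(7)

def transmitted_fraction(mu, thickness, n):
    """Analog MC estimate of uncollided transmission through a slab:
    sample exponential free paths s = -ln(1-U)/mu and count photons
    whose first interaction lies beyond the slab.  The analytic
    answer is exp(-mu * thickness)."""
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu > thickness)
    return hits / n

frac = transmitted_fraction(mu=0.5, thickness=2.0, n=200_000)
# analytic value: exp(-0.5 * 2.0) = exp(-1)
```

Full shielding codes extend this kernel with scattering, energy loss and buildup, which is why the simulated spectra rather than a simple exponential are compared with the experimental data.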
Update on the Status of the FLUKA Monte Carlo Transport Code
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow better consistency between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.
Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A
Energy Technology Data Exchange (ETDEWEB)
Taasti, Vicki Trier; Knudsen, Helge [Dept. of Physics and Astronomy, Aarhus University (Denmark); Holzscheiter, Michael H. [Dept. of Physics and Astronomy, Aarhus University (Denmark); Dept. of Physics and Astronomy, University of New Mexico (United States); Sobolevsky, Nikolai [Institute for Nuclear Research of the Russian Academy of Sciences (INR), Moscow (Russian Federation); Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Thomsen, Bjarne [Dept. of Physics and Astronomy, Aarhus University (Denmark); Bassler, Niels, E-mail: bassler@phys.au.dk [Dept. of Physics and Astronomy, Aarhus University (Denmark)
2015-03-15
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment has been applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed and which give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of the new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even though experimental data indicate that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, an RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics.
Energy Technology Data Exchange (ETDEWEB)
Renaud, M; Seuntjens, J [McGill University, Montreal, QC (Canada); Roberge, D [Centre Hospitalier de l' Universite de Montreal, Montreal, QC (Canada)
2014-06-15
Purpose: To assess the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes under the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and the number of primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE), and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed gains of 937× and 508× over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy.
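The scaling of latent uncertainty with track bank size described above can be illustrated with a toy model. The sketch below is not the authors' GPU code: the track bank is reduced to scalar "dose deposits", and all numerical values (bank sizes, Gaussian parameters, history counts) are illustrative assumptions.

```python
import random
import statistics

def pmc_dose(bank_size, histories, rng):
    """Toy pre-calculated MC: a finite bank of pre-generated track 'doses'
    is recycled during transport (values illustrative, not EGSnrc/GEANT4)."""
    bank = [rng.gauss(1.0, 0.3) for _ in range(bank_size)]
    total = sum(rng.choice(bank) for _ in range(histories))
    return total / histories

def latent_uncertainty(bank_size, repeats=150, histories=5000):
    """Relative spread of the scored dose across independently generated
    banks; this spread is dominated by the track-recycling (latent) noise."""
    doses = [pmc_dose(bank_size, histories, random.Random(seed))
             for seed in range(repeats)]
    return statistics.stdev(doses) / statistics.mean(doses)

# Latent uncertainty falls roughly as 1/sqrt(bank size).
for n in (100, 1600):
    print(n, round(latent_uncertainty(n), 4))
```

Because the latent spread falls roughly as the inverse square root of the bank size, this relation can be inverted to pick the smallest bank meeting a target dose uncertainty, which is the tuning the abstract describes.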
Gaudoin, R
2000-01-01
correlation terms. 2. We use standard VMC in conjunction with iterative variance minimisation to study bulk aluminium as a test bed for future work on surfaces. QMC has been used successfully for insulators and semiconductors, but little is known about applying it to metals. LDA calculations for aluminium are reasonably accurate for the bulk modulus and lattice constant. In contrast, the LDA cohesive energy is 1.25 times the experimental value. Due to the large statistical uncertainties the VMC result for the bulk modulus is disappointing, but the VMC cohesive energy is a clear improvement on LDA. In general, we find that QMC is applicable to metals and that the finite-size and other errors are qualitatively no different from those encountered in non-metallic systems. The quantum many-body problem is among the most challenging in physics. A popular approach is to reduce the problem to the study of a single particle in an effective potential. These one-particle schemes, the most popular of which is density fun...
Computer program for calculating thermodynamic and transport properties of fluids
Hendricks, R. C.; Braon, A. K.; Peller, I. C.
1975-01-01
A computer code has been developed to provide thermodynamic and transport properties of liquid argon, carbon dioxide, carbon monoxide, fluorine, helium, methane, neon, nitrogen, oxygen, and parahydrogen. The equation of state and transport coefficients are updated, and other fluids are added, as new material becomes available.
Directory of Open Access Journals (Sweden)
Ghavami Seyed Mostafa
2016-01-01
Using nano-scaled radionuclides in radionuclide therapy significantly reduces particle trapping in the organ vessels and avoids thrombosis formation. Additionally, uniform distribution in the target organ may be another benefit of nanoradionuclides in radionuclide therapy. A Monte Carlo simulation was conducted to model a mathematical humanoid phantom, and the liver cells of the simulated phantom were filled with 90Y nanospheres. Healthy organ doses and the fatal and non-fatal risks of the surrounding organs were estimated. The estimations and calculations were made for four different distribution patterns of the radionuclide seeds. The maximum doses and risks estimated for the surrounding organs were obtained for the distribution model with the nanoradionuclides concentrated at the edge of the liver. The dose equivalent, effective dose, and fatal plus non-fatal risk values were obtained as 7.51E-03 Sv/Bq, 3.01E-01 Sv/Bq, and 9.16E-01 cases/10⁴ persons for the bladder, colon, and kidney of the modeled phantom, respectively. The mentioned values were the maxima among the studied distribution models. Maximum values of the Normal Tissue Complication Probability for the healthy organs were calculated as 5.9-8.9%. The use of 90Y nanoparticles shows promising dosimetric properties in the MC simulation results, considering the non-toxicity reports for the radionuclide.
Sarkadi, L
2015-01-01
The three-body dynamics of the ionization of atomic hydrogen by 30 keV antiproton impact has been investigated by calculating fully differential cross sections (FDCS) using the classical trajectory Monte Carlo (CTMC) method. The results of the calculations are compared with the predictions of quantum mechanical descriptions: the semi-classical time-dependent close-coupling theory, the fully quantal, time-independent close-coupling theory, and the continuum-distorted-wave-eikonal-initial-state model. In the analysis, particular emphasis was put on the role played by the nucleus-nucleus (NN) interaction in the ionization process. For low-energy electron ejection, CTMC predicts a large NN interaction effect on the FDCS, in agreement with the quantum mechanical descriptions. By examining individual particle trajectories it was found that the relative motion between the electron and the nuclei is coupled very weakly with that between the nuclei; consequently, the two motions can be treated independently. A simple ...
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R., E-mail: rebello@ime.eb.br, E-mail: daltongirao@yahoo.com.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Silva, Ademir X., E-mail: ademir@nuclear.ufrj.br [Corrdenacao dos Programas de Pos-Graduacao em Egenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2015-07-01
Nuclear explosions are usually described in terms of their total yield and associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation' covering everything else. The initial nuclear radiation can also be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work aims at presenting isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modeling in a radiation protection context. The isodose curves are related to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 kt. Neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated in the form of ambient dose equivalent due to neutrons, H*(10){sub n}. (author)
Study of deposited energy in lung tissue from radon's progeny calculated by Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Angeles, A. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Espinosa, G. [UNAM, Instituto de Fisica, Apdo. Postal 20-364, 01000 Mexico D. F. (Mexico)
2011-02-15
Because of the distribution of deposited {sup 222}Rn progeny in the lung airways, these sources can contribute significantly to the absorbed dose of critical cells in the neighbourhood of an alpha track, through the alpha particles from {sup 218}Po and {sup 214}Po. According to epidemiological data, lung cancers are primarily bronchogenic and mainly originate in the first five airway generations of the bronchial tree. For deposited energy calculations, a uniform deposit in the source layers, with the whole layers considered as sources, has generally been assumed. Discrete point deposits in the different and most important bronchial (BB) and bronchiolar (bb) layers for the main generations are a more realistic case. For these reasons we have calculated, by Monte Carlo, the average deposited energy in the most important target cell layers for the main BB and bb branch generations, considering a punctual deposit of the radioactive {sup 222}Rn progeny in the source epithelium walls; from this location it irradiates the neighbouring cells in all directions. (Author)
State-of-the-art Monte Carlo 1988
Energy Technology Data Exchange (ETDEWEB)
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Energy Technology Data Exchange (ETDEWEB)
Pacilio, M.; Lanconelli, N.; Lo Meo, S.; Betti, M.; Montani, L.; Torres Aroche, L. A.; Coca Perez, M. A. [Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Physics, Alma Mater Studiorum University of Bologna, Viale Berti-Pichat 6/2, Bologna 40127 (Italy); Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Medical Physics, Azienda Ospedaliera Sant' Andrea, Via di Grotarossa 1035, Rome 00189 (Italy); Department of Medical Physics, Center for Clinical Researches, Calle 34 North 4501, Havana 11300 (Cuba)
2009-05-15
Several updated Monte Carlo (MC) codes are available to perform calculations of voxel S values for radionuclide targeted therapy. The aim of this work is to analyze the differences in the calculations obtained by different MC codes and their impact on absorbed dose evaluations performed by voxel dosimetry. Voxel S values for monoenergetic sources (electrons and photons) and different radionuclides ({sup 90}Y, {sup 131}I, and {sup 188}Re) were calculated. Simulations were performed in soft tissue. Three general-purpose MC codes were employed for simulating radiation transport: MCNP4C, EGSnrc, and GEANT4. The data published by the MIRD Committee in Pamphlet No. 17, obtained with the EGS4 MC code, were also included in the comparisons. The impact of the differences (in terms of voxel S values) among the MC codes was also studied by convolution calculations of the absorbed dose in a volume of interest. For a uniform activity distribution of a given radionuclide, dose calculations were performed on spherical and elliptical volumes, varying the mass from 1 to 500 g. For simulations with monochromatic sources, differences for self-irradiation voxel S values were mostly confined within 10% for both photons and electrons, but with electron energy less than 500 keV, the voxel S values for the first-neighbor voxels showed large differences (up to 130%, with respect to EGSnrc) among the updated MC codes. For radionuclide simulations, noticeable differences arose in voxel S values, especially in the bremsstrahlung tails, or when a high contribution from electrons with energy of less than 500 keV is involved. In particular, for {sup 90}Y the updated codes showed a remarkable divergence in the bremsstrahlung region (up to about 90% in terms of voxel S values) with respect to the EGS4 code. Further, variations of up to about 30% were observed, for small source-target voxel distances, when low-energy electrons cover an important part of the emission spectrum of the radionuclide.
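The convolution step mentioned above, where the absorbed dose is obtained by convolving voxel S values with the cumulated activity distribution, can be sketched as follows. The kernel values are hypothetical placeholders, not MIRD Pamphlet No. 17 data; real voxel S values come from MC codes such as those compared in the abstract.

```python
import numpy as np

# Hypothetical voxel S kernel (dose per unit cumulated activity): a
# self-irradiation value at the centre and smaller first-neighbour values.
s_kernel = np.zeros((3, 3, 3))
s_kernel[1, 1, 1] = 1.0e-3                      # self-dose voxel S value
for dx, dy, dz in [(0, 0, 1), (0, 1, 0), (1, 0, 0),
                   (0, 0, -1), (0, -1, 0), (-1, 0, 0)]:
    s_kernel[1 + dx, 1 + dy, 1 + dz] = 1.2e-4   # first-neighbour S value

def dose_by_convolution(activity, kernel):
    """D(r) = sum over r' of A(r') * S(r - r'); the kernel is symmetric,
    so this correlation-style loop equals the convolution."""
    nx, ny, nz = activity.shape
    k = kernel.shape[0] // 2
    padded = np.pad(activity, k)                # zero activity off the grid
    dose = np.zeros_like(activity)
    for ix in range(kernel.shape[0]):
        for iy in range(kernel.shape[1]):
            for iz in range(kernel.shape[2]):
                w = kernel[ix, iy, iz]
                if w:
                    dose += w * padded[ix:ix + nx, iy:iy + ny, iz:iz + nz]
    return dose

# Uniform cumulated activity in a 5x5x5 region of a 9x9x9 voxel grid.
activity = np.zeros((9, 9, 9))
activity[2:7, 2:7, 2:7] = 1.0
dose = dose_by_convolution(activity, s_kernel)
```

In the interior of the uniform region each voxel receives the full kernel sum (here 1.0e-3 + 6 x 1.2e-4), which is why discrepancies in first-neighbour S values between MC codes propagate directly into the dose evaluation.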
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs) expressed in absorbed dose per air kerma free-in-air for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of FLUKA code in computation of organ DCCs for photons using ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV in comparison with the results published in ICRP 74. The DCCs of female voxel phantom were found to be higher in comparison with male phantom for almost all organs in both the geometries.
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
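The split described above, pre-generating the turbulent field and then batch-integrating many test-particle trajectories, can be sketched in vectorized form, with NumPy batching standing in for the GPU's SIMD lanes. All mode counts, spectral slopes and field amplitudes below are illustrative assumptions, not the parameters of the published code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-generated turbulence: a superposition of plane-wave modes with a
# Kolmogorov-type amplitude spectrum (toy normalisation, dB/B0 ~ 0.1).
N_MODES = 64
k = np.logspace(-1.0, 1.0, N_MODES)                  # wavenumbers
amp = k ** (-5.0 / 6.0)
amp *= 0.1 / np.sqrt((amp ** 2).sum())
phase = rng.uniform(0.0, 2.0 * np.pi, N_MODES)
khat = rng.normal(size=(N_MODES, 3))
khat /= np.linalg.norm(khat, axis=1, keepdims=True)
# Polarisation perpendicular to the propagation direction (div B = 0).
xi = np.cross(khat, [0.0, 0.0, 1.0])
xi /= np.linalg.norm(xi, axis=1, keepdims=True)

def total_B(pos):
    """Background field B0*z plus dB, evaluated for a whole batch of
    particle positions at once (the batched layout mirrors SIMD)."""
    arg = (pos @ khat.T) * k + phase                 # (n_particles, N_MODES)
    dB = (amp * np.cos(arg)) @ xi                    # (n_particles, 3)
    return dB + np.array([0.0, 0.0, 1.0])

def boris_push(pos, vel, dt, steps):
    """Boris rotation (q/m = 1 in code units); conserves particle speed
    exactly in a pure magnetic field, a standard test-particle check."""
    for _ in range(steps):
        t = 0.5 * dt * total_B(pos)
        s = 2.0 * t / (1.0 + (t * t).sum(axis=1, keepdims=True))
        v_prime = vel + np.cross(vel, t)
        vel = vel + np.cross(v_prime, s)
        pos = pos + dt * vel
    return pos, vel

pos = rng.normal(size=(256, 3))
vel = rng.normal(size=(256, 3))
speed0 = np.linalg.norm(vel, axis=1)
pos, vel = boris_push(pos, vel, dt=0.05, steps=200)
```

The expensive part is the field evaluation inside `total_B`, which is why the article separates turbulence generation from the particle push and offloads it to the graphics card; running mean-square displacements of `pos` over time would yield the diffusion coefficients the original CPU code was built for.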
Energy Technology Data Exchange (ETDEWEB)
Lacornerie, T.; Prevost, B.; Reynaert, N. [Centre Oscar-Lambret, Lille (France); Lisbona, A.; Thillays, F. [Institut de cancerologie de l' Ouest Rene-Gauducheau, Nantes (France)
2011-10-15
Since important differences are observed in lung tissue for some dose calculation algorithms (Pencil Beam and Monte Carlo for IPlan RT Dose, Ray-Tracing and Monte Carlo for CyberKnife, Pencil Beam and Collapsed Cone for the Clinac 6V), the authors report the search for a way to adapt protocols established with older algorithms and to minimize the differences between teams using the same irradiation scheme, for example three 20 Gy fractions. They have studied whether the prescription of a peripheral isodose to the planning target volume (PTV) is the best approach. Irradiation plans were calculated for different types of accelerators, with two types of algorithms, and for three different lesion sizes. The doses received by 98, 50 and 2 per cent of the volume are analyzed for the PTV, the gross tumour volume (GTV) and the irradiated lung volumes. Differences become more important as target size decreases. It appears that type B algorithms (Monte Carlo, Collapsed Cone) are recommended. Short communication
Energy Technology Data Exchange (ETDEWEB)
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2014-06-01
Purpose: To study the dosimetric differences resulting from the use of the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (a total of 27 tumors). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC) and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, and 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, it was found that the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.
Energy Technology Data Exchange (ETDEWEB)
Mainardi, E. E-mail: enrico@nuc.berkeley.edu; Premuda, F.; Lee, E
2004-01-01
Inertial confinement fusion (ICF) aims to induce implosions of D-T pellets to obtain an extremely dense and hot plasma with lasers or heavy-ion beams. For heavy-ion fusion (HIF), recent research has focused on 'liquid-protected' designs that allow highly compact target chambers. In the design of a reactor such as HYLIFE-II [Fus. Technol. 25 (1984); HYLIFE-II Progress Report, UCID-21816, 4.82-100], the liquid used is a molten salt made of F{sup 19}, Li{sup 6}, Li{sup 7}, Be{sup 9} (called flibe). Flibe allows the final-focus magnets to be closer to the target, which helps to reduce the focal spot size and in turn the size of the driver, with a large reduction in the cost of HIF electricity. Consequently, the superconducting coils of the magnets closer to the D-T neutron source will potentially suffer higher damage, though they can stand only a certain amount of deposited energy before quenching. This work has focused primarily on verifying that the total energy deposited by fusion neutrons and induced {gamma} rays remains under such limit values, and its final purpose is the optimization of the shielding of the magnetic lens system from the points of view of the geometrical configuration and of the physical nature of the materials adopted. The system is analyzed in terms of six geometrical models going from simplified up to much more realistic representations of a system of 192 beam lines, each focused by six magnets. A 3-D transport calculation of the radiation penetrating through the ducts, which takes into account the complexity of the system, requires Monte Carlo methods. The technical nature of the design problem and the methodology followed were presented in a previous paper [Nucl. Instr. and Meth. A 464 (2001) 410], briefly summarizing the results for the deposited energy distribution on the six focal magnets of a beam line. Now a comparison of the performances of the two codes TART98 [TART98: A Coupled Neutron-Photon 3-D Combinational Geometry Monte Carlo
Ko, Hyunseok; Szlufarska, Izabela; Morgan, Dane
2016-01-01
The diffusion of silver (Ag) impurities in high energy grain boundaries (HEGBs) of cubic (3C) silicon carbide (SiC) is studied using an ab initio based kinetic Monte Carlo (kMC) model. This study assesses the hypothesis that HEGB diffusion is responsible for Ag release in Tristructural-Isotropic fuel particles, and provides a specific example to increase understanding of impurity diffusion in highly disordered grain boundaries. The HEGB environment was modeled by an amorphous SiC. The structure and stability of Ag defects were calculated using a density functional theory code. The defect energetics suggested that the fastest diffusion takes place via an interstitial mechanism in a-SiC. The formation energies of Ag interstitials and the kinetically resolved activation energies between them were well approximated with Gaussian distributions that were then sampled in the kMC. The diffusion of Ag was simulated with the effective medium model using kMC. At 1200-1600 °C, Ag in a HEGB is predicted to exhibit an Arrhenius ...
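The sampling scheme described above, with hop barriers drawn from fitted Gaussian distributions and fed to a residence-time kMC, can be sketched for a one-dimensional walker. The barrier mean and width below are placeholders, not the paper's DFT-derived values.

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_diffusivity(temp_K, hops=20000, seed=0):
    """Residence-time kMC for a 1-D interstitial walker whose hop barriers
    are drawn from a Gaussian, mimicking a disordered grain-boundary
    landscape. Mean (1.0 eV) and width (0.2 eV) are illustrative only."""
    rng = random.Random(seed)
    nu0 = 1.0e13          # attempt frequency, 1/s (typical order of magnitude)
    t = 0.0
    for _ in range(hops):
        # Fresh left/right barriers sampled from N(1.0 eV, 0.2 eV) each step.
        rates = [nu0 * math.exp(-max(rng.gauss(1.0, 0.2), 0.0) / (KB * temp_K))
                 for _ in range(2)]
        # Exponential waiting time for the residence-time (BKL) algorithm.
        t += -math.log(1.0 - rng.random()) / sum(rates)
    # Uncorrelated +/-1 hops: MSD = hops * a^2, so D = hops / (2 t) in a^2/s.
    return hops / (2.0 * t)

# Higher temperature gives faster, Arrhenius-like diffusion.
for T in (1473.0, 1873.0):
    print(T, kmc_diffusivity(T))
```

Fitting `ln D` against `1/T` over several temperatures would recover the effective Arrhenius behaviour the abstract reports for the 1200-1600 °C range.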
Wu, D.; He, X. T.; Yu, W.; Fritzsche, S.
2017-02-01
A physical model based on a Monte Carlo approach is proposed to calculate the ionization dynamics of hot solid-density plasmas within particle-in-cell (PIC) simulations, where the impact (collision) ionization (CI), electron-ion recombination (RE), and ionization potential depression (IPD) by surrounding plasmas are taken into consideration self-consistently. Compared with other models, which are applied in the literature to plasmas near thermal equilibrium, the temporal relaxation of the ionization dynamics can also be simulated by the proposed model. Moreover, this model is general and can be applied to both single elements and alloys with quite different compositions. The proposed model is implemented into a PIC code, with (final) ionization equilibria sustained by the competition between CI and its inverse process (i.e., RE). Comparisons between the full model and the model without IPD or RE are performed. Our results indicate that for bulk aluminium at temperatures of 1 to 1000 eV, (i) the averaged ionization degree increases when IPD is included, while (ii) the averaged ionization degree is significantly overestimated when RE is neglected. A direct comparison from the PIC code is made with existing models for the dependence of the averaged ionization degree on thermal equilibrium temperature, and shows good agreement with results generated from the Saha-Boltzmann model and/or the FLYCHK code.
Wu, D; Yu, W; Fritzsche, S
2016-01-01
A Monte-Carlo approach to proton stopping in warm dense matter is implemented into an existing particle-in-cell code. The model is based on multiple binary collisions among electron-electron, electron-ion and ion-ion pairs, taking into account contributions from both free and bound electrons, and allows one to calculate particle stopping in a much more natural manner. In the low-temperature limit, when "all" electrons are bound to the nucleus, the stopping power converges to the predictions of Bethe-Bloch theory, which shows good consistency with data provided by NIST. With rising temperature, more and more bound electrons are ionized, giving rise to an increased stopping power relative to cold matter, which is consistent with the report of a recent experimental measurement [Phys. Rev. Lett. 114, 215002 (2015)]. When the temperature is further increased and ionization reaches its maximum, a lowered stopping power is observed, which is due to the suppression of the collision frequency between the projected proton beam and h...
Wu, D.; He, X. T.; Yu, W.; Fritzsche, S.
2017-02-01
A Monte Carlo approach to proton stopping in warm dense matter is implemented into an existing particle-in-cell code. This approach is based on multiple electron-electron, electron-ion, and ion-ion binary collisions and accounts for both the free and the bound electrons in the plasmas. This approach enables one to calculate the stopping of particles in a more natural manner than existing theoretical treatments. In the low-temperature limit, when "all" electrons are bound to the nucleus, the stopping power coincides with the predictions from the Bethe-Bloch formula and is consistent with the data from the National Institute of Standards and Technology database. At higher temperatures, some of the bound electrons are ionized, and this increases the stopping power in the plasmas, as demonstrated by A. B. Zylstra et al. [Phys. Rev. Lett. 114, 215002 (2015)], 10.1103/PhysRevLett.114.215002. At even higher temperatures, the degree of ionization reaches a maximum, and the stopping power then decreases due to the suppression of the collision frequency between the projected proton beam and the hot plasmas in the target.
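The cold-matter limit invoked above, where the stopping power reduces to the Bethe-Bloch prediction, can be reproduced with a short sketch of the uncorrected Bethe formula. The aluminium material constants are standard tabulated values; shell and density-effect corrections are deliberately omitted, so this is an approximation valid for MeV-range protons.

```python
import math

ME_C2 = 0.510998e6   # electron rest energy, eV
MP_C2 = 938.272e6    # proton rest energy, eV
K = 0.307075         # 4*pi*N_A*r_e^2*m_e*c^2, in MeV mol^-1 cm^2

def bethe_bloch(T_MeV, Z=13, A=26.98, I_eV=166.0, z=1):
    """Mass stopping power (MeV cm^2/g) of cold matter for a proton of
    kinetic energy T. Defaults are aluminium (Z=13, A=26.98, I=166 eV).
    Uses W_max ~ 2 m_e c^2 beta^2 gamma^2, valid for heavy projectiles."""
    gamma = 1.0 + T_MeV * 1e6 / MP_C2
    beta2 = 1.0 - 1.0 / gamma**2
    arg = 2.0 * ME_C2 * beta2 * gamma**2 / I_eV
    return K * z**2 * (Z / A) / beta2 * (math.log(arg) - beta2)

# Above the Bragg-peak region the stopping power falls with energy,
# roughly as (1/beta^2) * ln(beta^2 gamma^2).
for T in (1.0, 10.0, 100.0):
    print(T, "MeV:", bethe_bloch(T), "MeV cm^2/g")
```

Comparing such values against the NIST PSTAR tabulation, as the abstract does for the code's low-temperature limit, is the standard sanity check for any new stopping-power implementation.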
A Study of Neutronics Effects of the Spacer Grids in a Typical PWR via Monte Carlo Calculation
Energy Technology Data Exchange (ETDEWEB)
Bach, Tran Xuan; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2014-10-15
In neutronics analysis, the spacer grids which support the fuel rods are not explicitly described; instead, they are homogenized with the coolant. However, the effects of neglecting or simplifying the spacer grids are not reported in the literature, to the best of our knowledge. In this paper, to investigate the effects of spacer grids in neutronics analysis, a detailed description of the spacer grids is added to the KAIST benchmark problem 1B. Then, the effective multiplication factor, the spatial distributions of neutron flux, and the energy spectrum are obtained for the two cases (with and without spacer grids). Numerical results show that the effects of the spacer grids are not negligible. To investigate these effects, the spacer grid geometry is described in detail in the Monte Carlo calculation, and the two cases are compared in the context of a modified KAIST benchmark problem 1B. Case 1 does not have spacer grids, with the space filled by coolant instead; Case 2 includes the spacer grids. A difference in the neutron flux spectra is also observed. Thus, the effect of the spacer grids should be considered in whole-core reactor analysis. In practice, the spacer grids are homogenized into the coolant to account for their effect. As a further study, therefore, it would be worthwhile to investigate the differences between the homogenization and the explicit description of the spacer grids.
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled by using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and a 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Monte Carlo calculations of LR115 detector response to {sup 222}Rn in the presence of {sup 220}Rn
Energy Technology Data Exchange (ETDEWEB)
Nikezic, D.; Yu, K.N.
2000-04-01
The sensitivities (in m) of bare LR115 detectors and of detectors in diffusion chambers to the {sup 222}Rn and {sup 220}Rn chains are calculated by the Monte Carlo method. The partial sensitivities of bare detectors to the {sup 222}Rn chain are larger than those to the {sup 220}Rn chain, owing to the higher energies of alpha particles in the {sup 220}Rn chain and the upper energy limit for detection of the LR115 detector. However, the total sensitivities are approximately equal because {sup 220}Rn is always in equilibrium with its first progeny, which is not the case for the {sup 222}Rn chain. The total sensitivity of bare LR115 detectors to the {sup 222}Rn chain depends linearly on the equilibrium factor. The overestimation in {sup 222}Rn measurements with bare detectors caused by {sup 220}Rn in air can reach 10% under normal environmental conditions. An analytical relationship between the equilibrium factor and the ratio of the track densities on the bare detector and on the detector enclosed in a chamber is given in the last part of the paper. This ratio is also affected by {sup 220}Rn, which can disturb the determination of the equilibrium factor.
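The registration-window effect invoked above (an etchable track forms only when the alpha particle arrives at the film within an energy window) can be sketched with a toy Monte Carlo calculation in Python. The window limits, the constant stopping power, and the source geometry below are illustrative assumptions, not values from the paper.

```python
import random

def lr115_hit_fraction(e0_mev, n=20_000, seed=1):
    """Toy Monte Carlo sketch: fraction of alpha particles of initial energy
    e0_mev that reach a bare detector plane with residual energy inside the
    LR115 registration window. The window (1.6-4.7 MeV), the constant
    stopping power, and the geometry are illustrative assumptions."""
    rng = random.Random(seed)
    stopping = 0.9                      # MeV per cm of air path (crude constant)
    r_max = e0_mev / stopping           # maximum range in this toy model
    e_lo, e_hi = 1.6, 4.7               # assumed registration window
    hits = 0
    for _ in range(n):
        d = r_max * rng.random() ** (1.0 / 3.0)  # emission distance, uniform in volume
        mu = rng.random()                         # cosine of angle toward detector
        path = d / max(mu, 1e-9)                  # slant path length to the plane
        e_res = e0_mev - stopping * path          # residual energy at arrival
        if e_lo <= e_res <= e_hi:
            hits += 1
    return hits / n

# A 5.49 MeV alpha ({sup 222}Rn) can arrive inside the window; an alpha that
# starts below the lower limit can never register a track.
f_rn = lr115_hit_fraction(5.49)
f_low = lr115_hit_fraction(1.0)
```

In this toy model an alpha emitted below the lower window limit is never registered, which is the mechanism behind the partial-sensitivity differences discussed in the abstract.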
Greco, Cristina; Jiang, Ying; Chen, Jeff Z. Y.; Kremer, Kurt; Daoulas, Kostas Ch.
2016-11-01
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
Direct method for calculating temperature-dependent transport properties
Liu, Y.; Yuan, Z.; Wesselink, R.J.H.; Starikov, A.A.; Schilfgaarde, van M.; Kelly, P.J.
2015-01-01
We show how temperature-induced disorder can be combined in a direct way with first-principles scattering theory to study diffusive transport in real materials. Excellent (good) agreement with experiment is found for the resistivity of Cu, Pd, Pt (and Fe) when lattice (and spin) disorder is calculated.
Energy Technology Data Exchange (ETDEWEB)
Damilakis, J; Stratakis, J; Solomou, G [University of Crete, Heraklion (Greece)
2014-06-01
Purpose: It is well known that pacemaker implantation is sometimes needed in pregnant patients with symptomatic bradycardia. To our knowledge, there is no reported experience regarding radiation doses to the unborn child resulting from fluoroscopy during pacemaker implantation. The purpose of the current study was to develop a method for estimating embryo/fetus dose from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all trimesters of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study. Three mathematical anthropomorphic phantoms representing the average pregnant patient at the first, second and third trimesters of gestation were generated using Bodybuilder software (White Rock Science, White Rock, NM). The normalized embryo/fetus doses from the posteroanterior (PA), the 30° left-anterior oblique (LAO) and the 30° right-anterior oblique (RAO) projections were calculated for a wide range of kVp (50–120 kVp) and total filtration values (2.5–9.0 mm Al). Results: The results consist of radiation doses normalized to a) entrance skin dose (ESD) and b) dose area product (DAP), so that the dose to the unborn child from any fluoroscopic technique and x-ray device used can be calculated. ESD-normalized doses ranged from 0.008 (PA, first trimester) to 2.519 μGy/mGy (RAO, third trimester). DAP-normalized doses ranged from 0.051 (PA, first trimester) to 12.852 μGy/(Gy·cm²) (RAO, third trimester). Conclusion: Embryo/fetus doses from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all stages of gestation can be estimated using the method developed in this study. This study was supported by the Greek Ministry of Education and Religious Affairs, General Secretariat for Research and Technology, Operational Program ‘Education and Lifelong Learning’, ARISTIA (Research project: CONCERT)
Energy Technology Data Exchange (ETDEWEB)
Farah, Jad
2011-10-06
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of the internal organs and breasts according to body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were expressed as equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method combining Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)
Transport properties of boron nanotubes investigated by ab initio calculation
Institute of Scientific and Technical Information of China (English)
Guo Wei; Hu Yi-Bin; Zhang Yu-Yang; Du Shi-Xuan; Gao Hong-Jun
2009-01-01
We investigate atomic and electronic structures of boron nanotubes (BNTs) by using density functional theory (DFT). The transport properties of BNTs with different diameters and chiralities are studied by the Keldysh nonequilibrium Green function (NEGF) method. It is found that the cohesive energies and conductances of BNTs decrease as their diameters decrease. It is more difficult to form (N, 0) tubes than (M, M) tubes when the diameters of the two kinds of tubes are comparable. However, the (N, 0) tubes have a higher conductance than the (M, M) tubes. When the BNTs are connected to gold electrodes, the coupling between the BNTs and the electrodes will affect the transport properties of the tubes significantly.
Molecular-dynamics calculation of the vacancy heat of transport
Energy Technology Data Exchange (ETDEWEB)
Schelling, Patrick K.; Ernotte, Jacques; Shokeen, Lalit; Tucker, William C. [Advanced Material Processing and Analysis Center and Department of Physics, University of Central Florida, 4000 Central Florida Blvd., Orlando, Florida 32816 (United States)]; Woods Halley, J. [Department of Physics, University of Minnesota, 116 Church Street SE, Minneapolis, Minnesota 55455 (United States)]
2014-07-14
We apply the recently developed constrained-dynamics method to elucidate the thermodiffusion of vacancies in a single-component material. The derivation and assumptions used in the method are clearly explained. Next, the method is applied to compute the reduced heat of transport Q{sub v}{sup *}−h{sub fv} for vacancies in a single-component material. Results from simulations using three different Morse potentials, with one providing an approximate description of Au, and an embedded-atom model potential for Ni are presented. It is found that the reduced heat of transport Q{sub v}{sup *}−h{sub fv} may take either positive or negative values depending on the potential parameters and exhibits some dependence on temperature. It is also found that Q{sub v}{sup *}−h{sub fv} may be correlated with the activation entropy. The results are discussed in comparison with experimental and previous simulation results.
Computer program for calculating technological parameters of underground transport
Energy Technology Data Exchange (ETDEWEB)
Kreimer, E.L. (DonUGI (USSR))
1990-05-01
Reports on an analytical method developed at DonUGI for determining technological parameters and indices of mine haulage performance. A calculation program intended for personal computers and minicomputers is described and designed especially to consider haulage by electric locomotives. The program can be used in an interactive manner and it enables haulage systems of arbitrary complexity to be calculated in 2-4 minutes. The program also allows the effect of haulage on working face output to be evaluated quantitatively. Haulage systems of all mines of the Selidovugol' association were analyzed with the aid of the program in 1988; results for the Ukraina mine are presented in tables.
The calculation of transport phenomena in electromagnetically levitated metal droplets
El-Kaddah, N.; Szekely, J.
1982-01-01
A mathematical representation has been developed for the electromagnetic force field, fluid flow field, and solute concentration field of levitation-melted metal specimens. The governing equations consist of the conventional transport equations combined with the appropriate expressions for the electromagnetic force field. The predictions obtained by solving the governing equations numerically on a digital computer are in good agreement with lifting force and average temperature measurements reported in the literature.
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in the case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and against the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were small, supporting the use of ScintSim1 for detector design and optimization.
Energy Technology Data Exchange (ETDEWEB)
Nimal, J.C.; Vergnaud, T. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))
1990-01-01
This paper describes the most important features of the Monte Carlo code TRIPOLI-2. This code solves the Boltzmann equation in three-dimensional geometries for coupled neutron and gamma-ray problems. Particular emphasis is devoted to the biasing techniques, which are very important for deep penetration. Future developments in TRIPOLI are described in the conclusion. (author).
Liu, Baoshun; Li, Ziqiang; Zhao, Xiujian
2015-02-21
In this research, Monte-Carlo Continuity Random Walking (MC-RW) model was used to study the relation between electron transport and photocatalysis of nano-crystalline (nc) clusters. The effects of defect energy disorder, spatial disorder of material structure, electron density, and interfacial transfer/recombination on the electron transport and the photocatalysis were studied. Photocatalytic activity is defined as 1/τ from a statistical viewpoint with τ being the electron average lifetime. Based on the MC-RW simulation, a clear physical and chemical "picture" was given for the photocatalytic kinetic analysis of nc-clusters. It is shown that the increase of defect energy disorder and material spatial structural disorder, such as the decrease of defect trap number, the increase of crystallinity, the increase of particle size, and the increase of inter-particle connection, can enhance photocatalytic activity through increasing electron transport ability. The increase of electron density increases the electron Fermi level, which decreases the activation energy for electron de-trapping from traps to extending states, and correspondingly increases electron transport ability and photocatalytic activity. Reducing recombination of electrons and holes can increase electron transport through the increase of electron density and then increases the photocatalytic activity. In addition to the electron transport, the increase of probability for electrons to undergo photocatalysis can increase photocatalytic activity through the increase of the electron interfacial transfer speed.
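The qualitative link drawn above between trapping, transport, and activity (defined as 1/τ) can be sketched with a minimal random-walk model in Python. All parameters below (trap probability, release time, transfer probability) are illustrative assumptions, not values from the MC-RW study.

```python
import random

def mean_lifetime(trap_prob, n_walkers=4000, p_transfer=0.05, seed=3):
    """Toy MC random walk: each hop costs one time unit; with probability
    trap_prob the electron falls into a trap and waits an extra release
    time (exponential, mean 20 time units); the walk ends when the
    electron undergoes interfacial transfer (probability p_transfer/hop)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        t = 0.0
        while True:
            t += 1.0                                # free hop
            if rng.random() < trap_prob:
                t += rng.expovariate(1.0 / 20.0)    # de-trapping delay
            if rng.random() < p_transfer:
                break                               # interfacial transfer
        total += t
    return total / n_walkers

tau_few_traps = mean_lifetime(0.05)
tau_many_traps = mean_lifetime(0.30)
# Activity ~ 1/tau: fewer traps -> faster transport -> higher activity.
activity_few, activity_many = 1.0 / tau_few_traps, 1.0 / tau_many_traps
```

Reducing the trap density in this sketch shortens the average lifetime and raises 1/τ, mirroring the trend the abstract attributes to increased crystallinity and inter-particle connection.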
Development of Library Processing System for Neutron Transport Calculation
Energy Technology Data Exchange (ETDEWEB)
Song, J. S.; Park, S. Y.; Kim, H. Y. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)] (and others)
2008-12-15
A library generation system was developed for the lattice neutron transport program for pressurized water reactor core analysis. The system extracts multi-energy-group nuclear data for requested nuclides from the continuous-energy ENDF/B data, generates the hydrogen-equivalent factor and resonance integral tables as functions of temperature and background cross section for resonance nuclides, generates subgroup data that allow the lattice program to treat resonances as exactly as possible, and generates a multigroup neutron library file, including nuclide depletion data, for use by the lattice program.
An improved filtered spherical harmonic method for transport calculations
Energy Technology Data Exchange (ETDEWEB)
Ahrens, C. [Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, CO 80401 (United States); Merton, S. [Computational Physics Group, AWE Aldermaston, Berkshire (United Kingdom)
2013-07-01
Motivated by the work of R. G. McClarren, C. D. Hauck, and R. B. Lowrie on a filtered spherical harmonic method, we present a new filter for such numerical approximations to the multi-dimensional transport equation. In several test problems, we demonstrate that the new filter produces results with significantly less Gibbs phenomena than the filter used by McClarren, Hauck and Lowrie. This reduction in Gibbs phenomena translates into propagation speeds that more closely match the correct propagation speed and solutions that have fewer regions where the scalar flux is negative. (authors)
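The mechanism can be illustrated in slab geometry, where the spherical harmonic expansion reduces to Legendre polynomials: a truncated expansion of a discontinuous angular flux exhibits Gibbs oscillations, and multiplying the coefficients by a smooth filter damps them. The exponential filter below is a generic textbook choice, not the specific filter proposed in this paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 31  # P_N expansion order

def P(l, x):
    """Evaluate the Legendre polynomial P_l at x."""
    return L.legval(x, np.eye(N + 2)[l])

# Exact Legendre coefficients of a step in angle, f(mu) = 1 for mu > 0:
# c_0 = 1/2, c_l = (P_{l-1}(0) - P_{l+1}(0)) / 2 for l >= 1.
coef = np.zeros(N + 1)
coef[0] = 0.5
for l in range(1, N + 1):
    coef[l] = (P(l - 1, 0.0) - P(l + 1, 0.0)) / 2.0

# Generic exponential filter of order p applied to the coefficients:
p, alpha = 4, 36.0
sigma = np.exp(-alpha * (np.arange(N + 1) / N) ** p)

mu = np.linspace(-1.0, 1.0, 1001)
f_raw = L.legval(mu, coef)           # unfiltered reconstruction
f_filt = L.legval(mu, coef * sigma)  # filtered reconstruction

overshoot_raw = f_raw.max() - 1.0    # Gibbs overshoot above the step
overshoot_filt = f_filt.max() - 1.0  # reduced after filtering
```

The unfiltered reconstruction overshoots the step by roughly the classical Gibbs fraction, while the filtered coefficients yield a visibly smoother profile; reducing this overshoot is what suppresses the negative-flux regions mentioned in the abstract.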
Tattersall, W J; Boyle, G J; White, R D
2015-01-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
The Suppression of Energy Discretization Errors in Multigroup Transport Calculations
Energy Technology Data Exchange (ETDEWEB)
Larsen, Edward [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Nuclear Engineering and Radiological Sciences
2013-06-17
The objective of this project is to develop, implement, and test new deterministic methods to solve, as efficiently as possible, multigroup neutron transport problems having an extremely large number of groups. Our approach was to (i) use the standard CMFD method to "coarsen" the space-angle grid, yielding a multigroup diffusion equation, and (ii) use a new multigrid-in-space-and-energy technique to efficiently solve the multigroup diffusion problem. The overall strategy of (i) how to coarsen the spatial and energy grids, and (ii) how to navigate through the various grids, has the goal of minimizing the overall computational effort. This approach yields not only the fine-grid solution but also coarse-group flux-weighted cross sections that can be used for other related problems.
Berg, Eric; Roncali, Emilie; Cherry, Simon R
2015-06-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution.
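The statistical picture above (the earliest detected photons set the timing estimate, so collecting more light tightens the timing distribution) can be sketched with a toy Python model in which emission times follow an exponential scintillation decay and transport delays are ignored; the decay constant and photon counts are illustrative assumptions, not the paper's simulated values.

```python
import random
import statistics

def first_photon_spread(n_photons, n_events=500, tau_decay=40.0, seed=7):
    """Toy model: per event, the timing estimate is the earliest of
    n_photons emission times drawn from an exponential scintillation
    decay (ns). Returns the spread of that estimate over many events."""
    rng = random.Random(seed)
    firsts = [min(rng.expovariate(1.0 / tau_decay) for _ in range(n_photons))
              for _ in range(n_events)]
    return statistics.pstdev(firsts)

spread_polished = first_photon_spread(400)   # baseline light collection
spread_roughened = first_photon_spread(500)  # ~25% more collected photons
```

The minimum of n exponential times has standard deviation τ/n, so the ~25% gain in collected photons tightens the first-photon time spread, consistent with the improved timing resolution reported for the partially roughened crystals.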
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size-Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts: the phase-space file data produced by the first part of the simulation became the input to the second part. The simulations were performed with phantom diameters varying from 5 to 35 cm and phantom lengths varying from 10 to 25 cm. The dose distribution data were used to calculate SSDE values using the trapezoidal rule (the trapz function) in a MATLAB program. The SSDE obtained from this calculation was compared with the AAPM report and with experimental data. The normalized SSDE values were between 1.00 and 3.19 across the phantom diameters and between 0.96 and 1.07 across the phantom lengths. The statistical error in this simulation was 4.98% for the varying phantom diameters and 5.20% for the varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. In future work, the influence of the cylindrical phantom material on SSDE will be studied.
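The integration step can be sketched in Python; the dose profile below is an arbitrary illustrative function, not the simulated OBI data, and the explicit sum mirrors what MATLAB's trapz computes.

```python
import numpy as np

# Hypothetical dose profile D(z) along the phantom axis (arbitrary units):
z = np.linspace(-10.0, 10.0, 201)        # cm, 20 cm phantom length
dose = np.exp(-z**2 / (2.0 * 6.0**2))    # illustrative Gaussian fall-off

# Trapezoidal-rule integral of the profile, then the average dose over
# the scan length (the quantity fed into a size-specific normalization):
integral = float(np.sum(0.5 * (dose[1:] + dose[:-1]) * np.diff(z)))
mean_dose = integral / (z[-1] - z[0])
```

Dividing the trapezoidal integral by the scan length gives the length-averaged dose for one phantom size, which can then be normalized against a reference geometry as in the abstract.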
Energy Technology Data Exchange (ETDEWEB)
Borysheva, N. [Medical Radiological Research Center, Korolyov str., 4, Obninsk 249020 (Russian Federation); Ivannikov, A. [Medical Radiological Research Center, Korolyov str., 4, Obninsk 249020 (Russian Federation)], E-mail: Ivannikov-Alexander@yandex.ru; Tikunov, D.; Orlenko, S.; Skvortsov, V.; Stepanenko, V. [Medical Radiological Research Center, Korolyov str., 4, Obninsk 249020 (Russian Federation); Hoshi, M. [Research Institute for Radiation Biology and Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8553 (Japan)
2007-07-15
By Monte Carlo simulation of ionizing particle transport in a realistic mathematical phantom of a man, supplemented by a dental region, the absorbed doses in tooth enamel and the whole-body doses are calculated for internal irradiation by {sup 137}Cs and {sup 134}Cs isotopes incorporated in the human body as a result of residence in radioactively contaminated territory. It is shown that the dose in enamel constitutes (40{+-}4)% and (59{+-}6)% of the whole-body dose resulting from the decay of the {sup 137}Cs and {sup 134}Cs isotopes, respectively. The results of the calculations may be used to convert the absorbed dose in enamel obtained by the tooth-enamel EPR spectroscopy method into whole-body dose for dosimetric investigation of the population of territories contaminated by radioactive cesium, which is specific to the Chernobyl accident.
Energy Technology Data Exchange (ETDEWEB)
Morillon, B
1998-10-01
Two methods to estimate the variations in the collision density between different configurations are presented: the 'multiple estimate method' and the 'Taylor expansion method'. First, we recall the bases of analogue simulation and present the notation used to define the collision density in the integral Boltzmann equation. In the second part we discuss the different non-analogue techniques used to obtain a small variance. The third part of this work deals with correlated sampling, which we call multiple estimate; the principle is similar to non-analogue simulation in that a biased law is introduced to simulate the propagation of particles. Finally, we calculate perturbed results by means of a Taylor expansion, following Rief's representation, and extend the method to an arbitrary order of the Taylor expansion for one and two variables. Several examples are presented to bring to light the advantages and drawbacks of the 'multiple estimate method' and the 'Taylor expansion method'. In every case, the collision densities estimated by these two methods are compared with collision densities estimated by independent simulations. (author) 10 refs.
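The variance advantage of a correlated (multiple estimate) approach over two independent simulations can be illustrated with a toy Python integral whose parameter plays the role of the perturbed configuration; the response function below is purely illustrative.

```python
import random
import statistics

def g(x, a):
    """Toy 'response' of a history x under configuration parameter a."""
    return (1.0 + a * x) ** 0.5

def independent_diff(n, a0, a1, seed):
    """Estimate the perturbation with two independent simulations."""
    r0, r1 = random.Random(2 * seed), random.Random(2 * seed + 1)
    s0 = sum(g(r0.random(), a0) for _ in range(n)) / n
    s1 = sum(g(r1.random(), a1) for _ in range(n)) / n
    return s1 - s0

def correlated_diff(n, a0, a1, seed):
    """Estimate the perturbation by scoring both configurations on the
    same histories, as in correlated sampling."""
    r = random.Random(seed)
    tot = 0.0
    for _ in range(n):
        x = r.random()              # same history for both configurations
        tot += g(x, a1) - g(x, a0)
    return tot / n

# Spread of the estimated small difference over repeated runs:
ind = [independent_diff(2000, 1.0, 1.05, s) for s in range(30)]
cor = [correlated_diff(2000, 1.0, 1.05, s) for s in range(30)]
spread_ind = statistics.pstdev(ind)
spread_cor = statistics.pstdev(cor)  # far smaller: shared noise cancels
```

Because both configurations are scored on the same histories, the statistical noise largely cancels in the difference, which is why correlated estimates resolve small perturbations that independent simulations would bury in variance.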
Mandrekas, John
2004-08-01
GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments. Program summaryTitle of program: GTNEUT Catalogue identifier: ADTX Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADTX Computer for which the program is designed and others on which it has been tested: The program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs. Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP. Programming language used: Fortran 77 Memory required to execute with typical data: 6 219 388 bytes No. of bits in a word: 32 No. of processors used: 1 Has the code been vectorized or parallelized?: No No. of bytes in distributed program, including test data, etc.: 300 709 No. of lines in distributed program, including test data, etc.: 17 365 Distribution format: compressed tar gzip file Keywords: Neutral transport in plasmas, Escape probability methods Nature of physical problem: This code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations. Method of solution: The code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included). Restrictions on the complexity of the problem: The current version of the code can
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade.