Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented message passing in the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, so it has not been possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library in MCNP. Because the Message Passing Interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected.
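The history-parallel structure described above can be sketched in a few lines. The snippet below is a toy model (a purely absorbing, forward-only 1-D slab with invented cross sections), not MCNP's actual algorithm; `parallel_estimate` stands in for an MPI scatter/reduce by giving each simulated rank its own batch of histories and a private seed:

```python
import math
import random

def run_histories(n, seed, sigma_t=1.0, sigma_a=0.4, slab=5.0):
    """One worker's batch: n independent 1-D histories in a slab.
    Each collision absorbs with probability sigma_a / sigma_t;
    otherwise the history continues forward (a deliberately crude
    scattering model)."""
    rng = random.Random(seed)            # private stream per worker
    absorbed = transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / sigma_t  # free flight
            if x >= slab:
                transmitted += 1
                break
            if rng.random() < sigma_a / sigma_t:
                absorbed += 1
                break
    return absorbed, transmitted

def parallel_estimate(total, ranks):
    """Stand-in for an MPI scatter/reduce: each rank simulates
    total // ranks histories with a distinct seed; the master sums
    the local tallies and normalizes."""
    per = total // ranks
    tallies = [run_histories(per, seed=1000 + r) for r in range(ranks)]
    n = per * ranks
    return (sum(t[0] for t in tallies) / n,
            sum(t[1] for t in tallies) / n)
```

Because histories are independent, the merged tallies are the same as a single run using the same per-rank streams; this independence is why speedups can approach the number of processors.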
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Monte Carlo perturbation theory in neutron transport calculations
International Nuclear Information System (INIS)
The need to obtain sensitivities in complicated geometrical configurations has resulted in the development of Monte Carlo sensitivity estimation. A new method has been developed to calculate energy-dependent sensitivities of any number of responses in a single Monte Carlo calculation with a very small time penalty. This estimation typically increases the tracking time per source particle by about 30%. The method of estimation is explained. Sensitivities obtained are compared with those calculated by discrete ordinates methods. Further theoretical developments, such as second-order perturbation theory and application to k-eff calculations, are discussed. The application of the method to uncertainty analysis and to the analysis of benchmark experiments is illustrated. 5 figures
Adjoint electron-photon transport Monte Carlo calculations with ITS
International Nuclear Information System (INIS)
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated
International Nuclear Information System (INIS)
The general purpose code BALTORO was written for coupling three-dimensional Monte Carlo (MC) with one-dimensional discrete ordinates (DO) radiation transport calculations. The quantity of a radiation-induced (neutron or gamma-ray) nuclear effect, or the score from a radiation-yielding nuclear effect, can be analysed in this way. (author)
Ge(Li) intrinsic efficiency calculation using Monte Carlo simulation for γ radiation transport
International Nuclear Information System (INIS)
To solve a radiation transport problem by the Monte Carlo simulation method, the evolution of a large number of radiations must be simulated and their histories analysed. The evolution of a radiation starts with its emission, followed by unperturbed propagation in the medium between successive interactions, and modification of the radiation parameters at the points where interactions occur. The goal of this paper is to calculate the total detection efficiency and the intrinsic efficiency of a coaxial Ge(Li) detector, using the Monte Carlo method to simulate the γ radiation transport. A Ge(Li) detector with a 106 cm3 active volume and γ photons with energies in the 50 keV - 2 MeV range, emitted by a point source situated on the detector axis, were considered. Each γ photon's evolution is simulated by an analogue process, step by step, until the photon escapes from the detector or is completely absorbed in the active volume of the detector. (author)
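The core of such an analogue estimate can be sketched as follows. This toy version scores only whether a photon incident along the axis interacts at all before crossing the crystal; the full calculation would also track Compton-scattered photons and distinguish partial from complete energy deposition. The attenuation coefficient and thickness are illustrative values, not Ge(Li) data:

```python
import math
import random

def intrinsic_efficiency(mu, thickness, n=100_000, seed=1):
    """Analogue MC estimate of the probability that a photon incident
    along the axis interacts anywhere inside a crystal of the given
    thickness, for linear attenuation coefficient mu (same units)."""
    rng = random.Random(seed)
    interacted = 0
    for _ in range(n):
        # sample the free path to the first interaction
        free_path = -math.log(1.0 - rng.random()) / mu
        if free_path < thickness:
            interacted += 1
    return interacted / n

if __name__ == "__main__":
    mu, t = 0.35, 4.0   # illustrative values only
    est = intrinsic_efficiency(mu, t)
    exact = 1.0 - math.exp(-mu * t)   # analytic check for this toy case
    print(f"MC {est:.4f} vs analytic {exact:.4f}")
```

For this reduced problem the answer is known analytically (1 - exp(-mu*t)), which makes it a convenient verification case before the step-by-step interaction physics is added.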
International Nuclear Information System (INIS)
This work concerns the calculation of a neutron response caused by a neutron field perturbed by materials surrounding the source or the detector. The solution is obtained by coupling a Monte Carlo radiation transport computation for the perturbed region with a discrete ordinates transport computation for the unperturbed system. (author). 62 refs
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
International Nuclear Information System (INIS)
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which whether the photon stops is decided according to the material encountered along the photon's travel line, with no forced stops at voxel boundaries. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
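The difference between the two tracking schemes can be illustrated with a minimal 1-D sketch of Woodcock (delta) tracking. The per-voxel cross sections below are invented for illustration; a real code tracks in 3-D and samples the interaction type at each accepted collision:

```python
import math
import random

SIGMA = [0.2, 0.5, 0.05, 0.3]   # invented per-voxel total cross sections
VOXEL = 1.0                     # voxel width, in units of 1/sigma

def woodcock_first_collision(rng):
    """Sample a photon's first real collision site along a line of voxels
    using Woodcock (delta) tracking: flight lengths are drawn with the
    majorant cross section, and each tentative site is accepted as a real
    collision with probability sigma(local) / sigma_max; rejected sites
    are 'virtual' collisions and the flight simply continues."""
    sigma_max = max(SIGMA)
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_max
        if x >= len(SIGMA) * VOXEL:
            return None                       # escaped the phantom
        voxel = int(x // VOXEL)
        if rng.random() < SIGMA[voxel] / sigma_max:
            return x                          # real collision accepted
```

The boundary-stepping alternative would instead shorten each flight to the current voxel frontier and resample there; Woodcock tracking skips those stops at the cost of extra virtual-collision samples wherever the local cross section is far below the majorant.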
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)
International Nuclear Information System (INIS)
Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in an inhomogeneous media for complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes in a desktop PC. The interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing. This is due to the massive parallelism provided by the latest GPU technologies which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using the state-of-the-art GPU technologies. (author)
Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport
Jia, Xun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B
2009-01-01
Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the Dose Planning Method (DPM) Monte Carlo dose calculation package (Sempau et al, Phys. Med. Biol., 45 (2000) 2263-2291) on GPU architecture under the CUDA platform. The implementation has been tested against the original sequential DPM code on CPU in two cases. Our results demonstrate adequate accuracy of the GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of 4.5 and 5.5 have been observed for the electron and photon test cases, respectively, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor.
Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
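The scheduling idea can be sketched as follows. This is a simplified sketch with invented helper names, not the authors' implementation; a production code would also re-queue histories into lower bands as they lose energy between sweeps:

```python
def band_of(energy, n_bands, e_max):
    # Band index for a given energy; band width = e_max / n_bands.
    return min(int(energy / (e_max / n_bands)), n_bands - 1)

def banded_schedule(energies, n_bands, e_max):
    """Bucket histories by current energy band, then sweep the bands in
    order: while one band is processed, only that band's slice of the
    cross-section table is accessed, so the slice can stay
    cache-resident for the whole sweep."""
    buckets = [[] for _ in range(n_bands)]
    for i, e in enumerate(energies):
        buckets[band_of(e, n_bands, e_max)].append(i)
    # flatten: all of band 0's histories, then band 1's, and so on
    return [i for bucket in buckets for i in bucket]
```

The band width is the tuning knob: narrower bands shrink the hot slice of the table to fit a given cache level, at the cost of more sweeps.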
Energy Technology Data Exchange (ETDEWEB)
Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, we have pointed out the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method is recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical applications. Recently, many efforts have been made to realize fast MC dose calculation on GPUs. Nonetheless, most of the GPU-based MC dose engines were developed in the NVidia CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine oclMC using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Energy Technology Data Exchange (ETDEWEB)
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second
Monte Carlo method application to shielding calculations
International Nuclear Information System (INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so two concerns must be addressed: tracking the fuel reactivity in order to prevent critical mass formation, and protecting personnel during spent fuel manipulation. The basic tasks accomplished by shielding calculations in a nuclear safety analysis consist of dose rate calculations to prevent any risks, both for personnel protection and for impact on the environment, during spent fuel manipulation, transport and storage. To perform photon dose rate calculations, the Monte Carlo MORSE-SGC code incorporated in the SAS4 sequence of the SCALE system was used. The objective of the paper was to obtain the photon dose rates at the spent fuel transport cask wall, in both radial and axial directions. As the source of radiation, one spent CANDU fuel bundle was used. All the geometrical and material data related to the transport cask were considered according to the shipping cask type B model, whose prototype has been realized and tested in the Institute for Nuclear Research Pitesti. (authors)
Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system including radiation streaming. The Monte Carlo coupling technique was developed to treat such a shielding problem accurately. However, the variance of Monte Carlo results obtained with the coupling technique was still not small enough for detectors located outside the radiation streaming. To produce more accurate results for such detectors, and for a multi-legged-duct streaming problem, a practicable 'Prism Scattering technique' is proposed in this study. (author)
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, the necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented; two of the several reasons for them are the advent of vector computation and photon and neutron transport in void media
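Biasing the play and weighting the score can be illustrated on a toy deep-penetration problem (all parameters below are invented for illustration). The analog game almost never scores, because transmission through a thick slab is rare; the biased game stretches the sampled flight lengths and multiplies each score by a likelihood-ratio weight so the estimator remains unbiased:

```python
import math
import random

def transmission_analog(T, n, rng):
    """Analog play: score 1 when a sampled flight exceeds the slab
    thickness T (in mean free paths); most histories score 0."""
    return sum(1.0 for _ in range(n)
               if -math.log(1.0 - rng.random()) > T) / n

def transmission_biased(T, n, rng, lam=0.2):
    """Biased play: draw flights from a stretched exponential (rate
    lam < 1) so deep penetration is common, and weight each score by
    the likelihood ratio  e^(-x) / (lam * e^(-lam*x))  so the
    expectation is unchanged."""
    total = 0.0
    for _ in range(n):
        x = -math.log(1.0 - rng.random()) / lam   # stretched flight
        if x > T:
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n
```

For T = 5 the true answer is exp(-5), about 0.0067; the biased estimator resolves it with far fewer histories than the analog one, which only scores about seven times in a thousand.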
International Nuclear Information System (INIS)
Highlights:
• A new Monte Carlo photon transport code ARCHER-CT for CT dose calculations is developed to execute on the GPU and coprocessor.
• ARCHER-CT is verified against MCNP.
• The GPU code on an Nvidia M2090 GPU is 5.15–5.81 times faster than the parallel CPU code on an Intel X5650 6-core CPU.
• The coprocessor code on an Intel Xeon Phi 5110p coprocessor is 3.30–3.38 times faster than the CPU code.
Abstract: Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, Nvidia Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three components, ARCHER-CTCPU, ARCHER-CTGPU and ARCHER-CTCOP, designed to be run on the multi-core CPU, GPU and coprocessor architectures, respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms are included in the code to calculate absorbed dose to radiosensitive organs under user-specified scan protocols. The results from ARCHER agree well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It is found that all the code components are significantly faster than the parallel MCNPX run on 12 MPI processes, and that the GPU and coprocessor codes are 5.15–5.81 and 3.30–3.38 times faster than the parallel ARCHER-CTCPU, respectively. The M2090 GPU performs better than the 5110p coprocessor in our specific test. Besides, the heterogeneous computation mode in which the CPU and the hardware accelerator work concurrently can increase the overall performance by 13–18%
Challenges of Monte Carlo Transport
Energy Technology Data Exchange (ETDEWEB)
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Computational Physics and Methods (CCS-2)
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
International Nuclear Information System (INIS)
Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT(CPU), ARCHER-CT(GPU) and ARCHER-CT(COP) to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT(CPU) running with 12 hyper-threads. (authors)
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
Energy Technology Data Exchange (ETDEWEB)
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author)
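The correlation technique for estimating a small perturbation worth without differencing two large, noisy quantities can be sketched as follows (a minimal one-dimensional illustration, not the report's multigroup method): the perturbed and unperturbed systems are scored with the same random numbers, so the statistical noise cancels history by history and only the true difference survives.

```python
import math
import random

def perturbation_worth(n, t0=1.0, dt=0.05, seed=42):
    """Correlated sampling in a 1-D purely absorbing slab: score the
    perturbed (thickness t0 + dt) and unperturbed (t0) transmission
    with the SAME sampled flight distance for each history, so the
    common noise cancels in the difference."""
    rng = random.Random(seed)
    diff = 0
    for _ in range(n):
        x = rng.expovariate(1.0)       # common random flight distance
        diff += (1 if x > t0 + dt else 0) - (1 if x > t0 else 0)
    return diff / n

est = perturbation_worth(200_000)
true_diff = math.exp(-1.05) - math.exp(-1.0)   # about -0.018
```

Differencing two independent runs would carry the variance of both transmission tallies; with common random numbers each history scores a nonzero difference only when the flight distance falls between the two thicknesses, which is why small worths can be resolved in modest computing time.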
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $\vep$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of a low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is about half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially for computing full-core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions for making the RMMC and RMMC+MC methods more powerful are discussed at the end of this paper. (authors)
Linear Scaling Quantum Monte Carlo Calculations
Williamson, Andrew
2002-03-01
New developments to the quantum Monte Carlo approach are presented that improve the scaling of the time required to calculate the total energy of a configuration of electronic coordinates from N^3 to nearly linear[1]. The first factor of N is achieved by applying a unitary transform to the set of single particle orbitals used to construct the Slater determinant, creating a set of maximally localized Wannier orbitals. These localized functions are then truncated beyond a given cutoff radius to introduce sparsity into the Slater determinant. The second factor of N is achieved by evaluating the maximally localized Wannier orbitals on a cubic spline grid, which removes the size dependence of the basis set (e.g. plane waves, Gaussians) typically used to expand the orbitals. Application of this method to the calculation of the binding energy of carbon fullerenes and silicon nanostructures will be presented. An extension of the approach to deal with excited states of systems will also be presented in the context of the calculation of the excitonic gap of a variety of systems. This work was performed under the auspices of the U.S. Dept. of Energy at the University of California/LLNL under contract no. W-7405-Eng-48. [1] A.J. Williamson, R.Q. Hood and J.C. Grossman, Phys. Rev. Lett. 87 246406 (2001)
Madsen, J. R.; Akabani, G.
2014-05-01
The present state of modeling radio-induced effects at the cellular level does not account for the microscopic inhomogeneity of the nucleus arising from its non-aqueous contents (i.e. proteins, DNA), approximating the entire cellular nucleus as a homogeneous medium of water. Charged-particle track-structure calculations using this approximation therefore neglect approximately 30% of the molecular variation within the nucleus. To truly understand what happens when biological matter is irradiated, charged-particle track-structure calculations need detailed knowledge of the secondary electron cascade, resulting from interactions not only with the primary biological component (water) but also with the non-aqueous contents, down to very low energies. This paper presents our work on a generic approach for calculating low-energy interaction cross sections between incident charged particles and individual molecules. The purpose of our work is to develop a self-consistent computational method for predicting molecule-specific interaction cross sections, such as for the component molecules of DNA and proteins (i.e. nucleotides and amino acids), in the very low-energy regime. These results would then be applied in a track-structure code, thereby reducing the reliance on the homogeneous water approximation. The present methodology, inspired by seeking a combination of the accuracy of quantum mechanics and the scalability, robustness, and flexibility of Monte Carlo methods, begins with the calculation of a solution to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field to determine the interaction processes, such as ionization and excitation. As a test of our model, the approach is applied to a water molecule in the same manner as it would be applied to a nucleotide or amino acid, and compared with the low-energy cross sections from the GEANT4-DNA physics package of the Geant4 simulation toolkit.
A Monte Carlo dose calculation tool for radiotherapy treatment planning
Ma, C.-M.; Li, J. S.; Pawlicki, T.; Jiang, S. B.; Deng, J.; Lee, M. C.; Koumrian, T.; Luxton, M.; Brain, S.
2002-05-01
A Monte Carlo user code, MCDOSE, has been developed for radiotherapy treatment planning (RTP) dose calculations. MCDOSE is designed as a dose calculation module suitable for adaptation to host RTP systems. MCDOSE can be used for both conventional photon/electron beam calculation and intensity modulated radiotherapy (IMRT) treatment planning. MCDOSE uses a multiple-source model to reconstruct the treatment beam phase space. Based on Monte Carlo simulated or measured beam data acquired during commissioning, source-model parameters are adjusted through an automated procedure. Beam modifiers such as jaws, physical and dynamic wedges, compensators, blocks, electron cut-outs and bolus are simulated by MCDOSE together with a 3D rectilinear patient geometry model built from CT data. Dose distributions calculated using MCDOSE agreed well with those calculated by the EGS4/DOSXYZ code using different beam set-ups and beam modifiers. Heterogeneity correction factors for layered-lung or layered-bone phantoms as calculated by both codes were consistent with measured data to within 1%. The effect of energy cut-offs for particle transport was investigated. Variance reduction techniques were implemented in MCDOSE to achieve a speedup factor of 10-30 compared to DOSXYZ.
Radiation Transport Calculations and Simulations
Energy Technology Data Exchange (ETDEWEB)
Fasso, Alberto; /SLAC; Ferrari, A.; /CERN
2011-06-30
This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
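The core idea of Monte Carlo integration with a statistical error estimate, as introduced above, can be sketched in a few lines (a generic illustration, not tied to any particular transport code): the sample mean estimates the integral and the sample variance yields the one-sigma error bar, which shrinks as 1/sqrt(n).

```python
import math
import random

def mc_integrate(f, n, seed=0):
    """Plain Monte Carlo integration of f over [0, 1]: the sample
    mean estimates the integral, and the sample variance gives the
    familiar 1/sqrt(n) statistical error estimate."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        y = f(rng.random())
        total += y
        total_sq += y * y
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, math.sqrt(var / n)    # estimate and one-sigma error

# Example: the integral of 4/(1+x^2) on [0, 1] is pi.
est, err = mc_integrate(lambda x: 4.0 / (1.0 + x * x), 100_000)
```

Biasing techniques such as those described in the article reduce the variance term in this error bar without changing the expected value of the estimator.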
Energy Technology Data Exchange (ETDEWEB)
Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B. [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), Service d' Etudes de Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France)
2003-07-01
Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. This code is used in four application fields: protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources, principle. (N.C.)
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Energy Technology Data Exchange (ETDEWEB)
Wang, Ping, E-mail: pingwang@xidian.edu.cn [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Hu, Linlin; Shan, Xuefei [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); Yang, Yintang [Key Laboratory of the Ministry of Education for Wide Band-Gap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi’an 710071 (China); Song, Jiuxu; Guo, Lixin [School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Zhang, Zhiyong [School of Information Science and Technology, Northwest University, Xi’an 710127 (China)
2015-01-15
Transient characteristics of wurtzite Zn{sub 1−x}Mg{sub x}O are investigated using a three-valley Ensemble Monte Carlo model verified by the agreement between the simulated low-field mobility and the reported experimental result. The electronic structures are obtained by first principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn{sub 1−x}Mg{sub x}O (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10{sup 7}, 2.133 × 10{sup 7}, 1.889 × 10{sup 7}, 1.295 × 10{sup 7} cm/s, respectively. With the increase of Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, a phenomenon of velocity undershoot is observed in Zn{sub 0.889}Mg{sub 0.111}O and Zn{sub 0.833}Mg{sub 0.167}O respectively, while it is not observed for Zn{sub 0.806}Mg{sub 0.194}O and Zn{sub 0.75}Mg{sub 0.25}O even at 3000 kV/cm, which is especially important for high-frequency devices.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
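A minimal sketch of the fission matrix idea (illustrative numbers only, not the thesis's MCNP implementation): the matrix F, tallied during the Monte Carlo cycles, gives the expected next-generation fission births in region i per source neutron in region j, and its dominant eigenpair yields k-effective and the converged source shape in one cheap deterministic solve instead of many slowly converging cycles.

```python
# A small, pre-tallied two-region fission matrix (hypothetical values):
# F[i][j] is the expected number of next-generation fission neutrons
# born in region i per fission source neutron started in region j.
F = [[0.9, 0.4],
     [0.2, 0.8]]

def dominant_eigenpair(F, iters=200):
    """Deterministic power iteration on the fission matrix: the
    dominant eigenvalue plays the role of k-effective and the
    eigenvector is the converged fission source shape."""
    s = [1.0, 1.0]                     # flat initial source guess
    k = 1.0
    for _ in range(iters):
        new = [F[0][0] * s[0] + F[0][1] * s[1],
               F[1][0] * s[0] + F[1][1] * s[1]]
        k = sum(new) / sum(s)          # generation-to-generation ratio
        s = [x / sum(new) for x in new]
    return k, s

k_eff, source = dominant_eigenpair(F)
```

In an accelerated calculation the next Monte Carlo cycle would then start its histories from this deterministic source shape, which is how statistical noise is filtered while keeping the acceleration unbiased in expectation.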
The macro response Monte Carlo method for electron transport
Svatos, M M
1999-01-01
This thesis demonstrates the feasibility of basing dose calculations for electrons in radiotherapy on first-principles single scatter physics, in a calculation time that is comparable to or better than current electron Monte Carlo methods. The macro response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. The problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position, and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 to 0.1 cm in radius). The second transport stage is a global calculation, in which steps that conform to the size of the kugels in the...
International Nuclear Information System (INIS)
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to tritons (3H+) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Coefficients were calculated using Monte Carlo transport code MCNPX 2.7.C and BodyBuilderTM 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and calculation of gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 3%. The greatest difference, 43%, occurred at 30 MeV. Published by Oxford Univ. Press on behalf of the US Government 2010. (authors)
Dorval, Eric
2016-01-01
Neutron transport calculations by Monte Carlo methods are finding increased application in nuclear reactor simulations. In particular, a versatile approach entails the use of a 2-step procedure, with Monte Carlo as a few-group cross section data generator at lattice level, followed by deterministic multi-group diffusion calculations at core level. In this thesis, the Serpent 2 Monte Carlo reactor physics burnup calculation code is used in order to test a set of diffusion coefficient model...
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.
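The notion of an effective variance for correlated samples can be illustrated with the standard batch-means (blocking) estimator (a generic sketch using an AR(1) surrogate chain, not the authors' method or their Kolmogorov-Smirnov test): the naive i.i.d. error bar underestimates the true uncertainty of a time-discretized process, while averages over blocks longer than the correlation time recover an honest one.

```python
import math
import random

def batch_means_error(data, batch):
    """Batch-means (blocking) estimate of the one-sigma error of the
    mean for serially correlated samples: average over blocks long
    compared with the correlation time, then treat the block means
    as (nearly) independent."""
    means = [sum(data[i:i + batch]) / batch
             for i in range(0, len(data) - batch + 1, batch)]
    m = sum(means) / len(means)
    var = sum((x - m) ** 2 for x in means) / (len(means) - 1)
    return math.sqrt(var / len(means))

# An AR(1) chain stands in for the serial correlation of a sampler
# based on a time-discretized diffusion (correlation time ~10 steps).
rng = random.Random(7)
x, data = 0.0, []
for _ in range(100_000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    data.append(x)

mean = sum(data) / len(data)
naive = math.sqrt(sum((v - mean) ** 2 for v in data)
                  / (len(data) - 1) / len(data))   # pretends i.i.d.
blocked = batch_means_error(data, batch=100)       # honest error bar
```

For this chain the integrated correlation factor is (1+0.9)/(1-0.9) = 19, so the honest error bar is roughly sqrt(19), about 4.4 times, larger than the naive one.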
International Nuclear Information System (INIS)
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to deuterons (2H+) in the energy range 10 MeV-1 TeV (0.01-1000 GeV). Coefficients were calculated using the Monte Carlo transport code MCNPX 2.7.C and BodyBuilderTM 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of the effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Coefficients for the equivalent and effective dose incorporated a radiation weighting factor of 2. At 15 of 19 energies for which coefficients for the effective dose were calculated, coefficients based on ICRP 1990 and 2007 recommendations differed by < 3 %. The greatest difference, 47 %, occurred at 30 MeV. (authors)
International Nuclear Information System (INIS)
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent, for isotropic exposure of an adult male and an adult female to helions (3He2+) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Calculations were performed using Monte Carlo transport code MCNPX 2.7.C and BodyBuilderTM 1.3 anthropomorphic phantoms modified to allow calculation of effective dose using tissues and tissue weighting factors from either the 1990 or 2007 recommendations of the International Commission on Radiological Protection (ICRP), and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 2%. The greatest difference, 62%, occurred at 100 MeV. Published by Oxford Univ. Press on behalf of the U.S. Government 2010. (authors)
TRIPOLI-3: a neutron/photon Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees
2001-07-01
The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma ray problems in three dimensional geometries by using the Monte Carlo method. This code is devoted both to shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)
MONTE CARLO CALCULATION OF ENERGY DEPOSITION BY DELTA RAYS AROUND ION TRACKS
Institute of Scientific and Technical Information of China (English)
Zhang Chunxiang; Liu Xiaowei; et al.
1994-01-01
The radial distribution of dose around the path of a heavy ion has been studied by a Monte Carlo transport analysis of the delta rays produced along the track, based on classical binary collision dynamics and a single-scattering model for the electron transport process. Results from this work are compared with the semi-empirical delta-ray theory of track structure, as well as with other Monte Carlo calculations, for 1 and 3 MeV protons and several heavy ions. The results of the Monte Carlo simulations for energetic heavy ions are in agreement with experimental data and with the results of different methods. The characteristic of this Monte Carlo calculation is that it simulates the delta-ray theory of track structure.
Applications of the Monte Carlo radiation transport toolkit at LLNL
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition, it is possible to separate out and computationally investigate effects that cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
International Nuclear Information System (INIS)
Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application the theory is applied to a simplified transport model resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing an excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, also the efficiency of the Russian roulette can be studied. It opens the way for further investigations, including optimization of Russian roulette parameters. (authors)
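For readers unfamiliar with the game itself, a minimal sketch of weight-based Russian roulette follows (hypothetical threshold and survival weights, not the parameters analyzed above): a low-weight particle either survives with its weight restored or is killed, and because the expected weight is preserved the game is unbiased.

```python
import random

def russian_roulette(weight, rng, w_threshold=0.1, w_survive=0.5):
    """Play Russian roulette on a low-weight particle: below the
    threshold it survives with probability weight / w_survive (and
    is bumped up to w_survive), otherwise it is killed.  Expected
    weight is preserved, so the game is unbiased; it trades a little
    variance for not tracking many low-value histories."""
    if weight >= w_threshold:
        return weight                  # heavy enough: no game played
    if rng.random() < weight / w_survive:
        return w_survive               # survivor, weight restored
    return 0.0                         # killed

rng = random.Random(1)
n = 200_000
start_weight = 0.05
total = sum(russian_roulette(start_weight, rng) for _ in range(n))
mean_after = total / n                 # unbiased: stays near 0.05
```

The efficiency question studied in the paper is precisely the trade embodied here: killed histories cost no further collisions (CPU time), at the price of the extra variance introduced by the survival lottery.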
Quantum Monte Carlo diagonalization method as a variational calculation
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Optimization of next-event estimation probability in Monte Carlo shielding calculations
International Nuclear Information System (INIS)
In Monte Carlo radiation transport calculations with point detectors, the next-event estimation is employed to estimate the response to each detector from all collision sites. The computation time required for this estimation process is substantial and often exceeds the time required to generate and process particle histories in a calculation. This estimation from all collision sites is, therefore, very wasteful in Monte Carlo shielding calculations. For example, in the source region and in regions far away from the detectors, the next-event contribution of a particle is often very small and insignificant. A method for reducing this inefficiency is described
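The next-event score itself can be sketched as follows (a minimal illustration for isotropic scattering in a homogeneous medium; production estimators also fold in the angular scattering law and material boundaries):

```python
import math

def next_event_contribution(weight, collision, detector, sigma_t):
    """Point-detector (next-event) estimator for isotropic scattering
    in a homogeneous medium with total cross section sigma_t: score
    the expected uncollided flux at the detector from this collision,
    weight * exp(-sigma_t * R) / (4 * pi * R**2)."""
    dx = [d - c for c, d in zip(collision, detector)]
    r = math.sqrt(sum(v * v for v in dx))
    return weight * math.exp(-sigma_t * r) / (4.0 * math.pi * r * r)

# A collision two mean free paths from the detector already scores
# very little; skipping such estimates, probabilistically or by
# region, is the kind of optimization described above.
score = next_event_contribution(1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 2.0), 1.0)
```

Because the exponential attenuation makes distant contributions negligible, performing this (relatively expensive) calculation at every collision site wastes time that could go into more histories.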
Bias in Dynamic Monte Carlo Alpha Calculations
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
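The mechanism blamed above, taking the logarithm of a stochastic quantity, can be demonstrated with a toy surrogate (a hypothetical Gaussian noise model, not MCATK itself): by Jensen's inequality E[ln X] < ln E[X], so a rate estimated as the log of a noisy population ratio is biased low, with the bias scaling like the variance of the ratio, roughly 1/N for N particles.

```python
import math
import random

def mean_log(noise, trials=200_000, seed=3):
    """Average of ln(X) for X ~ Normal(1, noise), a stand-in for the
    relative statistical noise of a tracked particle population.
    The exact mean of ln(X) is below ln(E[X]) = 0 by about noise**2/2."""
    rng = random.Random(seed)
    return sum(math.log(rng.gauss(1.0, noise))
               for _ in range(trials)) / trials

bias_noisy = mean_log(0.15)   # few particles: large noise, clear bias
bias_quiet = mean_log(0.03)   # many particles: bias nearly vanishes
```

Shrinking the noise (i.e. increasing the particle population N) drives the bias toward zero, consistent with the 1/N behaviour reported for the alpha estimate.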
The macro response Monte Carlo method for electron transport
Energy Technology Data Exchange (ETDEWEB)
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
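Two of the variance-reduction ingredients named in this abstract, exponential free-path sampling and the Russian roulette game, are simple enough to sketch in a few lines. This is an illustrative Python sketch, not the authors' MCSIM implementation; the threshold and survival weight are arbitrary example values.

```python
import math
import random

def sample_free_path(mu_total, rng=random.random):
    """Sample a photon free path s from p(s) = mu * exp(-mu * s),
    where mu_total is the total attenuation coefficient (1/cm)."""
    return -math.log(rng()) / mu_total

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random.random):
    """Below the weight threshold, kill the photon with probability
    1 - weight/survival_weight; survivors continue at survival_weight.
    Returns the new weight (0.0 means the history is terminated)."""
    if weight >= threshold:
        return weight
    if rng() < weight / survival_weight:
        return survival_weight
    return 0.0
```

The roulette game is unbiased: a photon of weight w survives with probability w / survival_weight and is restored to survival_weight, so its expected weight is unchanged while low-weight histories are terminated cheaply.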
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
Energy Technology Data Exchange (ETDEWEB)
Ma, C-M; Li, J S; Deng, J; Fan, J [Radiation Oncology Department, Fox Chase Cancer Center, Philadelphia, PA (United States)], E-mail: Charlie.ma@fccc.edu
2008-02-01
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th
MOx benchmark calculations by deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Highlights: ► MOx-based depletion calculation. ► Methodology to create continuous-energy pseudo cross sections for a lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviations were found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS following the study of depletion calculations concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work the deterministic code NEWT/ORIGEN-S of the SCALE6 code package and the Monte Carlo based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology for applying MONTEBURNS2.0 to this benchmark is also presented. The results from both codes were then compared.
Strategies for improving the efficiency of quantum Monte Carlo calculations
Lee, R M; Nemec, N; Lopez Rios, P; Drummond, N D
2010-01-01
We describe a number of strategies for optimizing the efficiency of quantum Monte Carlo (QMC) calculations. We investigate the dependence of the efficiency of the variational Monte Carlo method on the sampling algorithm. Within a unified framework, we compare several commonly used variants of diffusion Monte Carlo (DMC). We then investigate the behavior of DMC calculations on parallel computers and the details of parallel implementations, before proposing a technique to optimize the efficiency of the extrapolation of DMC results to zero time step, finding that a relative time step ratio of 1:4 is optimal. Finally, we discuss the removal of serial correlation from data sets by reblocking, setting out criteria for the choice of block length and quantifying the effects of the uncertainty in the estimated correlation length and the presence of divergences in the local energy on estimated error bars on QMC energies.
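The reblocking procedure mentioned at the end of this abstract is simple enough to state in full: repeatedly average adjacent pairs of data points and recompute the naive error of the mean; once the block length exceeds the correlation length, the estimate plateaus at the true error. The following is an illustrative sketch, not the authors' code; the AR(1) test data are an assumption chosen to exhibit serial correlation.

```python
import numpy as np

def reblock_error(data):
    """Estimate the standard error of the mean of serially correlated data
    by successively averaging pairs of adjacent values ('reblocking').
    Returns a list of (block_length, error_estimate) pairs; the estimate
    plateaus once blocks exceed the correlation length."""
    x = np.asarray(data, dtype=float)
    out = []
    block = 1
    while len(x) >= 2:
        out.append((block, x.std(ddof=1) / np.sqrt(len(x))))
        if len(x) % 2:          # drop one element so pairs divide evenly
            x = x[:-1]
        x = 0.5 * (x[0::2] + x[1::2])
        block *= 2
    return out

# Example: AR(1) data with correlation coefficient rho.
rng = np.random.default_rng(0)
n, rho = 2**14, 0.9
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0]
for i in range(1, n):
    y[i] = rho * y[i - 1] + e[i]
est = reblock_error(y)
```

For AR(1) data the naive (block = 1) estimate understates the true error by roughly sqrt((1 + rho)/(1 - rho)), a factor of about 4 here, which the reblocked estimates recover.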
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
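For reference, the functional form retained by such methods is the standard transport-corrected diffusion coefficient. Below is a worked one-group example; the numbers are illustrative assumptions, not values from the paper.

```python
def diffusion_coefficient(sigma_t, sigma_s, mu_bar):
    """Transport-corrected diffusion coefficient (cm):
        D = 1 / (3 * Sigma_tr),  Sigma_tr = Sigma_t - mu_bar * Sigma_s,
    where mu_bar is the mean scattering cosine."""
    sigma_tr = sigma_t - mu_bar * sigma_s
    return 1.0 / (3.0 * sigma_tr)

# Example group constants: mu_bar = 2/(3A) for elastic scattering off a
# nucleus of mass number A (here A = 12, a graphite-like moderator).
A = 12
mu_bar = 2.0 / (3.0 * A)
D = diffusion_coefficient(sigma_t=0.65, sigma_s=0.64, mu_bar=mu_bar)
```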
Monte Carlo shipping cask calculations using an automated biasing procedure
International Nuclear Information System (INIS)
This paper describes an automated biasing procedure for Monte Carlo shipping cask calculations within the SCALE system - a modular code system for Standardized Computer Analysis for Licensing Evaluation. The SCALE system was conceived and funded by the US Nuclear Regulatory Commission to satisfy a strong need for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems
Monte Carlo dose calculation in dental amalgam phantom
Mohd Zahri Abdul Aziz; Yusoff, A. L.; N D Osman; R. Abdullah; Rabaie, N. A.; M S Salikin
2015-01-01
It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in the treatment planning system need to be trustworthy as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in treatm...
Vibrato Monte Carlo and the calculation of greeks
Keegan, Sinead
2008-01-01
In computational finance Monte Carlo simulation can be used to calculate the correct prices of financial options, and to compute the values of the associated Greeks (the derivatives of the option price with respect to certain input parameters). The main methods used for the calculation of Greeks are finite difference, likelihood ratio, and pathwise sensitivity. Each of these has its limitations and in particular the pathwise sensitivity approach may not be used for an option...
Quantum Monte Carlo calculations of two neutrons in finite volume
Klos, P.; Lynn, J. E.; Tews, I.; Gandolfi, S.; Gezerlis, A.; Hammer, H. -W.; Hoferichter, M.; Schwenk, A.
2016-01-01
Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effectiv...
A Monte Carlo simulation of ion transport at finite temperatures
International Nuclear Information System (INIS)
We have developed a Monte Carlo simulation for ion transport in hot background gases, which is an alternative way of solving the corresponding Boltzmann equation that determines the distribution function of ions. We consider the limit of low ion densities when the distribution function of the background gas remains unchanged due to collision with ions. Special attention has been paid to properly treating the thermal motion of the host gas particles and their influence on ions, which is very important at low electric fields, when the mean ion energy is comparable to the thermal energy of the host gas. We found the conditional probability distribution of gas velocities that correspond to an ion of specific velocity which collides with a gas particle. Also, we have derived exact analytical formulae for piecewise calculation of the collision frequency integrals. We address the cases when the background gas is monocomponent and when it is a mixture of different gases. The techniques described here are required for Monte Carlo simulations of ion transport and for hybrid models of non-equilibrium plasmas. The range of energies where it is necessary to apply the technique has been defined. The results we obtained are in excellent agreement with the existing ones obtained by complementary methods. Having verified our algorithm, we were able to produce calculations for Ar+ ions in Ar and propose them as a new benchmark for thermal effects. The developed method is widely applicable for solving the Boltzmann equation that appears in many different contexts in physics. (paper)
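The conditional distribution described above, of the gas-particle velocity given that a collision occurs, is proportional to the relative speed times the Maxwellian, and it can be sampled by rejection. The sketch below assumes a velocity-independent cross section, and the bound `v_rel_max` is a hypothetical cutoff choice, not the paper's exact scheme.

```python
import math
import random

def sample_maxwell_velocity(v_th, rng):
    """One 3-D velocity from a Maxwellian whose per-component standard
    deviation is the thermal speed v_th = sqrt(kT/m)."""
    return [rng.gauss(0.0, v_th) for _ in range(3)]

def sample_collision_partner(v_ion, v_th, rng, n_sigma=4.0):
    """Rejection sampling of the gas velocity *conditioned on a collision*:
    the target density is proportional to |v_rel| * Maxwellian. The
    acceptance test bounds v_rel by |v_ion| + n_sigma * v_th, an
    approximation that is adequate for n_sigma ~ 4."""
    speed_ion = math.sqrt(sum(c * c for c in v_ion))
    v_rel_max = speed_ion + n_sigma * v_th
    while True:
        v_gas = sample_maxwell_velocity(v_th, rng)
        v_rel = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_ion, v_gas)))
        if rng.random() * v_rel_max < v_rel:
            return v_gas
```

The speed weighting matters most at low fields: for a slow ion, collision partners drawn this way are noticeably faster on average than unconditioned Maxwellian particles.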
Monte Carlo calculation of the neutron and gamma sensitivities of self-powered detectors
Energy Technology Data Exchange (ETDEWEB)
Pytel, K.
1981-01-01
A calculational model is presented for predicting self-powered detector response in various radiation environments. Transport of fast beta particles and electrons is treated by the Monte Carlo technique. A new model of electronic processes within the insulator is introduced. Calculated neutron and gamma sensitivities of five detectors (with Rh, V, Co, Ag and Pt emitters) are compared with reported experimental values. The comparison gives satisfactory agreement for the majority of the examined detectors.
Variance Estimation In Domain Decomposed Monte Carlo Eigenvalue Calculations
International Nuclear Information System (INIS)
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
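The per-history statistics that domain decomposition disturbs are worth stating concretely. A minimal sketch of the conventional tally estimator (illustrative scores, not Shift's implementation): treating a reentrant particle as a fresh history adds a second score that is correlated with the first, so the independence assumption behind this estimator fails near domain boundaries, which is the under-prediction the abstract describes.

```python
import math

def tally_statistics(history_scores):
    """Sample mean and its standard error for a Monte Carlo tally,
    treating each history score as an independent sample."""
    n = len(history_scores)
    mean = sum(history_scores) / n
    var = sum((x - mean) ** 2 for x in history_scores) / (n - 1)
    return mean, math.sqrt(var / n)

mean, err = tally_statistics([0.0, 1.0, 2.0, 1.0, 0.0, 2.0, 1.0, 1.0])
```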
On the Calculation of Reactor Time Constants Using the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Leppaenen, Jaakko [VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT (Finland)
2008-07-01
Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put to practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author)
Energy Technology Data Exchange (ETDEWEB)
Authier, N
1998-12-01
One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine accessibility for working personnel in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in materials (iron reactor vessels, medical therapy, satellite electronics). The flux and reaction-rate estimators used in Monte Carlo codes give average values over volumes or on surfaces of the geometrical description of the system. But in certain configurations, point estimates of deposited energy and dose are necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation with unbounded variance. The central limit theorem cannot be applied, so no easy confidence level may be calculated, and the convergence rate is then very poor. We propose in this study a new solution for the photon flux-at-a-point estimator. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that our new sampling scheme, specially developed to treat the anisotropy of coherent photon scattering, is necessary for good and regular behavior of the estimator. These developments, integrated into the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased point estimate on media interfaces. (author)
Diffusion Monte Carlo calculations of three-body systems
Institute of Scientific and Technical Information of China (English)
LÜ Meng-Jiao; REN Zhong-Zhou; LIN Qi-Hu
2012-01-01
The application of the diffusion Monte Carlo algorithm to three-body systems is studied. We develop a program and use it to calculate the properties of various three-body systems. Regular Coulomb systems such as atoms, molecules, and ions are investigated. The calculation is then extended to exotic systems where electrons are replaced by muons. Some nuclei with neutron halos are also calculated as three-body systems consisting of a core and two external nucleons. Our results agree well with experiments and with others' work.
Calculating Variable Annuity Liability 'Greeks' Using Monte Carlo Simulation
Cathcart, Mark J.; Steven Morrison; McNeil, Alexander J.
2011-01-01
Hedging methods to mitigate the exposure of variable annuity products to market risks require the calculation of market risk sensitivities (or "Greeks"). The complex, path-dependent nature of these products means these sensitivities typically must be estimated by Monte Carlo simulation. Standard market practice is to measure such sensitivities using a "bump and revalue" method. As well as requiring multiple valuations, such approaches can be unreliable for higher order Greeks, e.g., gamma. In...
Calculations of pair production by Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
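One classic way to make random number generation "portable" across vector and parallel machines is an LCG with O(log k) skip-ahead, which lets each processor jump to a disjoint part of a single stream. The sketch below uses the well-known 48-bit drand48-style constants as an assumption; the paper's actual generators are not specified here.

```python
def lcg_skip(seed, k, a=0x5DEECE66D, c=0xB, m=2**48):
    """Advance the LCG x -> (a*x + c) mod m by k steps in O(log k),
    using binary exponentiation of the affine map (A, C): x -> A*x + C.
    Composition rule: (A1, C1) o (A2, C2) = (A1*A2, A1*C2 + C1)."""
    A, C = 1, 0                      # identity map
    Ab, Cb = a % m, c % m            # the base map, squared each round
    while k:
        if k & 1:
            A, C = (Ab * A) % m, (Ab * C + Cb) % m
        Ab, Cb = (Ab * Ab) % m, (Ab * Cb + Cb) % m
        k >>= 1
    return (A * seed + C) % m
```

Process p of P processes can then seed its private substream at `lcg_skip(seed, p * stream_length)` without generating the intervening values.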
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are analyzed quantitatively based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil-well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
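The weight windows such a tool computes are played at each track segment roughly as follows. This is a generic sketch of the weight-window game, not AVATAR's internals; the split cap is an arbitrary safeguard added here for illustration.

```python
import random

def apply_weight_window(weight, w_low, w_up, rng=random):
    """Weight-window game: above the window, split the particle into
    copies that land inside the window; below it, play Russian roulette
    toward the survival weight w_surv = (w_low + w_up) / 2; inside the
    window, do nothing. Returns the list of resulting particle weights
    (an empty list means the history was killed)."""
    w_surv = 0.5 * (w_low + w_up)
    if weight > w_up:
        n = min(int(weight / w_up) + 1, 10)   # arbitrary cap on splitting
        return [weight / n] * n
    if weight < w_low:
        if rng.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]
```

Both branches conserve expected weight: splitting shares the weight exactly, and roulette survives with probability weight / w_surv at weight w_surv.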
Towards exact variational Monte-Carlo calculations in light nuclei
Usmani, Q N; Singh, A
2005-01-01
We propose a new variational wave function, which is a modification of an earlier one with operatorial correlations. Calculations are carried out for light nuclei with the new wave function using the AV₁₈ NN and Urbana IX (UIX) NNN interactions. The new variational ansatz is based on an error analysis of the earlier wave function. The calculated energies are in better agreement with Green's Function Monte Carlo (GFMC) and other techniques. The error analysis is extended further, and additional reasonable modifications of the wave function are proposed for future studies.
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. Good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
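The random-sampling ("uncertainty Monte Carlo") branch of such a methodology can be illustrated on a single-nuclide toy problem: sample the cross section from its assumed distribution, rerun the (here analytic) depletion, and read off the spread of the inventory. All numbers and the normal distribution are illustrative assumptions; the real system couples MCNP-4C and ACAB.

```python
import math
import random

def depletion(n0, sigma, phi, t):
    """Single-nuclide depletion under constant flux:
    N(t) = N0 * exp(-sigma * phi * t)."""
    return n0 * math.exp(-sigma * phi * t)

def propagate_uncertainty(n0, sigma, rel_unc, phi, t, n_samples=10000, seed=0):
    """Sample the cross section from an assumed normal distribution with
    relative standard deviation rel_unc, and collect the mean and spread
    of the resulting inventory."""
    rng = random.Random(seed)
    results = [depletion(n0, rng.gauss(sigma, rel_unc * sigma), phi, t)
               for _ in range(n_samples)]
    mean = sum(results) / n_samples
    std = math.sqrt(sum((r - mean) ** 2 for r in results) / (n_samples - 1))
    return mean, std

# sigma * phi * t = 1, so a 5% cross-section uncertainty maps to roughly
# a 5% inventory uncertainty (linear propagation regime).
mean, std = propagate_uncertainty(1.0, 1.0e-24, 0.05, 1.0e14, 1.0e10)
```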
Quantum Monte Carlo Calculations in Solids with Downfolded Hamiltonians.
Ma, Fengjie; Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2015-06-01
We present a combination of a downfolding many-body approach with auxiliary-field quantum Monte Carlo (AFQMC) calculations for extended systems. Many-body calculations operate on a simpler Hamiltonian which retains material-specific properties. The Hamiltonian is systematically improvable and allows one to dial, in principle, between the simplest model and the original Hamiltonian. As a by-product, pseudopotential errors are essentially eliminated using frozen orbitals constructed adaptively from the solid environment. The computational cost of the many-body calculation is dramatically reduced without sacrificing accuracy. Excellent accuracy is achieved for a range of solids, including semiconductors, ionic insulators, and metals. We apply the method to calculate the equation of state of cubic BN under ultrahigh pressure, and determine the spin gap in NiO, a challenging prototypical material with strong electron correlation effects. PMID:26196632
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
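The failure mode described here is easy to reproduce with a toy estimator whose scores have a finite mean but infinite variance, for instance a Pareto tail. The sketch is illustrative only and has no relation to the Hubbard-model computations in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def pareto_samples(alpha, n):
    """Pareto(alpha) on [1, inf): E[X] = alpha/(alpha - 1) is finite for
    alpha > 1, but the variance is infinite for alpha <= 2 -- a stand-in
    for a Monte Carlo estimator with a heavy-tailed score distribution."""
    u = 1.0 - rng.random(n)          # uniform on (0, 1]
    return u ** (-1.0 / alpha)

def error_bar(x):
    """The conventional error estimate s / sqrt(N), invalid if Var = inf."""
    return x.std(ddof=1) / np.sqrt(len(x))

# Twenty statistically identical runs each: with infinite variance
# (alpha = 1.5) the estimated error bars scatter wildly between runs;
# with finite variance (alpha = 5) they are reproducible.
errs_inf = [error_bar(pareto_samples(1.5, 100_000)) for _ in range(20)]
errs_fin = [error_bar(pareto_samples(5.0, 100_000)) for _ in range(20)]
```

The finite-variance control produces stable error bars, while the alpha = 1.5 runs return "error bars" that differ by large factors between identical runs, which is exactly why such results can become unreliable or meaningless.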
Electron transport in radiotherapy using local-to-global Monte Carlo
International Nuclear Information System (INIS)
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and from a faster calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given
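The two-stage L-G idea can be sketched with stand-in toy physics: a "local" run tabulates a per-step energy-loss PDF, and a "global" stepping loop then resolves each step with a single table lookup. The Gaussian step model and all numbers below are hypothetical, not taken from the paper:

```python
import random

rng = random.Random(42)

# --- Stage 1: "local" calculation -------------------------------------
# Histogram the fractional energy loss per mesh-sized step into a PDF
# table; real L-G work would use long runs of a detailed transport code.
BINS = 20
counts = [0] * BINS
N_LOCAL = 100_000
for _ in range(N_LOCAL):
    loss = min(abs(rng.gauss(0.05, 0.02)), 0.999)   # toy step physics
    counts[int(loss * BINS)] += 1

# cumulative distribution for inverse-transform sampling
cdf, acc = [], 0
for c in counts:
    acc += c
    cdf.append(acc / N_LOCAL)

def sample_loss():
    """Sample a stored fractional energy loss per step (bin midpoint)."""
    u = rng.random()
    for i, c in enumerate(cdf):
        if u <= c:
            return (i + 0.5) / BINS
    return (BINS - 0.5) / BINS

# --- Stage 2: "global" calculation ------------------------------------
# The stepping code takes few, large steps, each resolved by one lookup.
def transport(e0, e_cut):
    e, steps = e0, 0
    while e > e_cut:
        e *= 1.0 - sample_loss()
        steps += 1
    return steps

print("steps to slow from 10 MeV to 0.1 MeV:", transport(10.0, 0.1))
```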
Monte-Carlo calculations of positron implantation profiles in silver and gold
Aydin, A
2000-01-01
To investigate the implantation profiles of positrons in silver and gold, a Monte Carlo program developed previously to simulate the transport of positrons in metals was used. The simulation technique is based mainly on the screened Rutherford differential cross section with a spin-relativistic correction factor for elastic scattering at high energies, supplemented by total cross sections at low energies; Gryzinski's semi-empirical expression to simulate the energy loss due to inelastic scattering; and Liljequist's model to calculate the total inelastic scattering cross section. Backscattering probabilities and mean penetration depths were calculated from the implantation profiles of positrons at energies between 1 and 50 keV, normally incident on semi-infinite silver and gold targets. The calculated backscattering probabilities and mean penetration depths are compared with comparable Monte Carlo data and experimental results for semi-infinite silver and gold targets. The agreement is quite satisfact...
Neutron spectrum obtained with Monte Carlo and transport theory
International Nuclear Information System (INIS)
The development of computers, with ever-increasing memory capacity and processing speed, has enabled the application of the Monte Carlo method to estimate fluxes in energy structures of thousands of fine bins. Usually the MC calculation is made using continuous-energy nuclear data and exact geometry; self-shielding and interference of nuclide resonances are properly considered. Therefore, the fluxes obtained by this method may be a good estimation of the neutron energy distribution (spectrum) for the problem. In an earlier work it was proposed to use these fluxes as a weighting spectrum to generate multigroup cross sections for fast reactor analysis using deterministic codes. This non-traditional use of MC calculation needs validation to gain confidence in the results. The work presented here is the first validation step of this scheme. The spectra of the JOYO MK-I first-core fuel assembly and the Godiva benchmark were calculated using the tally flux estimator of the MCNP code and compared with the reference. Also, the two problems were solved with the multigroup transport theory code XSDRN of the AMPX system using the 171-energy-group VITAMIN-C library. The spectra differences arising from the utilization of these codes, the influence of the evaluated data file, and the application to fast reactor calculation are discussed. (author)
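The flux-weighting scheme described above amounts to a flux-weighted group collapse, sigma_g = sum_i(sigma_i * phi_i) / sum_i(phi_i) over the fine bins i falling in broad group g. The sketch below uses invented fine-bin data, not MCNP output:

```python
def collapse(energies, flux, sigma, group_edges):
    """Flux-weighted collapse of fine-bin cross sections onto broad groups.

    sigma_g = sum_i(sigma_i * phi_i) / sum_i(phi_i) over fine bins i whose
    energy falls in group g; group_edges are descending broad-group bounds."""
    result = []
    for g in range(len(group_edges) - 1):
        hi, lo = group_edges[g], group_edges[g + 1]
        num = den = 0.0
        for e, phi, sig in zip(energies, flux, sigma):
            if lo <= e < hi:
                num += sig * phi
                den += phi
        result.append(num / den if den > 0.0 else 0.0)
    return result

# invented fine-bin data: bin-center energies, a 1/E-like weighting flux,
# and a flat cross section with one "resonance" bump near 5 MeV
energies = [10.0 - 0.1 * i - 0.05 for i in range(100)]   # 9.95 ... 0.05 MeV
flux = [1.0 / e for e in energies]
sigma = [2.0 + (5.0 if 4.9 < e < 5.3 else 0.0) for e in energies]

two_group = collapse(energies, flux, sigma, [10.0, 5.0, 0.0])
print("two-group cross sections:", two_group)
```

The resonance bump is diluted into each broad group in proportion to the weighting flux it sees, which is exactly why the quality of the weighting spectrum matters for the collapsed set.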
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
Quantum Monte Carlo calculations of two neutrons in finite volume
Klos, P; Tews, I; Gandolfi, S; Gezerlis, A; Hammer, H -W; Hoferichter, M; Schwenk, A
2016-01-01
Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.
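The Lüscher formula mentioned above relates a two-particle energy in a periodic box of side L to the infinite-volume S-wave phase shift; in standard notation (not reproduced from this paper):

```latex
% Luscher's S-wave quantization condition: the interacting momentum p,
% extracted from the finite-volume energy, satisfies
p \cot \delta_0(p) = \frac{1}{\pi L}\, S(\eta), \qquad
\eta = \left(\frac{p L}{2\pi}\right)^{2},
% with the regulated zeta-function sum
S(\eta) = \lim_{\Lambda \to \infty}
\left[\sum_{|\mathbf{n}| < \Lambda} \frac{1}{\mathbf{n}^{2} - \eta}
      - 4\pi \Lambda\right], \quad \mathbf{n} \in \mathbb{Z}^{3}.
% Expanding p \cot \delta_0(p) = -1/a_0 + r_0 p^2/2 + \dots then yields
% the scattering length a_0 and effective range r_0 from energies
% computed at several box sizes L.
```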
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
Patient-dependent beam-modifier physics in Monte Carlo photon dose calculations.
Schach von Wittenau, A E; Bergstrom, P M; Cox, L J
2000-05-01
Model pencil-beam-on-slab calculations, together with a series of detailed calculations of photon and electron output from commercial accelerators, are used to quantify the level(s) of physics required for the Monte Carlo transport of photons and electrons in treatment-dependent beam modifiers, such as jaws, wedges, blocks, and multileaf collimators, in photon teletherapy dose calculations. The physics approximations investigated comprise (1) not tracking particles below a given kinetic energy, (2) continuing to track particles but performing simplified collision physics, particularly in handling secondary particle production, and (3) not tracking particles in specific spatial regions. Figures of merit needed to estimate the effects of these approximations are developed, and these estimates are compared with full-physics Monte Carlo calculations of the contribution of the collimating jaws to the on-axis depth-dose curve in a water phantom. These figures of merit are next used to evaluate various approximations used in coupled photon/electron physics in beam modifiers. Approximations for tracking electrons in air are then evaluated. It is found that knowledge of the materials used for beam modifiers, of the energies of the photon beams used, and of the length scales typically found in photon teletherapy plans allows a number of simplifying approximations to be made in the Monte Carlo transport of secondary particles from the accelerator head and beam modifiers to the isocenter plane.
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 and the AREVA NP depletion code KORIGEN. The physics quality of both codes is preserved. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations. Additionally
Adjoint Monte Carlo techniques and codes for organ dose calculations
International Nuclear Information System (INIS)
Adjoint Monte Carlo simulations can be effectively used for the estimation of doses in small targets when the sources are extended in large volumes or surfaces. The main features of two computer codes for calculating doses at free points or in organs of an anthropomorphic phantom are described. In the first program (REBEL-3) natural gamma-emitting sources are contained in the walls of a dwelling room; in the second one (POKER-CAMP) the user can specify arbitrary gamma sources with different spatial distributions in the environment: in (or on the surface of) the ground and in the air. 3 figures
Monte Carlo Particle Transport Capability for Inertial Confinement Fusion Applications
Energy Technology Data Exchange (ETDEWEB)
Brantley, P S; Stuart, L M
2006-11-06
A time-dependent massively-parallel Monte Carlo particle transport calculational module (ParticleMC) for inertial confinement fusion (ICF) applications is described. The ParticleMC package is designed with the long-term goal of transporting neutrons, charged particles, and gamma rays created during the simulation of ICF targets and surrounding materials, although currently the package treats neutrons and gamma rays. Neutrons created during thermonuclear burn provide a source of neutrons to the ParticleMC package. Other user-defined sources of particles are also available. The module is used within the context of a hydrodynamics client code, and the particle tracking is performed on the same computational mesh as used in the broader simulation. The module uses domain-decomposition and the MPI message passing interface to achieve parallel scaling for large numbers of computational cells. The Doppler effects of bulk hydrodynamic motion and the thermal effects due to the high temperatures encountered in ICF plasmas are directly included in the simulation. Numerical results for a three-dimensional benchmark test problem are presented in 3D XYZ geometry as a verification of the basic transport capability. In the full paper, additional numerical results including a prototype ICF simulation will be presented.
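The domain-decomposed tracking described above can be sketched without MPI by letting Python lists play the role of per-rank message buffers; the 1-D slab geometry, step lengths, and 50% absorption probability below are invented for illustration, not ParticleMC's model:

```python
import random

rng = random.Random(7)
N_DOMAINS = 4               # stand-ins for MPI ranks; domain d owns x in [d, d+1)
X_MAX = float(N_DOMAINS)

def track(x, domain):
    """Follow one particle inside its domain and return its fate.

    ('exit', x, d_new) means the particle crossed a domain boundary and,
    in an MPI code, would be sent to the buffer of the new owning rank."""
    while True:
        x += rng.expovariate(2.0) * rng.choice((-1.0, 1.0))
        if x < 0.0 or x >= X_MAX:
            return ("leaked", x, None)
        d = int(x)
        if d != domain:
            return ("exit", x, d)
        if rng.random() < 0.5:          # toy absorption probability
            return ("absorbed", x, domain)

# source particles born in domain 0; lists play the role of message queues
buffers = [[] for _ in range(N_DOMAINS)]
buffers[0] = [0.5] * 1000
tally = {"absorbed": 0, "leaked": 0}
while any(buffers):
    for d in range(N_DOMAINS):
        batch, buffers[d] = buffers[d], []
        for x in batch:
            fate, x_new, d_new = track(x, d)
            if fate == "exit":
                buffers[d_new].append(x_new)   # the "MPI send" to the new owner
            else:
                tally[fate] += 1
print(tally)
```

The outer loop runs until every buffer drains, which mirrors the termination-detection problem a real domain-decomposed MC code must solve across ranks.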
Global variance reduction for Monte Carlo reactor physics calculations
International Nuclear Information System (INIS)
Over the past few decades, hybrid Monte Carlo-Deterministic (MC-DT) techniques have focused primarily on shielding applications, i.e. problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations on the assembly level, where typically one needs to execute the flux solver on the order of 10^3-10^5 times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross-section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that using the SUBSPACE method a significant speedup can be achieved over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
Russian roulette efficiency in Monte Carlo resonant absorption calculations
Energy Technology Data Exchange (ETDEWEB)
Ghassoun, J. E-mail: ghassoun@ucam.ac.ma; Jehouani, A
2000-11-15
The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at E_s = 2 MeV and E_s = 676.45 eV, whereas the energy cut-off is fixed at E_c = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made between the Monte Carlo results and those of a deterministic method based on the numerical solution of the neutron slowing-down equations by iteration, for several dilutions.
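The splitting and Russian roulette game coupled to survival biasing can be illustrated on a toy forward-streaming slab problem (unit total cross section; the non-absorption probability 0.5 is applied as a weight factor instead of killing histories). All thresholds and parameters below are arbitrary illustrative choices, not the authors':

```python
import random

rng = random.Random(2024)
THICKNESS = 10.0               # slab thickness in mean free paths
W_LOW, W_SURVIVE = 0.25, 1.0   # roulette threshold and survivor weight

def run(n):
    """Toy slab-transmission estimate using survival biasing, splitting,
    and Russian roulette; model invented for illustration only."""
    score = 0.0
    for _ in range(n):
        stack = [(0.0, 1.0)]               # (depth, statistical weight)
        while stack:
            x, w = stack.pop()
            x += rng.expovariate(1.0)      # distance to next collision
            if x >= THICKNESS:
                score += w                 # transmitted: tally the weight
                continue
            w *= 0.5                       # survival biasing at the collision
            if w >= W_LOW:
                # split 2-for-1 at half weight: more tracks where it matters
                stack.append((x, 0.5 * w))
                stack.append((x, 0.5 * w))
            elif rng.random() < w / W_SURVIVE:
                stack.append((x, W_SURVIVE))   # roulette survivor, boosted
            # else: rouletted away; expected weight is preserved either way
    return score / n

print("transmission estimate:", run(20_000))
```

For this toy model the expected transmission is exp(-5) ≈ 0.0067 analytically, so the estimate can be checked directly; splitting keeps deep-penetrating weight populated while roulette stops spending time on negligible-weight histories, which is the same trade the abstract exploits for resonance absorption.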
Russian roulette efficiency in Monte Carlo resonant absorption calculations
Ghassoun; Jehouani
2000-10-01
The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made between the Monte Carlo results and those of a deterministic method based on the numerical solution of the neutron slowing-down equations by iteration, for several dilutions. PMID:11003535
Russian roulette efficiency in Monte Carlo resonant absorption calculations
International Nuclear Information System (INIS)
The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made between the Monte Carlo results and those of a deterministic method based on the numerical solution of the neutron slowing-down equations by iteration, for several dilutions
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
International Nuclear Information System (INIS)
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
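The GUM-style propagation described above, sensitivity coefficients times standard uncertainties combined in quadrature, can be sketched with finite-difference sensitivities on a stand-in dose model. The model, parameter names, and numbers below are hypothetical, not the EGSnrc benchmark quantities:

```python
def combined_uncertainty(f, x, u):
    """GUM-style combined standard uncertainty of y = f(x).

    Sensitivity coefficients c_i = df/dx_i are estimated by central finite
    differences, as one would perturb the inputs of a simulation model,
    then combined in quadrature: u_c^2 = sum_i (c_i * u_i)^2."""
    uc2 = 0.0
    coeffs = []
    for i, (xi, ui) in enumerate(zip(x, u)):
        h = 1e-6 * abs(xi) if xi else 1e-6
        xp, xm = list(x), list(x)
        xp[i], xm[i] = xi + h, xi - h
        ci = (f(xp) - f(xm)) / (2.0 * h)
        coeffs.append(ci)
        uc2 += (ci * ui) ** 2
    return coeffs, uc2 ** 0.5

# hypothetical dose model and input uncertainties (illustrative only):
# dose ~ fluence * mass-energy-absorption coefficient / density
def dose(x):
    fluence, mu_en, rho = x
    return fluence * mu_en / rho

coeffs, u_c = combined_uncertainty(dose, [1.0e9, 3.0e-3, 1.19],
                                   [1.0e7, 6.0e-5, 0.01])
print("combined standard uncertainty:", u_c)
```

In the study itself the expensive part is estimating each c_i, since every perturbed evaluation is a full Monte Carlo run with its own statistical noise; the quadrature combination step is the same.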
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
Sgouros, George
2003-01-01
This book examines the applications of Monte Carlo (MC) calculations in therapeutic nuclear medicine, from basic principles to computer implementations of software packages and their applications in radiation dosimetry and treatment planning. It is written for nuclear medicine physicists and physicians as well as radiation oncologists, and can serve as a supplementary text for medical imaging, radiation dosimetry and nuclear engineering graduate courses in science, medical and engineering faculties. With chapters written by recognised authorities in their particular fields, the book covers the entire range of MC applications in therapeutic medical and health physics, from its use in imaging prior to therapy to dose-distribution modelling in targeted radiotherapy. The contributions discuss the fundamental concepts of radiation dosimetry, radiobiological aspects of targeted radionuclide therapy and the various components and steps required for implementing a dose calculation and treatment planning methodology in ...
Quantum Monte Carlo Calculations of Nucleon-Nucleus Scattering
Wiringa, R. B.; Nollett, Kenneth M.; Pieper, Steven C.; Brida, I.
2009-10-01
We report recent quantum Monte Carlo (variational and Green's function) calculations of elastic nucleon-nucleus scattering. We are adding the cases of proton-^4He, neutron-^3H and proton-^3He scattering to a previous GFMC study of neutron-^4He scattering [1]. To do this requires generalizing our methods to include long-range Coulomb forces and to treat coupled channels. The two four-body cases can be compared to other accurate four-body calculational methods such as the AGS equations and hyperspherical harmonic expansions. We will present results for the Argonne v18 interaction alone and with Urbana and Illinois three-nucleon potentials. [1] K.M. Nollett, S. C. Pieper, R.B. Wiringa, J. Carlson, and G.M. Hale, Phys. Rev. Lett. 99, 022502 (2007)
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot
International Nuclear Information System (INIS)
Restricted path integral Monte Carlo simulations have been used to calculate the equilibrium properties of deuterium for two densities, 0.674 and 0.838 g cm^-3 (r_s = 2.00 and 1.86), in the temperature range 10^5 ≤ T ≤ 10^6 K. We carefully assess size effects and dependence on the time step of the path integral. Further, we compare the results obtained with a free-particle nodal restriction with those from a self-consistent variational principle, which includes interactions and bound states. By using the calculated internal energies and pressures, we determine the shock Hugoniot and compare with recent laser shock wave experiments as well as other theories. (c) 2000 The American Physical Society
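Determining a Hugoniot point from simulation output reduces to finding where the Rankine-Hugoniot energy condition E - E0 = (P + P0)(V0 - V)/2 is satisfied along an isochore. The (T, E, P) table below is an invented monotone toy EOS in arbitrary units, not the paper's deuterium data:

```python
def hugoniot_residual(e, p, v, e0, p0, v0):
    """Rankine-Hugoniot energy condition: zero on the shock Hugoniot.

    E - E0 = (P + P0) * (V0 - V) / 2
    """
    return (e - e0) - 0.5 * (p + p0) * (v0 - v)

V0, E0, P0 = 1.0, 0.0, 0.0   # initial (uncompressed, cold) state, toy units
V = 0.5                      # compressed volume of the simulated isochore
table = [                    # (T, E(T,V), P(T,V)): invented monotone toy EOS
    (1.0, 0.1, 0.5),
    (2.0, 0.5, 1.0),
    (3.0, 0.9, 1.6),
    (4.0, 1.4, 2.3),
]

# locate the bracketing temperature pair and interpolate the zero crossing
t_hug, prev = None, None
for t, e, p in table:
    r = hugoniot_residual(e, p, V, E0, P0, V0)
    if prev and prev[1] < 0.0 <= r:
        t0, r0 = prev
        t_hug = t0 + (t - t0) * (-r0) / (r - r0)
        break
    prev = (t, r)
print("Hugoniot temperature (toy units):", t_hug)
```

In practice each table row comes from an independent PIMC run at fixed (T, V), so the residual carries statistical error bars that propagate into the located Hugoniot point.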
Monte Carlo dose calculation in dental amalgam phantom.
Aziz, Mohd Zahri Abdul; Yusoff, A L; Osman, N D; Abdullah, R; Rabaie, N A; Salikin, M S
2015-01-01
It has become a great challenge in the modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in treatment planning system need to be trustful as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in treatment volume, the CT images input showed prominent streak artefact, thus, contributed sources of error. Hence, metal amalgam phantom often creates streak artifacts, which cause an error in the dose calculation. Thus, a streak artifact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assigning. Furthermore, the amalgam density data were corrected to provide amalgam voxel with accurate density value. As for the errors of dose uncertainties due to metal amalgam, they were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and the neck regions, this correction strategy is suggested in reducing calculation uncertainties through MC calculation. PMID:26500401
Monte Carlo dose calculation in dental amalgam phantom.
Aziz, Mohd Zahri Abdul; Yusoff, A L; Osman, N D; Abdullah, R; Rabaie, N A; Salikin, M S
2015-01-01
It has become a great challenge in the modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in treatment planning system need to be trustful as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in treatment volume, the CT images input showed prominent streak artefact, thus, contributed sources of error. Hence, metal amalgam phantom often creates streak artifacts, which cause an error in the dose calculation. Thus, a streak artifact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assigning. Furthermore, the amalgam density data were corrected to provide amalgam voxel with accurate density value. As for the errors of dose uncertainties due to metal amalgam, they were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and the neck regions, this correction strategy is suggested in reducing calculation uncertainties through MC calculation.
Monte carlo dose calculation in dental amalgam phantom
Directory of Open Access Journals (Sweden)
Mohd Zahri Abdul Aziz
2015-01-01
It has become a great challenge in the modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in treatment planning system need to be trustful as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in treatment volume, the CT images input showed prominent streak artefact, thus, contributed sources of error. Hence, metal amalgam phantom often creates streak artifacts, which cause an error in the dose calculation. Thus, a streak artifact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assigning. Furthermore, the amalgam density data were corrected to provide amalgam voxel with accurate density value. As for the errors of dose uncertainties due to metal amalgam, they were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and the neck regions, this correction strategy is suggested in reducing calculation uncertainties through MC calculation.
Monte Carlo shielding calculations in the double null configuration of NET
International Nuclear Information System (INIS)
Multi-dimensional Monte Carlo shielding calculations have been performed for evaluating the shielding performance of the NET reactor components. Biased Monte Carlo techniques, that are available in the MCNP-code, have been applied for describing the neutron and photon transport through the shielding components. A realistic three-dimensional model of a NET torus sector has been used, that takes into account all relevant reactor components adequately. The poloidal variations of the physical quantities relevant for the radiation shielding, the shielding performance of the divertors, and the neutron streaming through toroidal segment gaps and its impact on the shielding performance of the vacuum vessel are the main objects of the analysis. Furthermore, the relations between idealized one-dimensional and realistic three-dimensional shielding calculations are analyzed. (orig.)
A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. The method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.
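The kernel-superposition idea described above can be sketched in a few lines: the expensive Monte Carlo step is run once per source region, and thereafter the detector response for any fission-source distribution is a cheap weighted sum. Everything below (the region count, the toy attenuation model inside `transport_kernel`) is an invented placeholder, not RACER physics.

```python
import math
import random

random.seed(1)

# Hypothetical setup: 4 core source regions and one ex-core detector.
# transport_kernel stands in for a full 3-D Monte Carlo transport run; here
# it is a toy per-source-neutron response through a random path length.
def transport_kernel(region, n_histories=20000):
    mu = 0.5 + 0.1 * region              # assumed attenuation per region, 1/cm
    total = 0.0
    for _ in range(n_histories):
        path = random.expovariate(1.0)   # sampled path length, cm
        total += math.exp(-mu * path)    # uncollided arrival probability
    return total / n_histories

# Expensive step: kernels are computed ONCE by Monte Carlo.
kernels = [transport_kernel(r) for r in range(4)]

# Cheap step: the response for ANY fission-source distribution is just a
# weighted sum of the precomputed kernels.
def detector_response(source_distribution):
    return sum(s * k for s, k in zip(source_distribution, kernels))

resp_a = detector_response([0.4, 0.3, 0.2, 0.1])   # source peaked near detector
resp_b = detector_response([0.1, 0.2, 0.3, 0.4])   # source peaked far away
```

Evaluating hundreds of source distributions now costs only a dot product each, which is the practical payoff of the Green's function formulation.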
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Energy Technology Data Exchange (ETDEWEB)
Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Shape based Monte Carlo code for light transport in complex heterogeneous tissues
Margallo-Balbás, E.; French, P.J.
2007-01-01
A Monte Carlo code for the calculation of light transport in heterogeneous scattering media is presented together with its validation. Triangle meshes are used to define the interfaces between different materials, in contrast with techniques based on individual volume elements. This approach allows
MCNP, a general Monte Carlo code for neutron and photon transport: a summary
International Nuclear Information System (INIS)
The general-purpose Monte Carlo code MCNP can be used for neutron, photon, or coupled neutron-photon transport, including the capability to calculate eigenvalues for critical systems. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces.
Calculation of Gamma-ray Responses for HPGe Detectors with TRIPOLI-4 Monte Carlo Code
Lee, Yi-Kang; Garg, Ruchi
2014-06-01
The gamma-ray response calculation of HPGe (High-Purity Germanium) detectors is one of the most important topics for Monte Carlo transport codes in nuclear instrumentation applications. In this study, the new options of the TRIPOLI-4 Monte Carlo transport code for gamma-ray spectrometry were investigated. Recent improvements include the modeling of gamma rays from electron-positron annihilation, low-energy electron transport, and low-energy characteristic X-ray production. The impact of these improvements on the detector efficiency in gamma-ray spectrometry calculations was verified. Four models of HPGe detectors and sample sources were studied. The germanium crystal, the dead layer of the crystal, the central hole, the beryllium window, and the metal housing are the essential parts in detector modeling. A point source, a disc source, and a cylindrical extended source containing a liquid radioactive solution were used to study the TRIPOLI-4 calculations of gamma-ray energy deposition and gamma-ray self-shielding. The calculations of full-energy-peak and total detector efficiencies for different sample-detector geometries were performed. Using the TRIPOLI-4 code, different gamma-ray energies were applied in order to establish the efficiency curves of the HPGe gamma-ray detectors.
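The two efficiency figures mentioned above have a simple Monte Carlo definition: the full-energy-peak efficiency counts photons depositing their entire energy, and the total efficiency counts any deposition at all, both per emitted photon. The toy tally below uses invented probabilities (solid-angle fraction, interaction and full-deposition probabilities), not TRIPOLI-4 physics.

```python
import random

random.seed(2)

# Toy efficiency tally for an idealized detector. All numbers are
# illustrative assumptions, not real HPGe data.
n_emitted = 50000
geom_fraction = 0.05   # assumed solid-angle fraction subtended by the crystal
p_interact = 0.6       # assumed probability an entering photon interacts at all
p_full_dep = 0.4       # assumed probability an interaction deposits the full energy

full_peak = 0
any_deposit = 0
for _ in range(n_emitted):
    if random.random() > geom_fraction:
        continue                       # photon misses the detector
    if random.random() > p_interact:
        continue                       # passes through without interacting
    any_deposit += 1                   # contributes to the total efficiency
    if random.random() < p_full_dep:
        full_peak += 1                 # full energy deposited: photopeak count

fep_efficiency = full_peak / n_emitted      # full-energy-peak efficiency
total_efficiency = any_deposit / n_emitted  # total detector efficiency
```

Repeating the tally over a grid of source energies and geometries is what builds the efficiency curves the abstract refers to.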
Juste, Belén; Miró, R.; Abella, V.; Santos, A.; Verdú, Gumersindo
2015-11-01
Radiation therapy treatment planning based on Monte Carlo simulation provides a very accurate dose calculation compared to deterministic systems. Nowadays, Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy to verify the dose received by patients. In the present work, we have used MCNP6 (Monte Carlo N-Particle transport code) to simulate the irradiation of an anthropomorphic phantom (RANDO) with a medical linear accelerator. A detailed model of the Elekta Precise multileaf collimator using a 6 MeV photon beam was designed and validated by means of different beam sizes and shapes in previous works. To include the RANDO phantom geometry in the simulation, a set of computed tomography images of the phantom was obtained and formatted. The slices are input into the PLUNC software, which performs the segmentation by defining anatomical structures, and a Matlab algorithm writes the phantom information in MCNP6 input deck format. The simulation was verified, and the phantom model and irradiation were therefore validated through the comparison of high-sensitivity MOSFET dosimeter (Best Medical Canada) measurements at different points inside the phantom with simulation results. On-line wireless MOSFETs provide dose estimation in an extremely thin sensitive volume, so a meticulous and accurate validation has been performed. The comparison shows good agreement between the MOSFET measurements and the Monte Carlo calculations, confirming the validity of the developed procedure to include patient CTs in simulations and supporting the use of Monte Carlo simulations for accurate therapy treatment planning.
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, Larry [Iowa State Univ., Ames, IA (United States)
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
International Nuclear Information System (INIS)
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence take different execution paths, we use a simulation scheme in which photon transport and electron transport are separated, to partially relieve the thread divergence issue. A high-performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
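The particle-type separation described above can be sketched on the CPU as two uniform processing phases: a photon queue and an electron queue, each drained by a single loop of identical work, mirroring how each GPU kernel then runs without divergent branches. The "physics" below is a deliberately trivial placeholder (each photon hands part of its energy to one secondary electron).

```python
import random

random.seed(3)

# CPU sketch of the batching idea: particles are collected into type-specific
# queues, and each queue is processed with one uniform routine, instead of
# letting every history mix photon and electron steps (divergent paths).
photons = [1.0] * 1000          # initial photon energies in MeV, assumed
electrons = []
deposited = 0.0

def transport_photon(energy):
    # Toy interaction: photon transfers a random fraction to an electron.
    return energy * random.uniform(0.3, 0.7)

def transport_electron(energy):
    # Toy interaction: electron deposits all of its energy locally.
    return energy

# Phase 1: "photon kernel" -- every iteration runs identical photon physics.
while photons:
    electrons.append(transport_photon(photons.pop()))

# Phase 2: "electron kernel" -- every iteration runs identical electron physics.
while electrons:
    deposited += transport_electron(electrons.pop())
```

On a GPU, each phase maps to one kernel launch over a homogeneous particle array, which is what recovers the lost efficiency.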
International Nuclear Information System (INIS)
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) the air kerma value per photon emitted per unit source area, calculated with Monte Carlo (MC) simulations; (ii) a description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) an urban model that combines these elements to calculate the resulting doses for the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulation are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation are described: the 'location factor method' and the combination of the relative contamination of surfaces with air kerma values. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of model features.
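The second dose-calculation philosophy above, combining the relative contamination of each surface with precomputed air kerma values, reduces to a weighted sum once the MC-derived kerma factors exist. All surface names and numerical values in this sketch are invented for illustration; real factors come from the MC simulations in component (i).

```python
# Hypothetical MC-derived air kerma per unit deposition, Gy per (Bq/m^2).
kerma_per_deposit = {
    "roofs":   2.0e-12,
    "walls":   0.8e-12,
    "streets": 3.5e-12,
    "lawns":   4.0e-12,
}

# Assumed fraction of the deposited activity ending up on each surface,
# which is what component (ii) of the model would provide.
relative_contamination = {
    "roofs":   0.3,
    "walls":   0.1,
    "streets": 0.2,
    "lawns":   0.4,
}

deposit_density = 1.0e6     # Bq/m^2 reference total deposition, assumed

# Component (iii): combine surfaces to get the air kerma at the receptor.
air_kerma = sum(
    relative_contamination[s] * deposit_density * kerma_per_deposit[s]
    for s in kerma_per_deposit
)
```

The location factor method would instead scale a reference open-field dose by an empirical factor per location type; the surface-sum form above makes the dependence on each surface's contamination explicit.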
A generic algorithm for Monte Carlo simulation of proton transport
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
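Sampling the polar scattering angle from a numerical DCS table, as described above, is typically done by inverse-transform sampling of a tabulated cumulative distribution. The sketch below uses an invented, strongly forward-peaked toy DCS on a one-degree grid, not real proton elastic cross sections, and plain linear interpolation rather than the paper's adaptive error-controlled algorithm.

```python
import bisect
import math
import random

random.seed(4)

# Angle grid (radians) and a toy forward-peaked "DCS" on that grid.
theta = [i * math.pi / 180 for i in range(181)]
dcs = [math.exp(-10.0 * t) for t in theta]       # invented values

# Build the cumulative distribution by trapezoidal integration, then normalize.
cdf = [0.0]
for i in range(1, len(theta)):
    step = 0.5 * (dcs[i] + dcs[i - 1]) * (theta[i] - theta[i - 1])
    cdf.append(cdf[-1] + step)
norm = cdf[-1]
cdf = [c / norm for c in cdf]

def sample_angle():
    # Inverse-transform sampling: locate the bin containing a uniform random
    # number in the CDF, then interpolate linearly inside the bin.
    xi = random.random()
    i = max(bisect.bisect_left(cdf, xi), 1)
    frac = (xi - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return theta[i - 1] + frac * (theta[i] - theta[i - 1])

samples = [sample_angle() for _ in range(10000)]
mean_theta = sum(samples) / len(samples)
```

An adaptive grid, as in the paper, would add nodes where the linear interpolation error of the CDF exceeds a tolerance; the fixed one-degree grid here is the simplest stand-in.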
Titrating Polyelectrolytes - Variational Calculations and Monte Carlo Simulations
Jönsson, B; Peterson, C; Sommelius, O; Söderberg, B
1995-01-01
Variational methods are used to calculate structural and thermodynamical properties of a titrating polyelectrolyte in a discrete representation. The Coulomb interactions are emulated by harmonic repulsive forces, the force constants being used as variational parameters to minimize the free energy. For the titrating charges, a mean field approach is used. The accuracy is tested against Monte Carlo data for up to 1000 monomers. For an unscreened chain, excellent agreement is obtained for the end-to-end distance and the apparent dissociation constant. With screening, the thermodynamical properties are invariably well described, although the structural agreement deteriorates. A very simple rigid-rod approximation is also considered, giving surprisingly good results for certain properties.
Quantum Monte Carlo calculations with chiral effective field theory interactions
Energy Technology Data Exchange (ETDEWEB)
Tews, Ingo
2015-10-12
The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
Quantum Monte Carlo calculations with chiral effective field theory interactions
International Nuclear Information System (INIS)
The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy compared to other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the 1st and 2nd rows; with n=D,T for the 3rd to 5th rows; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Espel, Federico Puente
The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include thermal-hydraulic feedback in the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly, and in the best case over small spatial zones consisting of a pin cell), which are functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computation technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous-energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. Interest in Monte Carlo methods has increased thanks to improvements in the capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
Monte Carlo calculation of "skyshine" neutron dose from ALS [Advanced Light Source]
International Nuclear Information System (INIS)
This report discusses the following topics on "skyshine" neutron dose from ALS: Sources of radiation; ALS modeling for skyshine calculations; MORSE Monte Carlo; Implementation of MORSE; Results of skyshine calculations from the storage ring; and Comparison of MORSE shielding calculations
International Nuclear Information System (INIS)
A mathematical model of particle transport was built, based on sampling the interaction histories of narrow-beam gamma photons in a medium according to the principles of photon-matter interaction. A computer program was written in LabWindows/CVI to simulate the transport of gamma photons through the medium and to record the photon transmission probability together with the corresponding medium thickness, from which the narrow-beam gamma-ray mass attenuation coefficients of the absorbing medium were calculated. The results show that the Monte Carlo method is feasible for calculating narrow-beam gamma-ray mass attenuation coefficients of absorbing media. (authors)
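The narrow-beam experiment described above has a compact Monte Carlo form: in good geometry, any interaction removes a photon from the beam, so transmission decays as exp(-mu * t), and mu/rho follows from the transmission curve. The attenuation coefficient and density below are illustrative assumptions (roughly water at ~1 MeV), not values from the paper.

```python
import math
import random

random.seed(5)

mu = 0.086          # 1/cm, assumed linear attenuation coefficient
rho = 1.0           # g/cm^3, assumed absorber density
n_photons = 200000

def transmitted_fraction(thickness_cm):
    # A photon survives the narrow beam only if its sampled free path
    # exceeds the absorber thickness; any interaction removes it.
    survived = sum(1 for _ in range(n_photons)
                   if random.expovariate(mu) > thickness_cm)
    return survived / n_photons

t1, t2 = 2.0, 10.0                       # two absorber thicknesses, cm
f1, f2 = transmitted_fraction(t1), transmitted_fraction(t2)

# Fit I = I0 * exp(-mu * t) between the two points to recover mu, then mu/rho.
mu_est = math.log(f1 / f2) / (t2 - t1)
mass_att_coeff = mu_est / rho            # cm^2/g
```

Recording transmission at many thicknesses and fitting the full curve, as the program in the abstract does, reduces the statistical error further.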
Efficient, Automated Monte Carlo Methods for Radiation Transport.
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2008-11-20
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of the forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
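The staged-learning idea above can be illustrated on a toy problem: an integral (standing in for a transport solution) is estimated in fixed-size stages, and the integrand mass learned in stage k reshapes the sampling density for stage k+1, driving the variance down stage by stage. The integrand, bin structure, and probability floor are all invented for this sketch; the real method learns from transport random walks, not a one-dimensional integral.

```python
import math
import random

random.seed(6)

f = math.exp                             # integrand on [0, 1]; exact integral e - 1
n_bins, n_per_stage, n_stages = 10, 5000, 4
bin_prob = [1.0 / n_bins] * n_bins       # stage 0 samples uniformly

variances = []
for stage in range(n_stages):
    total = total_sq = 0.0
    bin_mass = [0.0] * n_bins
    for _ in range(n_per_stage):
        # Sample a bin by its current probability, then a point inside it.
        r, b = random.random(), 0
        while b < n_bins - 1 and r > bin_prob[b]:
            r -= bin_prob[b]
            b += 1
        x = (b + random.random()) / n_bins
        w = f(x) / (bin_prob[b] * n_bins)    # importance weight (unbiased)
        total += w
        total_sq += w * w
        bin_mass[b] += w
    mean = total / n_per_stage
    variances.append(total_sq / n_per_stage - mean * mean)
    # Learn: next stage's bin probabilities follow the estimated integrand
    # mass per bin, with a small floor so no bin is starved of samples.
    s = sum(bin_mass)
    bin_prob = [max(m / s, 0.01) for m in bin_mass]
    s = sum(bin_prob)
    bin_prob = [p / s for p in bin_prob]

estimate = mean                              # estimate from the final stage
```

Because each stage's estimator stays unbiased while its sampling density approaches the integrand's shape, the per-sample variance shrinks with every stage, which is the toy analogue of the geometric convergence claimed above.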
Quantum Transport Calculations Using Periodic Boundary Conditions
Wang, Lin-Wang
2004-01-01
An efficient new method is presented to calculate quantum transport using periodic boundary conditions. This method allows the use of conventional ground-state ab initio programs without major changes. The computational effort is only a few times that of a normal ground-state calculation, which makes accurate quantum transport calculations for large systems possible.
Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Zheng, S.H.; Vergnaud, T.; Nimal, J.C. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Lab. d'Etudes de Protection et de Probabilite]
1998-03-01
Neutron transport calculations need an accurate treatment of cross sections. Two methods (multi-group and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries, well adapted to describe the neutron interaction in the unresolved resonance energy range. Its advantage is to present properly the neutron cross-section fluctuation within a given energy group, allowing correct calculation of the self-shielding effect. Also, this PT cross-section representation is suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at the Commissariat a l'Energie Atomique, and several validation calculations are presented. The PT method is proved to be valid not only in the unresolved resonance range but also in all the other energy ranges.
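The probability table idea above amounts to replacing a single group-averaged cross section with a small distribution of (probability, cross-section) bands, sampled each time a neutron enters the group. The band values below are invented for illustration, not real unresolved-resonance data.

```python
import random

random.seed(7)

# A toy probability table for one energy group: each band gives the
# probability that the total cross section takes the listed value (barns).
prob_table = [
    (0.30,   5.0),    # off-resonance background
    (0.40,  20.0),
    (0.20,  80.0),
    (0.10, 300.0),    # resonance peak
]

def sample_cross_section():
    # Sample a band per group entry, preserving the cross-section
    # fluctuations that drive the self-shielding effect.
    r = random.random()
    for p, sigma in prob_table:
        if r < p:
            return sigma
        r -= p
    return prob_table[-1][1]     # guard against floating-point round-off

samples = [sample_cross_section() for _ in range(100000)]
mean_sigma = sum(samples) / len(samples)
# A flat multigroup treatment would keep only this mean (55.5 barns here)
# and lose the fluctuations the table preserves.
```

Neutrons that happen to sample the resonance band are strongly attenuated while the rest stream through, which is exactly the self-shielding behavior a single averaged cross section cannot reproduce.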
Implict Monte Carlo Radiation Transport Simulations of Four Test Problems
Energy Technology Data Exchange (ETDEWEB)
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
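In the spirit of the small test problems with known answers described above, the simplest example is a purely absorbing slab, whose transmission has the closed form exp(-sigma_t * L); a transport code's Monte Carlo answer must reproduce it within statistics. The cross section and thickness below are arbitrary test inputs.

```python
import math
import random

random.seed(8)

sigma_t = 1.5        # total (pure absorption) cross section, 1/cm
L = 2.0              # slab thickness, cm

# Analytic answer for a purely absorbing slab with a normal beam.
analytic = math.exp(-sigma_t * L)

# Monte Carlo answer: a photon is transmitted only if its sampled free
# path exceeds the slab thickness.
n = 500000
transmitted = sum(1 for _ in range(n) if random.expovariate(sigma_t) > L)
mc_answer = transmitted / n

rel_error = abs(mc_answer - analytic) / analytic
```

Re-running such a check after every code change is a cheap way to catch regressions in the streaming and absorption physics, which is the practice the abstract recommends.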
Assessment of the Influence of Thermal Scattering Library on Monte-Carlo Calculation
Energy Technology Data Exchange (ETDEWEB)
Kim, Gwanyoung; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2014-05-15
Monte Carlo neutron transport codes generally use continuous-energy neutron libraries. Thermal scattering libraries are also used to represent thermal neutron scattering by molecules and crystalline solids completely. Both the neutron libraries and the thermal scattering libraries are generated by NJOY based on ENDF data. While a neutron library can be generated for any specific temperature, a thermal scattering library can be generated only for restricted temperatures when using ENDF data. However, it is possible to generate a thermal scattering library for any specific temperature by using the LEAPR module of NJOY instead of ENDF data. In this study, thermal scattering libraries for hydrogen bound in light water and for carbon bound in graphite are generated using the LEAPR module and ENDF data, and the influence of each library on Monte Carlo calculations is assessed. In addition, the influence of the library temperature on Monte Carlo calculations is assessed. The thermal scattering libraries are generated with the LEAPR module of NJOY, and the NIM program was developed to do this work. These libraries are compared with libraries generated from ENDF thermal scattering data, and the comparison is carried out for H in H{sub 2}O and C in graphite. As a result, similar results are obtained for the libraries generated with the LEAPR module and those generated from ENDF thermal scattering data. It is therefore concluded that generating thermal scattering libraries with the LEAPR module is appropriate and that a library can be generated at a user-specified temperature. It is also assessed how much the temperature of a thermal scattering library influences Monte Carlo calculations.
Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
Lattice Monte Carlo calculations of finite temperature QCD
International Nuclear Information System (INIS)
The author discusses fairly generally the current status of the lattice description of the deconfinement transition and the properties of hadronic matter at high (and low) temperature T. An ultimate goal of these investigations is to learn whether or not QCD actually predicts the naive phase diagram. A more realistic goal, which is at present partially within our grasp, is to compute the static properties of QCD matter at T > 0 from first principles. These include the order of phase transitions, critical temperatures T/sub c/, critical exponents or latent heat, but not dynamical critical properties, such as the behavior of Green's functions near T/sub c/. The author knows of no first-principles discussions of non-equilibrium properties of QCD, which would be required for a description of the experiments. In fact, experimentalists should think of the world studied by lattice or Monte Carlo methods as a little crystal in an oven whose temperature is kept constant in time. The author begins by giving a short description of how we set up the finite-temperature field theory on a lattice to display the important parts of the calculation without going too much into details. Then the author discusses recent progress in our understanding of the glue world - pure gauge theories - and ends by discussing the physically relevant case of fermions and gauge fields
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
A survey of the usual methods for estimating the flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC, for critical systems, and PUNKT, for point-source/point-detector systems, are presented, and problems in applying the codes to practical tasks are discussed. (author)
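The most common of the point-flux methods surveyed above is the next-event (point-detector) estimator: at every collision, the analytic probability of scattering toward the detector and streaming there uncollided is scored. The sketch below assumes isotropic scattering in a homogeneous medium; the cross section and geometry are invented.

```python
import math
import random

random.seed(9)

sigma_t = 1.0                       # total cross section, 1/cm, assumed
detector = (5.0, 0.0, 0.0)          # point-detector location, cm, assumed

def point_flux_contribution(site):
    # Next-event estimator for isotropic scattering:
    #   1/(4*pi)          probability per steradian of scattering toward it,
    #   exp(-sigma_t*r)   uncollided transmission over the distance r,
    #   1/r^2             geometric attenuation of the point detector.
    dx = [detector[i] - site[i] for i in range(3)]
    r = math.sqrt(sum(d * d for d in dx))
    return math.exp(-sigma_t * r) / (4.0 * math.pi * r * r)

# Score the estimator over collision sites sampled near the origin
# (a stand-in for collisions produced by a real random walk).
n = 10000
total = 0.0
for _ in range(n):
    site = tuple(random.gauss(0.0, 0.2) for _ in range(3))
    total += point_flux_contribution(site)
flux_estimate = total / n
```

The 1/r^2 factor makes the estimator's variance unbounded when collisions can occur arbitrarily close to the detector, which is precisely the difficulty the survey's variance-reduction techniques are designed to tame.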
Parallelization of the MCATNP Monte Carlo particle transport code using MPI
International Nuclear Information System (INIS)
A Monte Carlo code for simulating Atmospheric Transport of Neutrons and Photons (MCATNP), used to simulate the ionization effects caused by a high-altitude nuclear detonation (HAND), was parallelized with MPI by adopting a leapfrog random number generator and modifying the original serial code. The parallel and serial results are identical. The speedup increases almost linearly with the number of processors: the parallel efficiency is up to 97% with 16 processors and 94% with 32. The experimental results show that parallelization markedly reduces the computation time of Monte Carlo simulation of HAND ionization effects. (authors)
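The leapfrog partitioning mentioned above can be illustrated with a minimal sketch. The generator below is a simple multiplicative LCG with hypothetical parameters, not the generator used in MCATNP: with P processes, rank r consumes elements r, r+P, r+2P, ... of the single serial stream, so the union of the per-rank streams reproduces the serial sequence exactly — which is why parallel and serial results can be identical.

```python
# Leapfrog random-number partitioning sketch (illustrative LCG parameters,
# not the MCATNP generator). Rank r of P processes sees every P-th number
# of the global stream, starting at element r.

M = 2**31 - 1      # modulus (hypothetical)
A = 16807          # multiplier (hypothetical)

def lcg_stream(seed, n):
    """First n numbers of the serial stream."""
    out, x = [], seed
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

def leapfrog_stream(seed, rank, nprocs, n):
    """The stream as seen by one rank: start at element `rank`, leap by nprocs."""
    a_leap = pow(A, nprocs, M)              # leap multiplier A^P mod M
    x = (pow(A, rank + 1, M) * seed) % M    # element `rank` of the serial stream
    out = [x]
    for _ in range(n - 1):
        x = (a_leap * x) % M
        out.append(x)
    return out

serial = lcg_stream(12345, 12)
streams = [leapfrog_stream(12345, r, 4, 3) for r in range(4)]
# Interleaving the four per-rank streams recovers the serial sequence.
merged = [streams[i % 4][i // 4] for i in range(12)]
assert merged == serial
```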
Monte Carlo Calculations Supporting Patient Plan Verification in Proton Therapy.
Lima, Thiago V M; Dosanjh, Manjit; Ferrari, Alfredo; Molineli, Silvia; Ciocca, Mario; Mairani, Andrea
2016-01-01
Patient treatment plan verification consumes a substantial amount of the quality assurance (QA) resources; this is especially true for Intensity-Modulated Proton Therapy (IMPT). The use of Monte Carlo (MC) simulations to support QA has been widely discussed, and several methods have been proposed. In this paper, we studied an alternative to the approach currently applied clinically at Centro Nazionale di Adroterapia Oncologica (CNAO). We reanalyzed previously published data (Molinelli et al. (1)) on 9 patient plans in which the warning QA threshold of 3% mean dose deviation was crossed. The possibility that these differences between measured and calculated dose were related to dose modeling (Treatment Planning System (TPS) vs. MC), limitations of the dose delivery system, or detector mispositioning was originally explored, but other factors, such as the geometric description of the detectors, were not ruled out. For the purpose of this work, we compared ionization chamber measurements with different MC simulation results. We also studied physical effects introduced by this new approach, for example, inter-detector interference and delta-ray thresholds. Simulations accounting for a detailed geometry are typically superior (statistical difference, p-value around 0.01) to most of the MC simulations used at CNAO (inferior only to the shift approach used). No real improvement was observed from reducing the current delta-ray threshold (100 keV), and no significant interference between ion chambers in the phantom was detected (p-value 0.81). In conclusion, it was observed that the detailed geometrical description improves the agreement between measurement and MC calculations in some cases; in other cases, position uncertainty represents the dominant uncertainty. Inter-chamber disturbance was not detected at therapeutic proton energies, and the results from the current delta
MAMONT program for neutron field calculation by the Monte Carlo method
International Nuclear Information System (INIS)
The MAMONT program (MAthematical MOdelling of Neutron Trajectories), designed for three-dimensional calculation of neutron transport by analogue and non-analogue Monte Carlo methods in the energy range from 15 MeV down to thermal energies, is described. The program is written in FORTRAN and runs on the BESM-6 computer. The group constants of the library module are compiled from the ENDL-83, ENDF/B-4 and JENDL-2 files. Calculations are possible for layered spherical, cylindrical and rectangular configurations. Accumulation and averaging of slowing-down kinetics functionals (averaged logarithmic energy losses, slowing-down time, free paths, number of collisions, age), diffusion parameters, leakage spectra and fluxes, as well as the formation of separate isotopes over zones, are performed in the course of the calculation. 16 tabs
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as a reactivity unit, referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires calculation of the adjoint neutron flux as a weighting function of the phase-space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied with Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
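The k-ratio estimate described above reduces to one line of arithmetic once the two eigenvalues are in hand. The values below are made-up illustrative numbers, not results from the cited work:

```python
# Bretscher's k-ratio estimate of the effective delayed neutron fraction:
#   beta_eff ≈ (k_total - k_prompt) / k_total = 1 - k_prompt / k_total
# k_total comes from a run including delayed neutrons, k_prompt from a run
# with delayed-neutron data removed. Values are illustrative only.

k_total = 1.00000    # hypothetical eigenvalue with delayed neutrons
k_prompt = 0.99350   # hypothetical eigenvalue, prompt neutrons only

beta_eff = 1.0 - k_prompt / k_total
print(f"beta_eff ≈ {beta_eff:.5f} ({beta_eff * 1e5:.0f} pcm)")
```

In practice both eigenvalues carry Monte Carlo statistical uncertainty, and since β_eff is a small difference of near-unity numbers, tight convergence on both runs is needed.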
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, ''Local'' Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine ''local'' biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation
MCNP - transport calculations in ducts using multigroup albedo coefficients
International Nuclear Information System (INIS)
In this work, the use of multigroup albedo coefficients in Monte Carlo calculations of particle reflection and transmission by ducts is investigated. The procedure consists in modifying the MCNP code so that an albedo matrix, computed previously by deterministic methods or Monte Carlo, is introduced into the program to describe particle reflection by a surface. This way it becomes possible to avoid considering particle transport in the duct wall explicitly, reducing the problem to one of transport in the duct interior only and thus significantly reducing the difficulty of the real problem. The probability of particle reflection at the duct wall is given, for each group, as the sum of the albedo coefficients over the final groups. The calculation starts by sampling a source particle and simulating its reflection on the duct wall by sampling a group for the emerging particle. The particle weight is then reduced by the reflection probability. Next, a new direction and trajectory for the particle are selected. Numerical results obtained for the model are compared with results from a discrete ordinates code and with results from Monte Carlo simulations that take particle transport in the wall into account. (author)
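The reflection step described above can be sketched in a few lines. The 3-group albedo matrix below is an assumed illustration, not data from the paper: row g holds the probabilities that a particle striking the wall in group g re-emerges in each group g', and the row sum (less than 1, the remainder being absorption/transmission) is the reflection probability applied as a weight reduction.

```python
import random

# Albedo-based wall reflection sketch (assumed 3-group matrix, illustrative
# values). ALBEDO[g][g2] = probability that a particle hitting the wall in
# group g re-emerges in group g2; the row sum is the reflection probability.
ALBEDO = [
    [0.50, 0.10, 0.05],
    [0.05, 0.45, 0.10],
    [0.02, 0.08, 0.40],
]

def reflect(group, weight, rng=random):
    """Return (new_group, new_weight) after one non-analogue wall reflection."""
    row = ALBEDO[group]
    p_reflect = sum(row)               # total reflection probability for this group
    # Sample the emerging group in proportion to the albedo coefficients.
    xi = rng.random() * p_reflect
    cum = 0.0
    for g_out, a in enumerate(row):
        cum += a
        if xi <= cum:
            break
    # Survival is forced; the weight absorbs the reflection probability.
    return g_out, weight * p_reflect

random.seed(0)
g, w = reflect(0, 1.0)
print(g, w)   # weight ≈ 0.65, the row sum for group 0
```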
A burnup credit calculation methodology for PWR spent fuel transportation
International Nuclear Information System (INIS)
A burnup credit calculation methodology for PWR spent fuel transportation has been developed and validated at CEA/Saclay. To perform the calculation, the spent fuel compositions are first determined by the PEPIN-2 depletion analysis. Secondly, the most important actinides and fission product poisons are automatically selected in PEPIN-2 according to their reactivity worth and the burnup, for criticality considerations. Then the 3D Monte Carlo criticality code TRIMARAN-2 is used to examine the subcriticality. All the resonance self-shielded cross sections used in this calculation system are prepared with the APOLLO-2 lattice cell code. The burnup credit calculation methodology and related PWR spent fuel transportation benchmark results are reported and discussed. (authors)
Multipurpose Monte Carlo simulator for photon transport in turbid media
Guerra, Pedro; Aguirre, Juan; Ortuño, Juan E.; Ledesma-Carbayo, María J.; Vaquero, Juan José; Desco, Manuel; Santos, Andrés
2009-01-01
Monte Carlo methods provide a flexible and rigorous solution to the problem of light transport in turbid media, enabling the treatment of complex geometries for which a closed analytical solution is not feasible. The simulator implements local rules of propagation in the form of probability density functions that depend on the local optical properties of the tissue. This work presents a flexible simulator that can be applied in multiple applications related to optical tomography. In particular...
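The "local rules of propagation" mentioned above start with sampling a free path length between interaction events from the Beer-Lambert exponential law, s = -ln(ξ)/μ_t. A minimal sketch, with illustrative optical coefficients not taken from the paper:

```python
import math
import random

# Free-path sampling for a homogeneous turbid medium (illustrative values).
# Path lengths follow p(s) = mu_t * exp(-mu_t * s), sampled by inversion.

MU_A, MU_S = 0.1, 10.0     # absorption / scattering coefficients (1/mm), assumed
MU_T = MU_A + MU_S         # total interaction coefficient

def sample_step(rng):
    """Sample a free path length s = -ln(xi) / mu_t (inversion method)."""
    # 1 - random() lies in (0, 1], so the log argument is never zero.
    return -math.log(1.0 - rng.random()) / MU_T

rng = random.Random(1)
n = 100_000
mean_step = sum(sample_step(rng) for _ in range(n)) / n
print(mean_step)   # should converge to the mean free path 1 / MU_T
```

At each interaction the photon's weight would then be reduced by the single-scattering albedo μ_s/μ_t and a new direction sampled from the scattering phase function.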
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to study the transport of electron and photon beams in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculations. All principal phenomena occurring when an electron beam penetrates matter are taken into account. An application of the calculations to therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.
Investigation of Nonuniform Dose Voxel Geometry in Monte Carlo Calculations.
Yuan, Jiankui; Chen, Quan; Brindle, James; Zheng, Yiran; Lo, Simon; Sohn, Jason; Wessels, Barry
2015-08-01
The purpose of this work is to investigate the efficacy of using multi-resolution nonuniform dose voxel geometry in Monte Carlo (MC) simulations. An in-house MC code based on the dose planning method MC code was developed in C++ to accommodate the nonuniform dose voxel geometry package, since general-purpose MC codes use their own coupled geometry packages. We devised the package so that the entire calculation volume is first divided into a coarse mesh, and the coarse mesh is then subdivided into nonuniform voxels with variable voxel sizes based on density differences. We name this approach multi-resolution subdivision (MRS). It generates larger voxels in small density gradient regions and smaller voxels in large density gradient regions. To take into account the large dose gradients due to the beam penumbra, the nonuniform voxels can be further split using ray tracing starting from the beam edges. The accuracy of the implementation of the algorithm was verified by comparison with the data published by Rogers and Mohan. The discrepancy was found to be 1% to 2%, with a maximum of 3% at the interfaces. Two clinical cases were used to investigate the efficacy of nonuniform voxel geometry in the MC code. Applying our MRS approach, we started with an initial voxel size of 5 × 5 × 3 mm(3), which was further divided into smaller voxels. The smallest voxel size was 1.25 × 1.25 × 3 mm(3). We found that the simulation time per history for the nonuniform voxels is about 30% to 40% shorter than for the uniform fine voxels (1.25 × 1.25 × 3 mm(3)) while maintaining similar accuracy.
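The MRS idea — refine only where the density varies — can be sketched as a 1D recursion. The threshold, recursion depth, and test densities below are illustrative choices, not values from the paper:

```python
# Multi-resolution subdivision sketch in 1D (illustrative parameters):
# split a voxel only where the density spread across it exceeds a threshold,
# yielding coarse voxels in flat regions and fine voxels near interfaces.

def subdivide(densities, lo, hi, threshold=0.1, depth=0, max_depth=3):
    """Return a list of (lo, hi) index ranges forming the final voxels."""
    if depth >= max_depth or hi - lo <= 1:
        return [(lo, hi)]
    if max(densities[lo:hi]) - min(densities[lo:hi]) < threshold:
        return [(lo, hi)]                 # flat region: keep one large voxel
    mid = (lo + hi) // 2                  # steep region: split and recurse
    return (subdivide(densities, lo, mid, threshold, depth + 1, max_depth)
            + subdivide(densities, mid, hi, threshold, depth + 1, max_depth))

# Water-like slab with a denser insert: fine voxels appear at the interface.
rho = [1.0] * 6 + [1.8] * 10
print(subdivide(rho, 0, 16))   # e.g. [(0, 4), (4, 6), (6, 8), (8, 16)]
```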
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Energy Technology Data Exchange (ETDEWEB)
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of the computer cost (formulated as the product of sample variance and time per particle history, σ_s²τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
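The cost criterion σ_s²τ_p used above is just the reciprocal of the usual Monte Carlo figure of merit, FOM = 1/(σ²τ). A comparison of two candidate splitting configurations (the variances and times below are made-up numbers for illustration):

```python
# The cost analysed above is sigma_s^2 * tau_p: sample variance times time
# per particle history. Splitting pays off when the variance reduction
# outweighs the extra time spent tracking split progeny. Numbers below are
# hypothetical, purely to illustrate the comparison.

def cost(variance, time_per_history):
    """Computer cost sigma^2 * tau; lower is better (reciprocal of the FOM)."""
    return variance * time_per_history

no_split   = cost(variance=4.0e-4, time_per_history=0.8)  # ms per history
with_split = cost(variance=0.9e-4, time_per_history=2.0)  # more time, less variance

print(no_split, with_split)
assert with_split < no_split   # here splitting is worthwhile
```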
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
JCOGIN. A parallel programming infrastructure for Monte Carlo particle transport
International Nuclear Information System (INIS)
The advantages of the Monte Carlo method for reactor analysis are well known, but full-core reactor analysis challenges both computational time and computer memory. Meanwhile, the exponential growth of computer power over the last 10 years is creating a great opportunity for large-scale parallel computing in Monte Carlo full-core reactor analysis. In this paper, a parallel programming infrastructure for Monte Carlo particle transport, named JCOGIN, is introduced, which aims at accelerating the development of Monte Carlo codes for large-scale parallel simulations of the full core. Currently, JCOGIN implements hybrid parallelism, combining spatial decomposition with the traditional particle parallelism, on MPI and OpenMP. Finally, the JMCT code was developed on JCOGIN; it reaches a parallel efficiency of 70% on 20480 cores for a fixed-source problem. Using the hybrid parallelism, a full-core pin-by-pin simulation of the Dayawan reactor was performed, with up to 10 million cells and flux tallies utilizing over 40 GB of memory. (author)
Effects of human model configuration in Monte Carlo calculations on organ doses from CT examinations
International Nuclear Information System (INIS)
A new dosimetry system, WAZA-ARI, is being developed to estimate radiation doses from Computed Tomography (CT) examinations in Japan. The dose estimation in WAZA-ARI utilizes organ dose data derived by Monte Carlo calculations using the Particle and Heavy Ion Transport code System, PHITS. A Japanese adult male phantom, the JM phantom, is adopted as the reference human model in the calculations, because its physique and inner organ masses agree well with the average values for Japanese adult males. On the other hand, each patient has arbitrary physical characteristics. Thus, the effects of human body configuration on organ doses are studied by applying to PHITS another Japanese male model and the reference phantom of the International Commission on Radiological Protection (ICRP). In addition, this paper describes the computation conditions for the three human models, which are constructed in voxel phantom format with different resolutions. (author)
Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method
International Nuclear Information System (INIS)
When fission occurs in the nuclear fuel in a reactor core, it releases numerous neutrons and γ rays, which deposit energy in the fuel components and give rise to effects such as thermal stress and radiation damage that influence the safe operation of a reactor. Using the three-dimensional Monte Carlo transport code MCNP and a continuous cross-section database based on the ENDF/B series, the heat rates of the heat sources in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode are calculated, yielding precise values for the control rods, thimble plugs and new Gd-bearing burnable poison rods, so as to provide a basis for reactor design and safety verification. (authors)
Application of the subgroup method to multigroup Monte Carlo calculations (Application de la méthode des sous-groupes au calcul Monte-Carlo multigroupe)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation; on the other hand, this model preserves the quality of the physical laws present in the ENDF format. Due to its low computational cost, multigroup Monte Carlo is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range permits self-shielding effects to be taken into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
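The delta-tracking rejection technique named in point (2) can be sketched as follows. Geometry and cross sections are illustrative (two 1D slab regions), not data from the thesis: tentative flight distances are sampled against a majorant cross section, and a collision is accepted with probability σ_t(x)/σ_maj, so the particle never needs to compute surface crossings.

```python
import math
import random

# Woodcock delta-tracking sketch (illustrative two-region 1D geometry).
# Flights are sampled with the majorant cross section; collisions are
# accepted with probability sigma_t(x)/sigma_maj, otherwise they are
# "virtual" and the flight continues from the tentative point.

SIGMA = [0.3, 1.0]          # total cross sections per region (1/cm), assumed
SIGMA_MAJ = max(SIGMA)      # majorant cross section

def region(x):
    """Region index: boundary at x = 5 cm (illustrative)."""
    return 0 if x < 5.0 else 1

def fly_to_collision(x, rng):
    """Advance a particle from x to its next *real* collision site."""
    while True:
        x += -math.log(1.0 - rng.random()) / SIGMA_MAJ  # tentative flight
        if rng.random() < SIGMA[region(x)] / SIGMA_MAJ:
            return x                                     # real collision
        # else: virtual collision, keep flying (no boundary bookkeeping)

rng = random.Random(2)
sites = [fly_to_collision(0.0, rng) for _ in range(5)]
print(sites)
```

The attraction for the subgroup method is that the acceptance test only needs the local total cross section, which can be drawn directly from a probability table.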
Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations
Energy Technology Data Exchange (ETDEWEB)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [Nuclear Research Centre "Kurchatov Institute", Moscow (Russian Federation)]
2015-09-15
A burn-up calculation of VVER cores by a Monte Carlo code is a complex process that requires large computational costs. This fact complicates the use of Monte Carlo codes for project and operating calculations. It is proposed to use previously prepared isotopic compositions for Monte Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on the use of a spectral functional and reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). In this work, the multiplication factors and power distributions of an FA and a VVER with infinite height are calculated by the Monte Carlo code MCU using the previously prepared isotopic compositions. The MCU results were compared with the data obtained by the engineering codes.
SFR whole core burnup calculations with TRIPOLI-4 Monte Carlo code
International Nuclear Information System (INIS)
Under the Working Party on Scientific Issues of Reactor Systems (WPRS) of the OECD/NEA, an international collaboration benchmark was recently established on the neutronic analysis of four Sodium-cooled Fast Reactor (SFR) concepts of the Generation-IV nuclear energy systems. As whole-core Monte Carlo depletion calculation is one of the essential challenges of current reactor physics studies, the continuous-energy TRIPOLI-4 Monte Carlo transport code was first used in this study to perform whole-core 3D neutronic calculations for these four SFR cores. Two medium size (1000 MWt) and two large size (3600 MWt) SFRs of GEN-IV systems were analyzed. The medium size SFR concepts are from the Advanced Burner Reactors (ABR). The large size SFR concepts are from the self-breeding reactors. The TRIPOLI-4 depletion calculations were made with MOX and metallic U-Pu-Zr fuels for the ABR cores and with MOX and carbide (U,Pu)C fuels for the self-breeding cores. The whole-core reactor physics parameter calculations were performed for the BOEC and EOEC (Beginning and End of Equilibrium Cycle) conditions. This paper summarizes the TRIPOLI-4 calculation results for Keff, βeff, sodium void worth, Doppler constant, control rod worth, and core power distributions for the BOEC and EOEC conditions. The one-cycle depletion results for the core inventory of U and TRU (Pu, Am, Cm, and Np) are also analyzed, after 328.5 days of depletion irradiation for the ABR cores, 410 days for the large MOX core, and 500 days for the large carbide core. (author)
Application of Monte Carlo code EGS4 to calculate gamma exposure buildup factors
International Nuclear Information System (INIS)
Exposure buildup factors up to 40 mean free paths, for photon energies ranging from 0.015 MeV to 15 MeV, were calculated for ordinary concrete using the Monte Carlo simulation code EGS4. The calculation uses the PHOTX cross-section library, a point isotropic source, an infinite uniform medium model and a particle splitting method, and takes into account bremsstrahlung, the fluorescence effect, and coherent (Rayleigh) scattering. The results were compared with the relevant data and show that the buildup factors calculated by the Monte Carlo code EGS4 are reliable. The Monte Carlo method can be widely used to calculate gamma-ray exposure buildup factors. (authors)
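The buildup factor computed above is the ratio of the total (collided plus uncollided) response to the uncollided response alone. A minimal sketch of the definition for a point isotropic source in an infinite medium; the attenuation coefficient and the "tallied" total flux are assumed illustrative numbers, not EGS4 results:

```python
import math

# Buildup factor B(mu*r) = total response / uncollided response, B >= 1.
# The uncollided term for a unit point isotropic source in an infinite
# medium is exp(-mu*r) / (4*pi*r^2). MU and phi_total are illustrative.

MU = 0.06   # linear attenuation coefficient (1/cm), assumed value

def uncollided_flux(r, mu=MU, s0=1.0):
    """Uncollided flux at radius r from a point isotropic source of strength s0."""
    return s0 * math.exp(-mu * r) / (4.0 * math.pi * r * r)

def buildup_factor(total_flux, r, mu=MU):
    """Ratio of the tallied total flux to the analytic uncollided flux."""
    return total_flux / uncollided_flux(r, mu)

phi_total = 3.2e-6          # hypothetical Monte Carlo tally of total flux
r = 100.0                   # 6 mean free paths at MU = 0.06
print(buildup_factor(phi_total, r))
```

In a Monte Carlo run the total flux comes from the tally while the uncollided part is known analytically, which is why deep-penetration buildup calculations lean on variance reduction such as the particle splitting mentioned in the abstract.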
Analytical band Monte Carlo analysis of electron transport in silicene
Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.
2016-06-01
An analytical band Monte Carlo (AMC) with linear energy band dispersion has been developed to study the electron transport in suspended silicene and silicene on aluminium oxide (Al2O3) substrate. We have calibrated our model against the full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we discover that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1 and thereafter it is less sensitive to the changes of charge impurity in the substrate and surface optical phonon. We also found that further reduction of mobility to ∼100 cm2 V‑1 s‑1 as experimentally demonstrated by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of Fermi velocity due to interaction with Al2O3 substrate.
Cross sections needed for investigations into track phenomena and Monte-Carlo calculations
International Nuclear Information System (INIS)
Investigations into basic radiation action mechanisms as well as into applied radiation transport problems (e.g. electron microscopy) greatly benefit from detailed computer simulations of charged particle track structures in matter. The first and in fact most important and most difficult step in any such calculation is the derivation of reliable cross sections for the most relevant interaction processes in the material(s) under consideration. The second step in radiation transport calculations is the testing of results or intermediate results for quantitative or qualitative consistency with other experimental or theoretical information (e.g. yields, backscatter factors). This paper discusses the types of the most important collision cross sections for studies on track phenomena by detailed Monte-Carlo calculations, the necessary accuracy of such data and various means of consistency checks of calculated results. This will be done mainly with examples taken from radiation physics as applied to dosimetric and biological problems (i.e. to gaseous and condensed targets). 12 references, 8 figures
Improved Monte Carlo model for multiple scattering calculations
Institute of Scientific and Technical Information of China (English)
Weiwei Cai; Lin Ma
2012-01-01
The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.
Variational Monte Carlo Calculations of Energy per Particle in Nuclear Matter
Manisa, K.
2004-01-01
In this paper, symmetrical nuclear matter has been investigated. Total, kinetic, and potential energies per particle were obtained for nuclear matter by the variational Monte Carlo method. We have observed that the results are in good agreement with those obtained by various authors who used different potentials and techniques.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-01-01
A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...
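The pre-sorting behind the phase-space-let (PSL) method can be sketched as bucketing particles by type, energy bin and position bin. The record fields and bin widths below are assumptions for illustration, not the gDPM file format:

```python
from collections import defaultdict

# Phase-space-let (PSL) bucketing sketch: group phase-space particles by
# (type, energy bin, position bin) so a GPU can transport similar particles
# together. Field names and bin widths are illustrative assumptions.

def psl_key(p, e_bin=1.0, x_bin=2.0):
    """Bucket key: particle type, energy bin (MeV), lateral position bin (cm)."""
    return (p["type"], int(p["E"] // e_bin), int(p["x"] // x_bin))

def build_psls(particles):
    buckets = defaultdict(list)
    for p in particles:
        buckets[psl_key(p)].append(p)
    return dict(buckets)

phsp = [
    {"type": "photon",   "E": 1.2, "x": 0.5},
    {"type": "photon",   "E": 1.7, "x": 1.0},   # same bucket as the first
    {"type": "electron", "E": 0.4, "x": 0.5},
    {"type": "photon",   "E": 2.3, "x": 0.7},
]
psls = build_psls(phsp)
print(len(psls))   # 3 buckets: two photon energy bins plus one electron bin
```

Transporting one bucket at a time keeps GPU threads on the same physics code path and similar step lengths, which is the efficiency argument made in the abstract.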
GPU-based fast Monte Carlo dose calculation for proton therapy
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.
2012-12-01
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, its long computation time has limited routine clinical application. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton-nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreement between gPMC and TOPAS/Geant4 is observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% of maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ~1% relative statistical uncertainty, depending on the phantom and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
International Nuclear Information System (INIS)
The perturbation source method may be a powerful Monte Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method
International Nuclear Information System (INIS)
The present report describes a computer code, DEEP, which calculates organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into DEEP to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user can choose the phantom sex: male, female or unisex. The phantom can wear personal dosimeters, whose location and dimensions the user can specify. This document includes instructions and a sample problem for the code as well as a general description of the dose calculation, the human phantom and the computer code. (author)
Monte Carlo calculations for r-process nucleosynthesis
Energy Technology Data Exchange (ETDEWEB)
Mumpower, Matthew Ryan [Los Alamos National Laboratory
2015-11-12
A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.
Calculation of kinetic parameters for mixed TRIGA cores with Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Snoj, Luka, E-mail: luka.snoj@ijs.s [Reactor Physics Division, Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kavcic, Andrej [Nuclear Training Centre, Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Zerovnik, Gasper; Ravnik, Matjaz [Reactor Physics Division, Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia)
2010-02-15
Modern Monte Carlo transport codes in combination with fast computer clusters enable very accurate calculations of the most important reactor kinetic parameters, such as the effective delayed neutron fraction, βeff, and the mean neutron generation time, Λ. We calculate βeff and Λ for various realistic and hypothetical annular TRIGA Mark II cores with different types and amounts of fuel. It is observed that the effective delayed neutron fraction strongly depends on the number of fuel elements in the core, i.e. on the core size: for 12 wt.% uranium standard fuel with 20% enrichment, βeff varies from 0.0080 for a small core (43 fuel rods) to 0.0070 for a full core (90 fuel rods). The calculated value of βeff is also found to depend strongly on the nuclear data set used. The prompt neutron lifetime mainly depends on the amount (due to either content or enrichment) of 235U in the fuel, as it is approximately inversely proportional to the average absorption cross-section. It varies from 28 μs for a core fuelled with 30 wt.% uranium content fuel to 48 μs for a core fuelled with 8.5 wt.% uranium content LEU fuel. A description of the calculation method and detailed results are presented in the paper.
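One common way Monte Carlo codes estimate the effective delayed neutron fraction is the prompt method: run a criticality calculation with and without delayed neutrons and compare the multiplication factors. A minimal sketch follows; the k values in the usage example are illustrative, and the abstract does not state which estimator the authors used:

```python
def beta_eff_prompt_method(k_total, k_prompt):
    """Estimate the effective delayed neutron fraction from two
    criticality runs: one with delayed neutrons included (k_total)
    and one with only prompt neutrons (k_prompt).
    The 'prompt method' gives beta_eff ~= 1 - k_prompt / k_total."""
    return 1.0 - k_prompt / k_total

# Illustrative values: a critical core whose prompt-only run gives 0.992
beta = beta_eff_prompt_method(k_total=1.0000, k_prompt=0.9920)
```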
Overview and applications of the Monte Carlo radiation transport kit at LLNL
International Nuclear Information System (INIS)
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries, using the appropriate level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions
Development of burnup calculation function in reactor Monte Carlo code RMC
International Nuclear Information System (INIS)
This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code developed by the Reactor Engineering Analysis Laboratory (REAL) at Tsinghua University, China. Unlike most existing MC depletion codes, which couple the depletion module explicitly, RMC incorporates ORIGEN 2.1 in an implicit way. Different burn-step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy under large burnup step sizes. RMC employs a spectrum-based method of tallying one-group cross sections, which can considerably save computational time with negligible accuracy loss. According to the validation results of benchmarks and examples, the burnup function of RMC performs well in both accuracy and efficiency. (authors)
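The predictor-corrector strategy mentioned above can be sketched for a single depleting nuclide. The code below is a schematic illustration, not RMC's algorithm: the flux is a stand-in function of the composition (representing a transport solve), and a simple exponential replaces a full ORIGEN depletion solve:

```python
import math

def pc_burnup_step(n0, dt, flux_of_n, sigma):
    """One predictor-corrector depletion step for a single nuclide with
    dN/dt = -sigma * phi(N) * N, where the flux phi depends on the
    current composition N (a stand-in for a transport calculation).
    Predictor: deplete with the beginning-of-step flux. Corrector:
    redo the step with the flux re-evaluated at the predicted
    end-of-step state, then average the two end states."""
    phi0 = flux_of_n(n0)
    n_pred = n0 * math.exp(-sigma * phi0 * dt)   # predictor step
    phi1 = flux_of_n(n_pred)                     # flux from predicted EOS state
    n_corr = n0 * math.exp(-sigma * phi1 * dt)   # corrector step
    return 0.5 * (n_pred + n_corr)
```

With a composition-independent flux the predictor and corrector coincide and the step is exact for this single-nuclide model.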
Energy Technology Data Exchange (ETDEWEB)
Cobut, V.; Frongillo, Y.; Jay-Gerin, J.-P. (Sherbrooke Univ., PQ (Canada). Faculte de Medecine); Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1992-12-01
An energy spectrum of 'subexcitation electrons' produced in liquid water by electrons with initial energies of a few keV is obtained by using a Monte Carlo transport simulation. It is found that the introduction of vibrational-excitation cross sections leads to the appearance of a sharp peak in the probability density function near the electronic-excitation threshold. Electrons contributing to this peak are shown to be more naturally described if a novel energy spectrum, which we propose to call the 'vibrationally-relaxing electron' spectrum, is introduced. The corresponding distribution function is presented, and an empirical expression for it is given. (author)
Large-scale Monte Carlo calculations with thermal-hydraulic feedback
International Nuclear Information System (INIS)
Monte Carlo based codes provide the most accurate solution of the particle transport problem. Individual particle trajectories are followed, and the interaction physics is simulated using detailed modeling of the physical reactions. The calculations are usually done using uniform temperature and density distributions. This is a significant approximation and leads to a significantly distorted solution when applied to hot full power conditions. In this paper a method for introducing the thermal-hydraulic feedback by dynamic material distributions is introduced. The global variance reduction technique has been used to optimize the power tallying, and the fission source convergence was accelerated by applying Wielandt's acceleration method. Since the aim of this work is to solve coupled neutronic/thermal-hydraulic problems, a convergence acceleration strategy based on stochastic approximation is proposed. The coupled system was applied to a quarter PWR core at pin and sub-channel level resolution. (author)
Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
International Nuclear Information System (INIS)
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gasses led to the establishment of good cross-section sets for positron interaction with gasses commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
Monte Carlo solution of a semi-discrete transport equation
International Nuclear Information System (INIS)
The authors present the S∞ method, a hybrid neutron transport method in which Monte Carlo particles traverse discrete space. The goal of any deterministic/stochastic hybrid method is to couple selected features of each method in the hope of producing a better one. The S∞ method has the features of the lumped, linear-discontinuous (LLD) spatial discretization, yet it has no ray effects because of the continuous angular variable. They derive the S∞ method for the steady-state, mono-energetic transport equation in one-dimensional slab geometry with isotropic scattering and an isotropic internal source. They demonstrate the viability of the S∞ method by comparing their results favorably to analytic and deterministic results
Application of Monte Carlo method for dose calculation in thyroid follicle
International Nuclear Information System (INIS)
The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport and can calculate energy deposition in models of organs, tissues and cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for Auger electrons, internal conversion electrons and beta particles emitted by iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135). The simulations with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines; for follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained with the codes MCNP4C, EPOTRAN and EGS4 and with deterministic methods. (author)
Monte Carlo simulation of transport from an electrothermal vaporizer
Energy Technology Data Exchange (ETDEWEB)
Holcombe, James A. [Department of Chemistry and Biochemistry, University of Texas at Austin, Austin, TX 78712 (United States)]. E-mail: holcombe@mail.utexas.edu; Ertas, Gulay [Department of Chemistry and Biochemistry, University of Texas at Austin, Austin, TX 78712 (United States)
2006-06-15
Monte Carlo simulations were developed to elucidate the time and spatial distribution of analyte during the transport process from an electrothermal vaporizer to an inductively coupled plasma. A time-of-flight mass spectrometer was employed to collect experimental data that were compared with the simulated transient signals. Consideration was given to analyte transport as gaseous species as well as aerosol particles. In the case of aerosols, the simulation assumed formation of 5 nm particles and used the Einstein-Stokes equation to estimate the aerosol's diffusion coefficient, which was ca. 1% of the value for free atom diffusion. Desorption conditions for Cu that had been previously elucidated for electrothermal atomic absorption spectrometry were employed for the release processes from the electrothermal vaporizer. The primary distinguishing feature in the output signal to differentiate between gas and aerosol transport was a pronounced, long-lived signal after the transient peak if aerosols were transported. This characteristic was supported by independent atomic absorption measurements using a heated (or unheated) quartz T-tube with electrothermal vaporizer introduction. Time and spatial distributions of particles within the transport system are presented.
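The Einstein-Stokes (Stokes-Einstein) estimate used for the aerosol's diffusion coefficient is straightforward to reproduce. In the sketch below, the temperature and carrier-gas viscosity are illustrative assumptions, not values taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_d(temp_k, viscosity_pa_s, diameter_m):
    """Stokes-Einstein diffusion coefficient D = kT / (3*pi*eta*d) for
    a spherical particle of diameter d in a fluid of viscosity eta."""
    return K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * diameter_m)

# Illustrative values only: a 5 nm particle in argon at 300 K with
# eta ~ 2.3e-5 Pa*s gives D on the order of 1e-9 m^2/s, far below
# typical free-atom diffusion coefficients in a carrier gas.
d_aerosol = stokes_einstein_d(300.0, 2.3e-5, 5e-9)
```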
Monte Carlo calculation of 60Co γ-ray's albedo-dose rate from the air
International Nuclear Information System (INIS)
The Monte Carlo calculation of the 60Co γ-ray albedo-dose rate from the air is reported. A formula is presented with which the relations of the albedo-dose rate to several parameters are simulated and fitted
Monte Carlo calculations of fast effects in uranium graphite lattices
International Nuclear Information System (INIS)
Details are given of the results of a series of computations of fast neutron effects in natural uranium metal/graphite cells. The computations were performed using the Monte Carlo code SPEC. It is shown that neutron capture in 238U is conveniently discussed in terms of a capture escape probability ζ as well as the conventional probability p. The latter is associated with the slowing-down flux and has the classical exponential dependence on the fuel-to-moderator volume ratio, whilst the former is identified with the component of the neutron flux above 1/E. (author)
Description of a stable scheme for steady-state coupled Monte Carlo-thermal-hydraulic calculations
Dufek, Jan; Eduard Hoogenboom, J.
2014-01-01
We provide a detailed description of a numerically stable and efficient coupling scheme for steady-state Monte Carlo neutronic calculations with thermal-hydraulic feedback. While we have previously derived and published the stochastic approximation based method for coupling the Monte Carlo criticality and thermal-hydraulic calculations, its possible implementation has not been described in a step-by-step manner. As the simple description of the coupling scheme was repeatedly requested from us...
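The stochastic-approximation coupling the abstract describes amounts to relaxing each new (noisy) Monte Carlo solution into a running average. A scalar toy version follows; the contraction used as a stand-in for the transport/thermal-hydraulic solve is a hypothetical example, not the authors' scheme:

```python
def coupled_iteration(mc_solve, n_iter=2000):
    """Stochastic-approximation relaxation for a coupled MC/TH
    iteration: the running solution is the sample mean of all noisy
    MC solutions obtained so far,
        phi_{n+1} = phi_n + (new - phi_n) / (n + 1).
    `mc_solve(phi)` is a stand-in returning a (possibly noisy)
    transport solution for the feedback state phi."""
    phi = mc_solve(0.0)
    for n in range(1, n_iter):
        phi += (mc_solve(phi) - phi) / (n + 1)
    return phi
```

With a deterministic contraction such as f(phi) = 0.5*phi + 1 (fixed point 2), the iterate slowly approaches the coupled fixed point; the diminishing step sizes are what suppress the statistical noise of a real Monte Carlo solver.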
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy
Energy Technology Data Exchange (ETDEWEB)
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)
2008-09-07
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, due both to dose degradation and to overall differences in range prediction caused by bony anatomy in the beam path. Further, the Monte Carlo code reports dose-to-tissue, whereas the planning system reports dose-to-water. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical
Neutron and gamma ray transport calculations in shielding system
Energy Technology Data Exchange (ETDEWEB)
Masukawa, Fumihiro; Sakamoto, Hiroki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In the radiation shields of nuclear facilities, penetrations of various kinds and irregular shapes are made for reasons of operation, control and others. These penetrations and gaps are filled with air or substances of relatively low shielding performance, and radiation leaks out through them; this is called streaming. Calculation techniques for shielding design or analysis of streaming problems include simplified evaluation, transport calculation and the Monte Carlo method. In this report, example calculations with the Monte Carlo method, as represented by the MCNP code, are discussed. A number of variance reduction techniques which seem effective for the analysis of streaming problems were tried. To investigate the applicability of the MCNP code to streaming analysis, the report describes the objects of analysis (concrete walls without a hole and with a horizontal, oblique or bent oblique hole), the analysis procedure, the composition of the concrete, the dose-equivalent conversion coefficients, and the results. As the variance reduction technique, cell importance was adopted. (K.I.)
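Cell-importance variance reduction, adopted in the analysis above, splits or roulettes particles at cell boundaries so that the simulated population grows toward the region of interest while the expected weight is conserved. A minimal sketch in the style of MCNP cell importances (an illustration, not MCNP's actual implementation):

```python
import random

def apply_cell_importance(weight, imp_from, imp_to, rng=random.random):
    """Splitting / Russian roulette at a cell boundary: moving into a
    cell of higher importance splits the particle into copies of
    reduced weight; moving to lower importance plays roulette,
    boosting the weight of survivors. Returns the list of weights of
    the particles that continue (expected total weight is conserved)."""
    ratio = imp_to / imp_from
    if ratio >= 1.0:
        n = int(ratio)
        # split into n or n+1 copies so the expected count equals ratio
        if rng() < ratio - n:
            n += 1
        return [weight * imp_from / imp_to] * n
    # roulette: survive with probability `ratio`, weight boosted 1/ratio
    if rng() < ratio:
        return [weight / ratio]
    return []
```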
Monte Carlo study of electron transport in monolayer silicene
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2016-11-01
Electron mobility and diffusion coefficients in monolayer silicene are calculated by Monte Carlo simulations using a simplified band structure with linear energy bands. Results demonstrate reasonable agreement with the full-band Monte Carlo method in low applied electric field conditions. Negative differential resistivity is observed and an explanation of the origin of this effect is proposed. Electron mobility and diffusion coefficients are studied in low applied electric field conditions, and we demonstrate that a comparison of these parameter values can provide a good check that the calculation is correct. Low-field mobility in silicene exhibits a T^-3 temperature dependence for nondegenerate electron gas conditions and T^-1 for higher electron concentrations, when degenerate conditions are imposed. It is demonstrated that to explain the relation between mobility and temperature in a nondegenerate electron gas the linearity of the band structure has to be taken into account. It is also found that electron-electron scattering only slightly modifies the low-field electron mobility in degenerate electron gas conditions.
Deep-penetration calculation for the ISIS target station shielding using the MARS Monte Carlo code
International Nuclear Information System (INIS)
A calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code, for comparison with the shielding data measured in 1998 at the ISIS spallation neutron source facility. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation with good statistics, the following three techniques were used. First, the geometry of the bulk shield was three-dimensionally divided into several layers of about 50-cm thickness, and a step-by-step calculation was carried out to multiply the number of penetrated particles at the boundaries between the layers. Second, the source particles in the layers were divided into two parts to maintain the statistical balance of the spatial-flux distribution. Third, only high-energy particles above 20 MeV were transported up to approximately 1 m before the region for the benchmark calculation. Finally, the energy spectra of neutrons behind the very thick shield were calculated down to thermal energy with good statistics, and typically agree with the experimental data within a factor of two over a broad energy range. The 12C(n,2n)11C reaction rates behind the bulk shield were also calculated, and agree with the experimental data typically within 60%. These results are quite impressive in calculation accuracy for a deep-penetration problem. In this report, the calculation conditions, geometry and variance reduction techniques used in the deep-penetration calculation with the MARS14 code are clarified, and several subroutines of MARS14 which were used in our calculation are also given in the appendix. The numerical data of the calculated neutron energy spectra, reaction rates, dose rates and their C/E (Calculation/Experiment) values are also summarized.
Monte Carlo calculation of chloride diffusion in concrete
International Nuclear Information System (INIS)
The coefficient of chloride diffusion is calculated by applying Fick's second law of diffusion to a chloride concentration profile. The signal strength for various chlorine gamma-ray energies was then calculated at the detector of the portable D-D neutron generator based PGNAA setup. (author)
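Extracting a diffusion coefficient from a concentration profile via Fick's second law typically means fitting the erfc solution for a constant surface concentration. A minimal sketch follows (a grid-search fit; the profile values, units and time in the usage are illustrative assumptions, not data from the paper):

```python
import math

def erfc_profile(x, cs, d_coeff, t):
    """Fick's-second-law solution for a constant surface concentration:
    C(x, t) = Cs * erfc(x / (2*sqrt(D*t)))."""
    return cs * math.erfc(x / (2.0 * math.sqrt(d_coeff * t)))

def fit_d(xs, c_meas, cs_surface, t, d_grid):
    """Least-squares grid search for the diffusion coefficient that
    best reproduces a measured chloride profile."""
    def sse(d):
        return sum((erfc_profile(x, cs_surface, d, t) - c) ** 2
                   for x, c in zip(xs, c_meas))
    return min(d_grid, key=sse)
```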
International Nuclear Information System (INIS)
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay, which aims at producing neutron beams for experiments. It is a pool-type reactor (heavy water), and the core is cooled by light water; its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27 x 27 cm2. It is loaded with eight fuel assemblies characterized by various numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the near absence of thermal feedback at nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.
Verification of Monte Carlo transport codes FLUKA, Mars and Shield
International Nuclear Information System (INIS)
The present study is a continuation of the project 'Verification of Monte Carlo Transport Codes', which is running at GSI as a part of activation studies of FAIR relevant materials. It includes two parts: verification of the stopping modules of FLUKA, MARS and SHIELD-A (with the ATIMA stopping module), and verification of their isotope production modules. The first part is based on measurements of the energy deposition function of uranium ions in copper and stainless steel. The irradiation was done at 500 MeV/u and 950 MeV/u; the experiment was carried out at GSI from September 2004 until May 2005. The second part is based on gamma-activation studies of an aluminium target irradiated with an argon beam of 500 MeV/u in August 2009. Experimental depth profiling of the residual activity of the target is compared with the simulations. (authors)
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than could fit in the memory of a single processor and must therefore be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
Parallel processing of Monte Carlo code MCNP for particle transport problem
Energy Technology Data Exchange (ETDEWEB)
Higuchi, Kenji; Kawasaki, Takuji
1996-06-01
Monte Carlo (MC) codes for photon and neutron transport problems can be vectorized or parallelized by exploiting the independence of the calculation for each particle. The applicability of an existing MC code to parallel processing is discussed. For the performance evaluation we used both a vector-parallel processor and a scalar-parallel processor: (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on a Paragon XP/S with 256 processors. This report describes the methodology and results for parallel processing on these two types of parallel or distributed-memory computers. In addition, we evaluate the parallel programming environments of the computers used in the present work, as part of the work developing the STA (Seamless Thinking Aid) Basic Software. (author)
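The near-linear speedups these particle-parallel runs aim for are bounded by Amdahl's law: only the independent particle histories parallelize perfectly, while input processing and tally collection stay serial. A minimal sketch (the function name and the example fractions are illustrative, not taken from the report):

```python
def amdahl_speedup(n_proc: int, parallel_fraction: float) -> float:
    """Upper bound on parallel speedup when parallel_fraction of the
    work (here, the independent particle histories) scales perfectly
    and the rest (input processing, tally reduction) remains serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_proc)
```

Even a 5% serial fraction caps a 256-processor run such as the Paragon XP/S case at under a 19x speedup, which is why Monte Carlo codes keep the per-history serial bookkeeping minimal.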
International Nuclear Information System (INIS)
We have investigated Monte Carlo schemes for analyzing particle transport through media with exponentially varying time-dependent cross sections. For such media, the cross sections are represented in the form Σ(t) = Σ_0 e^(-at) (1), or equivalently as Σ(x) = Σ_0 e^(-bx) (2), where b = a/v and v is the particle speed. For the following discussion, the parameters a and b may be either positive, for exponentially decreasing cross sections, or negative, for exponentially increasing cross sections. For most time-dependent Monte Carlo applications, the time and spatial variations of the cross-section data are handled by a stepwise procedure: holding the cross sections constant for each region over a small time interval Δt, performing the Monte Carlo random walk over the interval Δt, updating the cross sections, and then repeating for a series of time intervals. Continuously varying spatial- or time-dependent cross sections can be treated in a rigorous Monte Carlo fashion using delta-tracking, but inefficiencies may arise if the range of cross-section variation is large. In this paper, we present a new method for sampling collision distances directly for cross sections that vary exponentially in space or time. The method is exact and efficient and has direct application to Monte Carlo radiation transport methods. To verify that the probability density function (PDF) is correct and that the random-sampling procedure yields correct results, numerical experiments were performed using a one-dimensional Monte Carlo code. The physical problem consisted of a beam source impinging on a purely absorbing infinite slab, with a slab thickness of 1 cm and Σ_0 = 1 cm⁻¹. Monte Carlo calculations with 10 000 particles were run for a range of the exponential parameter b from -5 to +20 cm⁻¹. Two separate Monte Carlo calculations were run for each choice of b, a continuously varying case using the random-sampling procedures described earlier, and a 'conventional' case where the
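The direct-sampling idea can be sketched by inverting the optical-depth CDF of Σ(x) = Σ_0 e^(-bx). This is a reconstruction of the approach the abstract describes, not the authors' code; the names and the b→0 tolerance are choices of this sketch:

```python
import math
import random

def sample_collision_distance(sigma0, b, rng=random.random):
    """Sample a free-flight distance for Sigma(x) = sigma0*exp(-b*x)
    by inverting the optical depth tau(s) = (sigma0/b)*(1 - exp(-b*s)).
    For b > 0 the total optical depth is bounded by sigma0/b, so the
    particle may stream out without colliding; math.inf is returned then."""
    tau = -math.log(rng())          # sampled optical depth, Exp(1)
    if abs(b) < 1e-12:              # constant cross-section limit
        return tau / sigma0
    arg = 1.0 - b * tau / sigma0
    if arg <= 0.0:                  # only reachable for b > 0: no collision
        return math.inf
    return -math.log(arg) / b
```

For b < 0 the logarithm's argument exceeds 1 and a finite flight always results, consistent with an exponentially increasing cross section.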
International Nuclear Information System (INIS)
1 - Description of problem or function: KENO is a multigroup Monte Carlo criticality code containing a special geometry package which allows easy description of systems composed of cylinders, spheres, and cuboids (rectangular parallelepipeds) arranged in any order, with only one restriction: they cannot be rotated or translated. Each geometrical region must be described as completely enclosing all regions interior to it. For systems not describable using this special geometry package, the program can use the generalized geometry package (GEOM) developed for the O5R Monte Carlo code. It allows any system that can be described by a collection of planes and/or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. The entire problem can be mocked up in generalized geometry, or one generalized geometry unit or box type can be used alone or in combination with standard KENO units or box types. Rectangular arrays of fissile units are allowed with or without external reflector regions. Output from KENO consists of keff for the system plus an estimate of its standard deviation, and the leakage, absorption, and fissions for each energy group plus the totals for all groups. Flux as a function of energy group and region, and fission densities as a function of region, are optional output. KENO-4: added features include a neutron balance edit, PICTURE routines to check the input geometry, and a random number sequencing subroutine written in FORTRAN-4. 2 - Method of solution: The scattering treatment used in KENO assumes that the differential neutron scattering cross section can be represented by a P1 Legendre polynomial. Analog absorption of neutrons is not allowed in KENO. Instead, at each collision point of a neutron tracking history the weight of the neutron is reduced by the absorption probability. When the neutron weight has been reduced below a specified point for the region in which the collision occurs, Russian roulette is played to determine if the
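The weight-reduction and Russian-roulette scheme described above can be sketched as follows; the cutoff and survival weights are illustrative parameters, not KENO's actual defaults:

```python
import random

def collide(weight, absorption_prob, w_cutoff=0.1, w_survival=0.5,
            rng=random.random):
    """One collision in implicit-capture tracking: reduce the particle
    weight by the absorption probability instead of terminating the
    history, then play Russian roulette once the weight falls below
    w_cutoff. Returns the new weight; 0.0 means the history ends."""
    weight *= (1.0 - absorption_prob)       # implicit capture
    if weight < w_cutoff:
        if rng() < weight / w_survival:     # survive with fair odds...
            weight = w_survival             # ...at the survival weight
        else:
            weight = 0.0                    # kill the history
    return weight
```

The game is unbiased: the expected weight after roulette, (w/w_survival)*w_survival, equals the weight before it.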
Jeraj, Robert; Keall, Paul
2000-12-01
The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O' Brien, M J; Taylor, J M
2007-03-08
The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability-of-initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of an in-line visualization capability.
Verification and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A
2004-12-09
Verification and Validation (V&V) is a critical phase in the development cycle of any scientific code. The aim of the V&V process is to determine whether or not the code fulfills and complies with the requirements that were defined prior to the start of the development process. While code V&V can take many forms, this paper concentrates on validation of the results obtained from a modern code against those produced by a validated, legacy code. In particular, the neutron transport capabilities of the modern Monte Carlo code MERCURY are validated against those in the legacy Monte Carlo code TART. The results from each code are compared for a series of basic transport and criticality calculations which are designed to check a variety of code modules. These include the definition of the problem geometry, particle tracking, collisional kinematics, sampling of secondary particle distributions, and nuclear data. The metrics that form the basis for comparison of the codes include both integral quantities and particle spectra. The use of integral results, such as eigenvalues obtained from criticality calculations, is shown to be necessary, but not sufficient, for a comprehensive validation of the code. This process has uncovered problems in both the transport code and the nuclear data processing codes which have since been rectified.
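A common integral metric in such code-to-code comparisons is the eigenvalue difference expressed in units of the combined statistical uncertainty; the following is a generic sketch, not part of the MERCURY or TART tool chain:

```python
import math

def keff_sigma_distance(k_a, sigma_a, k_b, sigma_b):
    """Difference between two codes' criticality eigenvalues in units
    of the combined 1-sigma statistical uncertainty; values within
    roughly 2-3 are usually read as statistical agreement."""
    return abs(k_a - k_b) / math.sqrt(sigma_a ** 2 + sigma_b ** 2)
```

As the abstract notes, agreement on such integral quantities is necessary but not sufficient; spectral comparisons are still required.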
HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation
International Nuclear Information System (INIS)
The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimum way of making use of the Monte Carlo efficiency-increasing options included in the program.
Update on the development and validation of MERCURY: a modern, Monte Carlo particle transport code
Energy Technology Data Exchange (ETDEWEB)
Procassini, R.; Taylor, J.; McKinley, S.; Greenman, G.; Cullen, D.; O'Brien, M.; Beck, B.; Hagmann, C. [Lawrence Livermore National Lab., Livermore, CA (United States)]
2005-07-01
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-dimensional combinatorial geometry (CG), 1-dimensional radial meshes, 2-dimensional quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model for free-gas and bound S(α,β) molecular scattering, as well as support for continuous energy cross sections. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base. (authors)
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O' Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous energy cross sections and S(α,β) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.
Calculation of effective delayed neutron fraction with modified library of Monte Carlo code
International Nuclear Information System (INIS)
Highlights: ► We propose a new Monte Carlo method to calculate the effective delayed neutron fraction by changing the library. ► We study the stability of the method: when the numbers of particles and cycles are sufficiently large, the stability is very good. ► The final result is determined so as to make the deviation least. ► We verify the method on several benchmarks, with very good results. - Abstract: A new Monte Carlo method is proposed to calculate the effective delayed neutron fraction βeff. Based on perturbation theory, βeff is calculated with a modified library of a Monte Carlo code. To verify the proposed method, calculations are performed on several benchmarks. The error of the method is analyzed and a way to reduce the error is proposed. The results are in good agreement with the reference data
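In its simplest prompt-k form, the modified-library idea compares a run using the full library with one whose delayed-neutron data are removed. A sketch with naive uncorrelated error propagation (the paper's estimator is chosen to minimize the deviation, which this sketch does not attempt):

```python
import math

def beta_eff(k_total, sigma_total, k_prompt, sigma_prompt):
    """Prompt-k estimate beta_eff ~ 1 - k_prompt/k_total from two
    criticality runs: one with the full library, one with delayed
    neutrons removed. Returns (beta_eff, 1-sigma uncertainty) under
    the assumption that the two runs are uncorrelated."""
    ratio = k_prompt / k_total
    rel = math.sqrt((sigma_prompt / k_prompt) ** 2
                    + (sigma_total / k_total) ** 2)
    return 1.0 - ratio, ratio * rel
```

Because βeff is a small difference of near-unity eigenvalues, the relative uncertainty on βeff is much larger than that on either k, which is why the stability study in the highlights matters.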
Magnetism of iron and nickel from rotationally invariant Hirsch-Fye quantum Monte Carlo calculations
Belozerov, A. S.; Leonov, I.; Anisimov, V. I.
2013-01-01
We present a rotationally invariant Hirsch-Fye quantum Monte Carlo algorithm in which the spin rotational invariance of Hund's exchange is approximated by averaging over all possible directions of the spin quantization axis. We employ this technique to perform benchmark calculations for the two- and three-band Hubbard models on the infinite-dimensional Bethe lattice. Our results agree quantitatively well with those obtained using the continuous-time quantum Monte Carlo method with rotationall...
Monte Carlo Study of Temperature-dependent Non-diffusive Thermal Transport in Si Nanowires
Ma, Lei; Liu, Mengmeng; Zhao, Xuxin; Wu, Qixing; Sun, Hongyuan
2016-01-01
Non-diffusive thermal transport has gained extensive research interest recently due to its important implications for the fundamental understanding of material phonon mean free path distributions and many nanoscale energy applications. In this work, we systematically investigate the role of boundary scattering and nanowire length on the non-diffusive thermal transport in thin silicon nanowires by rigorously solving the phonon Boltzmann transport equation using a variance-reduced Monte Carlo technique across a range of temperatures. The simulations use the complete phonon dispersion and spectral lifetime data obtained from first-principles density functional theory calculations as input, without any adjustable parameters. Our BTE simulation results show that the nanowire length plays an important role in determining the thermal conductivity of silicon nanowires. In addition, our simulation results suggest a significant phonon confinement effect for the previously measured silicon nanowires. These findings are important fo...
International Nuclear Information System (INIS)
We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)
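The conventional exponential transform that this local method generalizes can be sketched as a stretched flight kernel with a compensating weight; here the biasing parameter p is a fixed user choice, whereas the paper derives spatially dependent parameters from a forward diffusion calculation:

```python
import math
import random

def biased_flight(sigma, p, mu, rng=random.random):
    """Sample one flight distance from the stretched cross section
    sigma* = sigma*(1 - p*mu), where mu is the direction cosine toward
    the region of interest and 0 <= p < 1. Returns (distance, weight
    factor); the weight restores the unbiased estimate."""
    sigma_star = sigma * (1.0 - p * mu)
    s = -math.log(rng()) / sigma_star
    # ratio of the true flight PDF to the biased one at distance s
    w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * s)
    return s, w
```

Since the weight is the ratio of the true to the biased flight PDF, its expectation over the biased kernel is exactly 1, so tallies remain unbiased while paths are stretched toward the region of interest.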
International Nuclear Information System (INIS)
External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not contain benchmark data. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest like k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, we have to quantify and correct this bias. This will be achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
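The bias can be seen already in the simplest Bateman solution N(t) = N_0 e^(-σφt), which is a convex function of the flux φ: averaging depletion over noisy flux estimates is not the same as depleting with the mean flux (Jensen's inequality). A toy demonstration (the Gaussian flux-noise model and all parameter values are assumptions of this sketch):

```python
import math
import random

def depletion_bias_demo(phi_mean=1.0, phi_sd=0.2, sigma=1.0, t=1.0,
                        n=200000, seed=1):
    """Compare E[exp(-sigma*phi*t)] over noisy flux samples with
    exp(-sigma*E[phi]*t). By Jensen's inequality the first (what a
    naive coupled scheme effectively averages) exceeds the second
    (the noise-free answer): exactly the bias discussed above."""
    rng = random.Random(seed)
    mean_of_exp = sum(math.exp(-sigma * rng.gauss(phi_mean, phi_sd) * t)
                      for _ in range(n)) / n
    exp_of_mean = math.exp(-sigma * phi_mean * t)
    return mean_of_exp, exp_of_mean
```

With these defaults the exact average is exp(-0.98) ≈ 0.375, about 2% above the noise-free exp(-1) ≈ 0.368, so the noisy scheme systematically under-depletes.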
Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: lesinki@cab.cnea.gob.ar
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here. Some benchmark results and future possibilities are also analyzed. The methods are: depletion calculations at the cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in a Monte Carlo calculation for fixed burnup distributions obtained from diffusion codes; the same as the first, but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at the cell level to obtain the concentrations of nuclides to be used in a full-reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes, in several steps. All these methods were used for different problems for research reactors, and some comparisons with experimental results of regular lattices were performed. In this work, a summary of all these studies is presented, and the advantages and problems found are discussed. Also, a brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is included. (author)
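The "full calculation ... in several steps" scheme reduces to alternating a transport solve with a depletion solve; the following one-group, one-nuclide caricature of that loop uses illustrative stand-ins for the MCNP and ORIGEN stages:

```python
import math

def burnup_steps(n0, sigma, flux_of, dts):
    """Alternate a 'transport' solve (flux_of, a stand-in for the MCNP
    flux solution, which may depend on the current density) with a
    'depletion' solve (exponential removal over dt, a stand-in for
    ORIGEN). Returns the nuclide density after each step."""
    n, history = n0, [n0]
    for dt in dts:
        phi = flux_of(n)                    # transport step
        n *= math.exp(-sigma * phi * dt)    # depletion step
        history.append(n)
    return history
```

Real couplings track full nuclide vectors and re-normalize the flux to power at each step; this skeleton only shows the data flow between the two codes.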
Monte Carlo uncertainty propagation approaches in ADS burn-up calculations
International Nuclear Information System (INIS)
Highlights: ► Two Monte Carlo uncertainty propagation approaches are compared. ► How to make both approaches equivalent is presented and applied. ► ADS burn-up calculation is selected as the application of approaches. ► The cross-section uncertainties of 239Pu and 241Pu are propagated. ► Cross-correlations appear as a source of differences between approaches. - Abstract: In activation calculations, there are several approaches to quantify uncertainties: deterministic by means of sensitivity analysis, and stochastic by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented: the first one is the Total Monte Carlo (TMC). The second one is by means of a Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties throughout the activation calculations. This last approach is what we named Covariance Uncertainty Propagation, CUP. This work presents both approaches and their differences. Also, they are compared by means of an activation calculation, where the cross-section uncertainties of 239Pu and 241Pu are propagated in an ADS activation calculation
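The CUP approach hinges on drawing cross-section sets consistent with the library covariances; for two correlated cross sections this amounts to a 2x2 Cholesky factorization (a generic sketch of the sampling step, not the actual library processing):

```python
import math
import random

def sample_correlated_xs(mean, cov, rng):
    """Draw one correlated pair of cross sections from a 2x2 covariance
    matrix via its Cholesky factor L (cov = L L^T), so that repeated
    samples reproduce both variances and the cross-correlation."""
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return (mean[0] + l11 * z1,
            mean[1] + l21 * z1 + l22 * z2)
```

Each sampled pair would then drive one activation calculation, and the spread of the outputs gives the propagated uncertainty; neglecting the off-diagonal term is one source of the cross-correlation differences the abstract mentions.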
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
Recent R and D around the Monte-Carlo code Tripoli-4 for criticality calculation
Energy Technology Data Exchange (ETDEWEB)
Hugot, F.X.; Lee, Y.K.; Malvagi, F. [CEA - DEN/DANS/DM2S/SERMA/LTSD, Saclay (France)
2008-07-01
TRIPOLI-4 [1] is the fourth generation of the TRIPOLI family of Monte Carlo codes developed by CEA since the 1960s. It simulates the 3D transport of neutrons, photons, electrons and positrons, as well as coupled neutron-photon propagation and electron-photon cascade showers. The code addresses radiation protection and shielding problems, as well as criticality and reactor physics problems, through both critical and subcritical neutronics calculations. It uses full pointwise as well as multigroup cross sections. The code has been validated through several hundred benchmarks as well as measurement campaigns. It is used as a reference tool by CEA and its industrial and institutional partners, and in the NURESIM [2] European project. Section 2 reviews its main features, with emphasis on the latest developments. Section 3 presents some recent R and D for criticality calculations: fission matrix, eigenvalue and eigenvector computations are presented, and corrections to the standard deviation estimator in the case of correlations between generation steps are detailed. Section 4 presents some preliminary results obtained with the new mesh tally feature. The last section presents the interest of using XML-format output files. (authors)
Energy Technology Data Exchange (ETDEWEB)
Boudou, C
2006-09-15
High-grade gliomas are extremely aggressive brain tumours. Specific techniques have been proposed that combine the presence of high-atomic-number elements within the tumour with irradiation by a low-energy x-ray (below 100 keV) beam from a synchrotron source. With a view to clinical trials, the use of a treatment planning system has to be foreseen, as well as tailored dosimetry protocols. The objectives of this thesis work were (1) the development of a dose calculation tool based on a Monte Carlo particle transport code and (2) the implementation of an experimental method for three-dimensional verification of the delivered dose. The dosimetric tool is an interface between tomography images of the patient or sample and the MCNPX general-purpose code. In addition, dose distributions were measured with a radiosensitive polymer gel, providing acceptable results compared to calculations.
Directory of Open Access Journals (Sweden)
Diego Ferraro
2011-01-01
Monte Carlo neutron transport codes are usually used to perform criticality calculations and to solve shielding problems, due to their capability to model complex systems without major approximations. However, these codes demand high computational resources. The improvement in computer capabilities has led to several new applications of Monte Carlo neutron transport codes. An interesting one is to use this method to perform cell-level fuel assembly calculations in order to obtain few-group constants to be used in core calculations. In the present work, the recently developed VTT cell-oriented neutronic calculation code Serpent v.1.1.7 is used to perform cell calculations of a theoretical BWR lattice benchmark with burnable poisons, and the main results are compared with reported ones and with calculations performed with Condor v.2.61, INVAP's collision-probability neutronic cell code.
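The few-group constants such a lattice calculation hands to the core solver are flux-weighted averages over fine energy groups; a minimal sketch of the collapse (list-based, with illustrative group boundaries):

```python
def collapse_xs(flux, sigma, coarse_groups):
    """Collapse a fine-group cross section to few groups by flux
    weighting: sigma_G = sum(phi_g*sigma_g) / sum(phi_g) over each
    coarse group, given as (start, stop) fine-group index ranges."""
    out = []
    for lo, hi in coarse_groups:
        phi = sum(flux[lo:hi])
        out.append(sum(f * s for f, s in zip(flux[lo:hi], sigma[lo:hi])) / phi)
    return out
```

In a Monte Carlo lattice code the fine-group fluxes are themselves tallied quantities, so the collapsed constants inherit their statistical uncertainty.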
Bedload transport calculations for steep streams
Rickenmann, D.; Turowski, J. M.; Nitsche, M.; Badoux, A.; Raymond, M.
2011-12-01
Due to the large flow resistance, bedload transport calculations for steep streams often result in a clear overestimation of observed bedload. This contribution discusses the importance of introducing a proper partitioning of flow resistance into bedload transport calculations for steep streams. Several approaches to account for additional flow resistance were tested. They were used with the same reference bedload transport equation, and the predictions were then compared with bedload observations for a number of mountain streams. To this end, we measured the streambed parameters required for these calculations for flood events in 7 mountain rivers and torrents and for long-term discharge and bedload data of 6 torrents. The streams have channel slopes from 2 to 19%, catchment areas from 0.5 to 170 km2, and are all located in the Swiss Alps. Some approaches give better predictions for rougher streams and for the extreme flood events than for less rough streams and for the long-term data from the torrents (Nitsche et al., 2011). An example of this prediction pattern is the approach of Yager et al. (2007), which is the approach most strongly based on physical principles for flow resistance calculations. This approach requires additional field measurements of the key roughness parameters. On the other hand, considering all the bedload data, the empirical approach of Rickenmann and Recking (2011) appears to give the best overall predictions. This approach has the advantage of being easy to apply. Further bedload transport calculations were made for steep streams upstream of water intakes in the Swiss Alps, where information is available on both discharge and annual sediment yield. If no correction for high flow resistance is made, bedload transport rates calculated with many equations tend to give elevated bedload concentrations of the kind expected for debris-flood or debris-flow conditions. Some observations from the widespread flood events of August 2005 in Switzerland
COMET-PE: an incident fluence response expansion transport method for radiotherapy calculations
Hayward, Robert M.; Rahnema, Farzad
2013-05-01
Accurate dose calculation is a central component of radiotherapy treatment planning. A new method of dose calculation has been developed based on transport theory and validated by comparison to Monte Carlo methods. The coarse mesh transport method has been extended to allow coupled photon-electron transport in 3D. The method combines stochastic pre-computation with a deterministic solver to achieve high accuracy and precision. To enhance the method for radiotherapy calculations, a new angular basis was derived, and an analytical source treatment was developed. Validation was performed by comparison to DOSXYZnrc using a heterogeneous interface phantom composed of water, aluminum, and lung. Calculations of both kinetic energy released per unit mass and dose were compared. Good agreement was found with a maximum error and root mean square relative error of less than 1.5% for all cases. The results show that the new method achieves an accuracy comparable to Monte Carlo.
Molecular transport calculations with Wannier Functions
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2005-01-01
We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane-wave electronic structure method to calculate the eigenstates, which are subsequently transformed into a set of localized Wannier functions (WFs). The WFs provide a highly efficient basis set which at the same time is well suited for analysis due to the chemical information contained in the WFs. The method is
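For a single resonant level coupled to two leads, the Green's-function transmission such a scheme evaluates has a closed Breit-Wigner form, which makes a compact sanity check (a textbook limit, not the plane-wave/Wannier machinery itself):

```python
def landauer_transmission(E, eps, gamma_L, gamma_R):
    """Transmission through one level at energy eps with lead couplings
    gamma_L and gamma_R (level broadenings):
        T(E) = gL*gR / ((E - eps)^2 + ((gL + gR)/2)^2).
    Peaks at 1 for symmetric coupling exactly at resonance."""
    return gamma_L * gamma_R / ((E - eps) ** 2 + ((gamma_L + gamma_R) / 2.0) ** 2)
```

The conductance in the Landauer picture is then G = (2e²/h)·T(E_F), so a symmetric level pinned at the Fermi energy carries one quantum of conductance.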
Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A
2011-01-01
Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta-emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...
Fermion Monte Carlo Calculations on Liquid-3He
Energy Technology Data Exchange (ETDEWEB)
Kalos, M H; Colletti, L; Pederiva, F
2004-03-16
Methods and results for calculations of the ground state energy of the bulk system of {sup 3}He atoms are discussed. The results are encouraging: the authors believe they demonstrate that their methods offer a solution of the ''fermion sign problem'' and the possibility of direct computation of many-fermion systems with no uncontrolled approximations. Nevertheless, the method is still rather inefficient compared with variational or fixed-node approximate methods. There appears to be a significant population-size effect. The situation is improved by the inclusion of ''Second Stage Importance Sampling'' and of ''Acceptance/Rejection'' adapted to their needs.
Energy Technology Data Exchange (ETDEWEB)
Hardiansyah, D.; Haryanto, F. [Nuclear Physics and Biophysics Research Laboratory, Physics Department, Institut Teknologi Bandung (ITB) (Indonesia); Male, S. [Radiotherapy Division, Research Hospital of Hassanudin University (Indonesia)]
2014-09-30
Prism is a non-commercial Radiotherapy Treatment Planning System (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of Prism dose calculations using Monte Carlo simulation. A phase-space source from a linear accelerator (LINAC) head is implemented for the Monte Carlo simulation. To this end, Prism dose calculations are compared with EGSnrc Monte Carlo simulations. Percentage depth dose (PDD) and R50 from both calculations are observed. BEAMnrc simulates electron transport in the LINAC head and produces a phase-space file, which is used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS; the commissioning result is then used for the inhomogeneity study. The physical parameters of the inhomogeneous phantom varied in this study are the density, location, and thickness of the tissue. Commissioning shows that the optimum energy of the Monte Carlo simulation for a 6 MeV electron beam is 6.8 MeV, using R50 and PDD with the practical range (R{sub p}) as references. In the inhomogeneity study, the average deviation over the region of interest is below 5% for all cases. Based on ICRU recommendations, Prism is well able to calculate the radiation dose in inhomogeneous tissue.
Calculating Quantum Transports Using Periodic Boundary Conditions
Wang, Lin-Wang
2004-01-01
An efficient new method is presented to calculate the quantum transports using periodic boundary conditions. This new method is based on a method we developed previously, but with an essential change in solving the Schrodinger's equation. As a result of this change, the scattering states can be solved at any given energy. Compared to the previous method, the current method is faster and numerically more stable. The total computational time of the current method is similar to a conventional gr...
Secondary electron emission yield calculation performed using two different Monte Carlo strategies
Energy Technology Data Exchange (ETDEWEB)
Dapor, Maurizio, E-mail: dapor@fbk.eu [Interdisciplinary Laboratory for Computational Science (LISC), FBK-CMM and University of Trento, via Sommarive 18, I-38123 Povo, Trento (Italy); Department of Materials Engineering and Industrial Technologies, University of Trento, via Mesiano 77, I-38123 Trento (Italy)
2011-07-15
The secondary electron emission yield in Al{sub 2}O{sub 3} and polymethylmethacrylate (PMMA) is calculated using two different Monte Carlo approaches, one based on the energy straggling strategy (ES), the other one on the continuous-slowing-down (CSD) approximation. This work is aimed at comparing the secondary electron emission yields calculated by these two Monte Carlo strategies with the available experimental data. The results of both methods are in good agreement with experimental data. The CSD strategy is about 10 times faster than the ES strategy.
Energy Technology Data Exchange (ETDEWEB)
Sloan, D.P.
1983-05-01
Morel (1981) has developed multigroup Legendre cross sections suitable for input to standard discrete ordinates transport codes for performing charged-particle Fokker-Planck calculations in one-dimensional slab and spherical geometries. Since the Monte Carlo neutron transport code, MORSE, uses the same multigroup cross section data that discrete ordinates codes use, it was natural to consider whether Fokker-Planck calculations could be performed with MORSE. In order to extend the unique three-dimensional forward or adjoint capability of MORSE to Fokker-Planck calculations, the MORSE code was modified to correctly treat the delta-function scattering of the energy operator, and a new set of physically acceptable cross sections was derived to model the angular operator. Morel (1979) has also developed multigroup Legendre cross sections suitable for input to standard discrete ordinates codes for performing electron Boltzmann calculations. These electron cross sections may be treated in MORSE with the same methods developed to treat the Fokker-Planck cross sections. The large magnitude of the elastic scattering cross section, however, severely increases the computation or run time. It is well-known that approximate elastic cross sections are easily obtained by applying the extended transport (or delta function) correction to the Legendre coefficients of the exact cross section. An exact method for performing the extended transport cross section correction produces cross sections which are physically acceptable. Sample calculations using electron cross sections have demonstrated this new technique to be very effective in decreasing the large magnitude of the cross sections.
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar systems measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular are considered: the Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules), the Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols), the Raman inelastic scattering, typical of water, the absorption of water, inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols, the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of the radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability of the receiver to have a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is
Energy Technology Data Exchange (ETDEWEB)
Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)
2005-07-01
We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows the calculation in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. However, the huge number of particles necessary to get a correct answer makes this computation time-consuming. We have therefore derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
International Nuclear Information System (INIS)
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for the use of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtained through the PETRA and GEANT programs. The simulation of the proton beam characterization by means of the Multi-Layer Faraday Cup and the spatial distribution of positron emitters obtained by our program indicate the imminent application of Monte Carlo techniques in clinical practice. (author)
Srna - Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Directory of Open Access Journals (Sweden)
Ilić Radovan D.
2002-01-01
International Nuclear Information System (INIS)
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo Method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which the photon ignores the existence of voxel borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulator code that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated into treatment planning systems for radiotherapy. (author)
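The two tracking strategies contrasted in the abstract can be illustrated with the Woodcock (delta-tracking) scheme. Below is a minimal 1-D sketch, not the CUBMC/PENELOPE implementation: flight lengths are sampled with a majorant cross section, and a collision in the current voxel is accepted as real with probability mu/mu_max, so voxel boundaries never have to be crossed explicitly (a pure absorber is assumed for brevity):

```python
import math
import random

def woodcock_transmission(mu, voxel_len, n_photons=50000, seed=1):
    """Estimate transmission through a 1-D voxelized pure absorber by
    Woodcock (delta) tracking: flight lengths are sampled with the
    majorant cross section mu_max, and a collision in the current voxel
    is accepted as real with probability mu/mu_max."""
    rng = random.Random(seed)
    mu_max = max(mu)                      # majorant cross section
    total_len = voxel_len * len(mu)
    transmitted = 0
    for _ in range(n_photons):
        x = 0.0
        while True:
            # flight sampled as if the medium were homogeneous (mu_max)
            x += -math.log(rng.random()) / mu_max
            if x >= total_len:
                transmitted += 1          # photon escaped the slab
                break
            ivox = int(x / voxel_len)
            if rng.random() < mu[ivox] / mu_max:
                break                     # real collision: absorbed
            # otherwise a virtual collision: keep flying
    return transmitted / n_photons

# Delta tracking is unbiased: transmission agrees with exp(-sum(mu_i * L_i)).
t = woodcock_transmission([0.1, 0.5, 0.2], 1.0)
```

Because virtual collisions simply continue the flight, the sampled free paths reproduce the exact heterogeneous attenuation without any boundary-crossing logic.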
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP for a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Report of 'Monte Carlo calculation summer seminar'
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Kume, Etsuo; Yatabe, Shigeru; Maekawa, Fujio; Yamamoto, Toshihiro; Nagaya, Yasunobu; Mori, Takamasa [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ueki, Kohtaro [Ship Research Inst., Tokyo (Japan); Naito, Yoshitaka [Nippon Advanced Information Service, Tokai, Ibaraki (Japan)
2001-02-01
'Monte Carlo Calculation Summer Seminar', sponsored by the Research Committee on Particle Simulation with the Monte Carlo Method of the Atomic Energy Society of Japan, was held on 26-28 July 2000 at the Tokai Research Establishment, Japan Atomic Energy Research Institute. There were 111 participants from universities, research institutes, and companies. In the beginner course, a lecture on the fundamental theory of the Monte Carlo method was given, and MCNP-4B2 with its attached libraries and sample inputs was installed on notebook personal computers. As the seminar was the first attempt of its kind in Japan, the general review and lectures, the installation, and the exercise calculations are summarized in this report. (author)
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Energy Technology Data Exchange (ETDEWEB)
Bekar, Kursat B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Weber, Charles F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D/sub CF/ (absorbed dose rate in air per unit activity per unit of soil mass, nGy h/sup -1/ per Bq kg/sup -1/) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the D/sub CF/ values calculated by the three codes are in very good agreement with one another. The comparison between these results and the results deduced previously by other authors indicates a good ag...
Monte Carlo Calculation as an Aid to Teaching Solid-State Diffusion.
Murch, G. E.
1979-01-01
A simple Monte Carlo method is used to simulate an atomistic model of solid-state diffusion. This approach illustrates some of the principles of diffusion and in particular verifies a solution to Fick's second law. The role and calculation of the diffusion correlation factor is also discussed. (Author/BB)
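The verification of Fick's second law mentioned above is easy to reproduce: for an uncorrelated 1-D lattice walk the mean squared displacement grows linearly with the number of hops, ⟨x²⟩ = 2Dt with D = 1/2 in lattice units. A small illustrative sketch (not the author's program):

```python
import random

def lattice_msd(n_walkers=10000, n_steps=200, seed=7):
    """1-D lattice random walk: each atom hops one site left or right
    per time step.  For uncorrelated hops the mean squared displacement
    is <x^2> = n_steps, i.e. <x^2> = 2*D*t with D = 1/2 in lattice units."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / n_walkers

msd = lattice_msd()   # expected to be close to n_steps = 200
```

Introducing a correlation between successive hops (as in vacancy-mediated diffusion) multiplies the slope by the correlation factor the paper discusses.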
Widder, Joachim; Hollander, Miranda; Ubbels, Jan F.; Bolt, Rene A.; Langendijk, Johannes A.
2010-01-01
Purpose: To define a method of dose prescription employing Monte Carlo (MC) dose calculation in stereotactic body radiotherapy (SBRT) for lung tumours aiming at a dose as low as possible outside of the PTV. Methods and materials: Six typical T1 lung tumours - three small, three large - were construc
Hard, charged spheres in spherical pores. Grand canonical ensemble Monte Carlo calculations
DEFF Research Database (Denmark)
Sloth, Peter; Sørensen, T. S.
1992-01-01
A model consisting of hard charged spheres inside hard spherical pores is investigated by grand canonical ensemble Monte Carlo calculations. It is found that the mean ionic density profiles in the pores are almost the same when the wall of the pore is moderately charged as when it is uncharged...
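The grand canonical moves behind such a calculation can be sketched for the simpler uncharged hard-sphere case (the electrostatics of the charged model are omitted here). Insertions are accepted with probability min(1, zV/(N+1)) when no overlap is created, deletions with min(1, N/zV); the `activity` value below is an assumed illustrative number:

```python
import math
import random

def gcmc_hard_spheres(pore_r=5.0, sigma=1.0, activity=0.05,
                      n_moves=20000, seed=3):
    """Grand canonical MC of hard spheres (diameter sigma) whose centres
    stay inside a hard spherical pore of radius pore_r.  Hard cores make
    the Boltzmann factor 0 (overlap) or 1, so only the ideal-gas part of
    the acceptance rule survives."""
    rng = random.Random(seed)
    r_acc = pore_r - sigma / 2.0                       # centre-accessible radius
    zv = activity * 4.0 / 3.0 * math.pi * r_acc ** 3   # z * V
    parts = []

    def overlaps(p):
        return any(math.dist(p, q) < sigma for q in parts)

    for _ in range(n_moves):
        if rng.random() < 0.5:                    # trial insertion
            while True:                           # uniform point in sphere
                p = tuple(rng.uniform(-r_acc, r_acc) for _ in range(3))
                if math.dist(p, (0.0, 0.0, 0.0)) <= r_acc:
                    break
            # accept with min(1, z*V/(N+1)) if no hard-core overlap
            if not overlaps(p) and rng.random() < zv / (len(parts) + 1):
                parts.append(p)
        elif parts:                               # trial deletion
            i = rng.randrange(len(parts))
            if rng.random() < len(parts) / zv:    # min(1, N/(z*V))
                parts.pop(i)
    return parts
```

Binning the resulting particle positions by distance from the pore centre gives the mean ionic density profiles studied in the paper.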
A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
Energy Technology Data Exchange (ETDEWEB)
Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology
2010-02-15
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized, and a description of what a user should provide in order to use it. (orig.)
Monte Carlo calculation of received dose from ingestion and inhalation of natural uranium
International Nuclear Information System (INIS)
For the purpose of this study eighty samples were taken from the areas of Bela Crkva and Vrsac. The activity of radionuclides in the soil is determined by gamma-ray spectrometry. The Monte Carlo method is used to calculate the effective dose received by the population resulting from the inhalation and ingestion of natural uranium. The estimated doses were compared with the legally prescribed levels. (author)
Monte Carlo simulation and analytical calculation of coherent Bremsstrahlung and its polarisation
Energy Technology Data Exchange (ETDEWEB)
Natter, F.A.; Grabmayr, P. E-mail: grabmayr@uni-tuebingen.de; Hehl, T.; Owens, R.O.; Wunderlich, S
2003-12-01
Spectral distributions for coherent and incoherent Bremsstrahlung produced by electrons on thin diamond radiators are calculated accurately by a Monte Carlo procedure. Realistic descriptions of the electron beam and the physical processes within the radiator have been implemented. Results are compared to measured data. A faster calculation at only a slight loss of precision is possible using analytical expressions which can be derived after simplifying assumptions.
Laub, Wolfram U.; Bakai, Annemarie; Nüsslin, Fridtjof
2001-06-01
The present study investigates the application of compensators for the intensity modulated irradiation of a thorax phantom. Measurements are compared with Monte Carlo and standard pencil beam algorithm dose calculations. Compensators were manufactured to produce the intensity profiles that were generated from the scientific version of the KonRad IMRT treatment-planning system for a given treatment plan. The comparison of dose distributions calculated with a pencil beam algorithm, with the Monte Carlo code EGS4 and with measurements is presented. By measurements in a water phantom it is demonstrated that the method used to manufacture the compensators reproduces the intensity profiles in a suitable manner. Monte Carlo simulations in a water phantom show that the accelerator head model used for simulations is sufficient. No significant overestimations of dose values inside the target volume by the pencil beam algorithm are found in the thorax phantom. An overestimation of dose values in lung by the pencil beam algorithm is also not found. Expected dose calculation errors of the pencil beam algorithm are suppressed, because the dose to the low density region lung is reduced by the use of a non-coplanar beam arrangement and by intensity modulation.
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte Carlo method Python library written at Madagascar INSTN is experimentally used to calculate the dose distribution in the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient CT image for individual and more accurate treatment time calculation for a prescribed dose.
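As an illustration of the kind of kernel such a library evaluates, a toy point-source dose rate combining the inverse-square law with exponential attenuation can be written as follows. This is not the INSTN library's API, and the attenuation coefficient used is illustrative, not clinical data:

```python
import math

def point_source_dose(r_cm, strength=1.0, mu=0.085):
    """Toy dose-rate kernel around an isotropic point source in water:
    inverse-square geometric fall-off times exponential attenuation.
    mu (1/cm) is an assumed illustrative value, not clinical data."""
    if r_cm <= 0.0:
        raise ValueError("distance must be positive")
    return strength * math.exp(-mu * r_cm) / (r_cm * r_cm)

# Between 1 cm and 2 cm the dose drops by 4 (geometry) times exp(-mu).
ratio = point_source_dose(2.0) / point_source_dose(1.0)
```

Clinical formalisms add scatter build-up and anisotropy corrections on top of this geometric skeleton, which is why validation against vendor curves is needed.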
Ebru Ermis, Elif; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen in the calculations. Calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. Obtained results through this method were highly in accordance with those of the NIST values. It was concluded from the study that FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of the detector materials.
Meric, N; Bor, D
1999-01-01
Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam of 67 kVp. Simulation of the experiment has been carried out by the Monte Carlo technique under the same input conditions. Comparison of the measured and predicted data with each other and with the previously reported values has been given. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer.
Systematic study of finite-size effects in quantum Monte Carlo calculations of real metallic systems
Energy Technology Data Exchange (ETDEWEB)
Azadi, Sam, E-mail: s.azadi@imperial.ac.uk; Foulkes, W. M. C. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)
2015-09-14
We present a systematic and comprehensive study of finite-size effects in diffusion quantum Monte Carlo calculations of metals. Several previously introduced schemes for correcting finite-size errors are compared for accuracy and efficiency, and practical improvements are introduced. In particular, we test a simple but efficient method of finite-size correction based on an accurate combination of twist averaging and density functional theory. Our diffusion quantum Monte Carlo results for lithium and aluminum, as examples of metallic systems, demonstrate excellent agreement between all of the approaches considered.
Automated-biasing approach to Monte Carlo shipping-cask calculations
International Nuclear Information System (INIS)
Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system
Sakamoto, Y
2002-01-01
In nuclear disaster prevention, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into consideration. For predicting individual doses in future countermeasures, it is important to confirm the reliability of methods that evaluate dose equivalent rate distributions and energy spectra using Monte Carlo radiation transport calculation codes, and to identify the factors which influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the influence factors of the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant that occurred on September 30, 1999. The radiation transport calculations, including burn-up calculations, were done using the structural info...
Uncertainty analysis of neutron transport calculation
International Nuclear Information System (INIS)
A cross section sensitivity-uncertainty analysis code, SUSD, was developed. The code calculates sensitivity coefficients for one and two-dimensional transport problems based on first order perturbation theory. Variance and standard deviation of detector responses or design parameters can be obtained using the cross section covariance matrix. The code is able to perform sensitivity-uncertainty analysis for secondary neutron angular distribution (SAD) and secondary neutron energy distribution (SED). Covariances of 6Li and 7Li neutron cross sections in JENDL-3PR1 were evaluated including SAD and SED. Covariances of Fe and Be were also evaluated. The uncertainty of the tritium breeding ratio, fast neutron leakage flux and neutron heating was analysed for four types of blanket concepts for a commercial tokamak fusion reactor. The uncertainty of the tritium breeding ratio was less than 6 percent. Contributions from SAD/SED uncertainties are significant for some parameters. Formulas to estimate the errors of the numerical solution of the transport equation were derived based on perturbation theory. This method enables us to deterministically estimate the numerical errors due to iterative solution, spatial discretization and Legendre polynomial expansion of transfer cross-sections. The calculational errors of the tritium breeding ratio and the fast neutron leakage flux of the fusion blankets were analysed. (author)
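The first-order propagation such a code performs reduces to the "sandwich" formula var(R) = SᵀCS, with S the vector of sensitivity coefficients and C the cross-section covariance matrix. A minimal sketch with made-up two-group numbers (not SUSD's data):

```python
def response_variance(sens, cov):
    """First-order uncertainty propagation ('sandwich rule'):
    var(R) = S^T C S, with S the vector of relative sensitivity
    coefficients and C the relative covariance matrix."""
    n = len(sens)
    return sum(sens[i] * cov[i][j] * sens[j]
               for i in range(n) for j in range(n))

# Two-group toy example: 4% standard deviation per group, uncorrelated.
sens = [1.0, 0.5]
cov = [[0.0016, 0.0], [0.0, 0.0016]]
var = response_variance(sens, cov)   # 0.0016*1.0 + 0.0016*0.25 = 0.002
```

Off-diagonal covariance terms, such as the SAD/SED correlations evaluated in the paper, enter through the same double sum.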
Transportation channels calculation method in MATLAB
International Nuclear Information System (INIS)
Output devices and charged-particle transport channels are necessary components of any modern particle accelerator. They differ both in size and in focusing elements depending on the accelerator type and its purpose. A package of transport-line design codes for magnet-optical channels in the MATLAB environment is presented in this report. Charged-particle dynamics in a focusing channel can be studied easily by means of the matrix technique, and MATLAB is convenient because its native information objects are matrices. MATLAB also allows the software package to be built on the modular principle: the program blocks are small and easy to use, and can be executed separately or together. The set of codes has a user-friendly interface. The transport channel consists of focusing lenses (doublets and triplets). The main magneto-optical channel parameters are the total length, the lens positions, and the parameters of the output beam in phase space (channel acceptance, beam emittance, beam transverse dimensions, particle divergence and image stigmaticity). The choice of channel operating parameters must satisfy mutually competing demands, and therefore the channel parameters are calculated using search-based optimization techniques.
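The matrix technique mentioned above can be sketched with 2x2 transfer matrices in one transverse plane (x, x'); the drift lengths and focal length below are hypothetical, and Python stands in for MATLAB purely for illustration:

```python
import numpy as np

def drift(L):
    """Transfer matrix of a drift space of length L (metres)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    """Transfer matrix of a thin lens with focal length f; f > 0 focuses."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical channel: 0.25 m drift, f = 0.25 m lens, 0.5 m drift.
# Matrices compose right to left, in the order the beam traverses them.
M = drift(0.5) @ thin_lens(0.25) @ drift(0.25)

# Trace a particle entering 1 mm off-axis with zero divergence.
x_in = np.array([1e-3, 0.0])
x_out = M @ x_in
print(x_out)
```

Because each element matrix is unimodular, the channel matrix has unit determinant, which is a quick sanity check on any composed beamline.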
Monte Carlo calculations for gamma-ray mass attenuation coefficients of some soil samples
International Nuclear Information System (INIS)
Highlights: • Gamma-ray mass attenuation coefficients of soils. • Radiation shielding properties of soil. • Comparison of calculated results with theoretical and experimental ones. • The method can be applied to various media. - Abstract: We developed a simple Monte Carlo code to determine the mass attenuation coefficients of some soil samples at nine gamma-ray energies (59.5, 80.9, 122.1, 159.0, 356.5, 511.0, 661.6, 1173.2 and 1332.5 keV). The results of the Monte Carlo calculations were compared with tabulations based upon the photon cross-section database (XCOM) and with experimental results obtained by other researchers for the same samples. The calculated mass attenuation coefficients were found to be very close to the theoretical values and the experimental results.
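A minimal sketch of such a Monte Carlo transmission estimate, assuming idealized narrow-beam geometry and a hypothetical soil-like attenuation coefficient (this is not the code used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def attenuation_mc(mu_true, thickness, n=200_000):
    """Estimate a linear attenuation coefficient (cm^-1) from a simulated
    narrow-beam transmission experiment: each photon's free path is sampled
    from an exponential distribution with mean 1/mu; photons whose first
    interaction lies beyond the slab count as transmitted."""
    paths = rng.exponential(1.0 / mu_true, size=n)
    transmitted = np.count_nonzero(paths > thickness)
    # Invert the Beer-Lambert law: I/I0 = exp(-mu * t)
    return -np.log(transmitted / n) / thickness

# Hypothetical soil-like value at 661.6 keV, e.g. mu ~ 0.12 cm^-1, 5 cm slab
mu_est = attenuation_mc(0.12, 5.0)
print(f"estimated mu = {mu_est:.4f} cm^-1")
```

Dividing the recovered linear coefficient by the sample density would give the mass attenuation coefficient the abstract compares against XCOM.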
Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)
2014-05-15
The identification and demining of landmines is a very important issue for people's safety and for economic development. Several methods have been proposed in the past to address it. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as part of a complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. A Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To account for the soil effect, an average soil composition was analyzed and applied in the calculation. These results have been used to determine the specifications of the landmine detector.
Energy Technology Data Exchange (ETDEWEB)
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H [Samsung Medical Center, Seoul (Korea, Republic of)
2015-06-15
Purpose: To evaluate the shielding wall design that protects patients, staff and members of the general public from secondary neutrons, using a simple analytic solution and the transport codes MCNPX, ANISN and FLUKA. Methods: Analytical and Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation method, which produces conservative estimates of the dose equivalent values for shielding, was used for the analytical evaluations. The radiation transport was then simulated with the Monte Carlo codes. The neutron dose at each evaluation point was obtained as the product of the simulated value and the neutron dose coefficients given in ICRP-74. Results: The evaluation points in the accelerator control room and at the control-room entrance are mainly influenced by the proton beam loss point. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912 and 0.943 mSv/yr, and at the cyclotron room entrance 0.465, 0.790, 0.522 and 0.453 mSv/yr, as calculated with the NCRP-144 formalism, ANISN, FLUKA and MCNPX, respectively. Most of the MCNPX and FLUKA results, which use the complicated geometry, are smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and Monte Carlo methods. We confirmed that the shielding keeps publicly accessible areas adequately protected when the proton facility is operated.
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. This method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest-neighbor distribution (NND). This sampling method was validated through two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track-length estimator and a direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in that it provides a probabilistic model of a geometry containing a great number of randomly distributed spherical fuels. With future speed-ups from vector or parallel computation, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they describe the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with either deterministic or stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. The quantities can be calculated accurately with stochastic methods, but doing so is computationally expensive because of the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. Besides reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must be integrated over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. The improved method has been implemented in a code system
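The conventional analog tally that the improved a-priori method aims to beat can be sketched as follows: sample scattering cosines from the scattering law and average Legendre polynomials over the samples. The linearly anisotropic scattering law here is a hypothetical example:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(7)

a = 0.6  # hypothetical linear anisotropy of the scattering law f(mu) = (1 + a*mu)/2

def sample_mu(n):
    """Rejection-sample n scattering cosines from f(mu) = (1 + a*mu)/2."""
    out = np.empty(0)
    while out.size < n:
        mu = rng.uniform(-1.0, 1.0, n)
        keep = rng.uniform(0.0, 1.0 + a, n) < 1.0 + a * mu
        out = np.concatenate([out, mu[keep]])
    return out[:n]

def legendre_moments(mu, order):
    """Analog tally: average P_l(mu) over sampled scattering events."""
    return np.array([legval(mu, [0] * l + [1]).mean() for l in range(order + 1)])

moments = legendre_moments(sample_mu(100_000), 3)
print(moments)  # statistically ~ [1, a/3, 0, 0] for this scattering law
```

The statistical noise visible in the higher moments of this analog estimate is exactly the inefficiency the abstract's a-priori integration scheme removes.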
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested on two example problems: (1) low-energy photon transport in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to simulate a Varian VS 2000 brachytherapy source and generate a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporating parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution
International Nuclear Information System (INIS)
Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution
Ermis Elif Ebru; Celiktas Cuneyt
2015-01-01
Calculations of the gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded f...
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
International Nuclear Information System (INIS)
Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, together with their uncertainties, must be known to handle spent fuel in present fuel cycles (e.g. the high-burnup fuel programme) and in new fuel cycle designs (e.g. fast breeder reactors and ADS). For this task there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (the Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. The propagation of decay data, fission yield and cross-section uncertainties was also performed, but isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are here taken into account and propagated to the response magnitudes decay heat and radiotoxicity, and these uncertainties are assessed over cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the technique, using decay data and fission yield uncertainties, and the results are compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yield) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties are most relevant is performed
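The Monte-Carlo propagation technique itself reduces to "sample the nuclear data, recompute the response, collect statistics". A toy single-nuclide sketch with a hypothetical decay constant and uncertainty (not the EFIT calculation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decay constant with a 2% relative (1-sigma) uncertainty
lam, rel_unc = 1e-2, 0.02   # 1/day
t = 30.0                    # cooling time, days
n0 = 1.0                    # initial inventory (arbitrary units)

# Monte-Carlo technique: sample the nuclear datum, recompute the response
samples = rng.normal(lam, rel_unc * lam, size=50_000)
inventory = n0 * np.exp(-samples * t)

mean, std = inventory.mean(), inventory.std()
print(f"inventory after cooling: {mean:.4f} +/- {std:.4f}")
```

In a real depletion problem the single exponential is replaced by the full burn-up solver, and the sampling covers decay data, fission yields and cross sections simultaneously, but the statistical machinery is the same.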
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical, or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation, and second, a neural approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H
2014-01-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.
Construction of Monte Carlo operators in collisional transport theory
International Nuclear Information System (INIS)
A Monte Carlo approach for investigating the dynamics of quiescent collisional magnetoplasmas is presented, based on the discretization of the gyrokinetic equation. The theory applies to a strongly rotating multispecies plasma in a toroidally axisymmetric configuration. Expressions for the Monte Carlo collision operators are obtained in general nonorthogonal v-space coordinate systems, in terms of approximate solutions of the discretized gyrokinetic equation. Basic features of the Monte Carlo operators are that they fulfill all the required conservation laws, i.e., linear momentum and kinetic energy conservation, and in addition that they correctly take into account off-diagonal diffusion coefficients. The present operators are thus potentially useful for describing the dynamics of a multispecies toroidal magnetoplasma. In particular, strict ambipolarity of particle fluxes is ensured automatically in the limit of small departures of the unperturbed particle trajectories from some initial axisymmetric toroidal magnetic surfaces
MCPT: A Monte Carlo code for simulation of photon transport in tomographic scanners
International Nuclear Information System (INIS)
MCPT is a special-purpose Monte Carlo code designed to simulate photon transport in tomographic scanners. The variance reduction schemes and sampling games present in MCPT were selected to characterize features common to most tomographic scanners. Combined splitting and biasing (CSB) games are used to systematically sample important detection pathways. An efficient splitting game is used to tally particle energy deposition in detection zones. The pulse height distribution of each detector can be found by convolving the calculated energy deposition distribution with the detector's resolution function. A general geometric modelling package, HERMETOR, is used to describe the geometry of the tomographic scanners and provide MCPT with the information needed for particle tracking. MCPT's modelling capabilities are described and preliminary experimental validation is presented. (orig.)
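Splitting and Russian roulette of the kind such variance-reduction games build on can be sketched as follows; the weight-window thresholds are hypothetical, and this is not MCPT's actual CSB implementation:

```python
import random

random.seed(1)

def split_or_roulette(weight, w_high=2.0, w_low=0.25):
    """Weight-window style variance reduction: a heavy particle is split
    into copies of reduced weight; a light particle plays Russian roulette.
    Either way, the *expected* total weight is preserved, so tallies stay
    unbiased while the particle population is kept in a useful weight band."""
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n          # splitting: n copies, same total weight
    if weight < w_low:
        survival = weight / w_low        # survive with prob. weight/w_low ...
        return [w_low] if random.random() < survival else []  # ... at weight w_low
    return [weight]                      # inside the window: unchanged

print(split_or_roulette(5.0))   # split into equal-weight copies
print(split_or_roulette(0.05))  # usually killed, occasionally promoted
```

Splitting is applied on entering important regions (near detectors), roulette on leaving them, which is how codes like MCPT concentrate computational effort on the detection pathways that matter.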
McKinley, Michael Scott; Brooks, Eugene D., III; Szoke, Abraham
2003-07-01
We compare the implicit Monte Carlo (IMC) technique to the symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for test problems involving the evolution of a collisionally pumped trapping problem to its steady-state, the surface heating of a cold medium by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method
Gilbreth, C N
2014-01-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
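The QR-based stabilization the abstract refers to can be sketched for a generic product of ill-conditioned matrices; this is a toy illustration of the decomposition idea, not the canonical-ensemble algorithm of the paper:

```python
import numpy as np

def stabilized_product(mats):
    """Compute M_k ... M_1 in the stabilized form Q @ diag(d) @ T using
    repeated QR factorizations: the widely separated scales of the product
    accumulate in the vector d instead of mixing destructively inside one
    dense matrix -- the standard trick in auxiliary-field Monte Carlo."""
    Q, R = np.linalg.qr(mats[0])
    d = np.diag(R).copy()
    T = R / d[:, None]                  # unit-diagonal upper-triangular factor
    for A in mats[1:]:
        C = (A @ Q) * d                 # == A @ Q @ diag(d), column scaling
        Q, R = np.linalg.qr(C)
        d = np.diag(R).copy()
        T = (R / d[:, None]) @ T
    return Q, d, T

# Chain of ill-conditioned factors whose scales span many orders of magnitude
rng = np.random.default_rng(3)
mats = [np.diag([1e3, 1e-3]) @ rng.standard_normal((2, 2)) for _ in range(30)]
Q, d, T = stabilized_product(mats)
print(np.log10(np.abs(d)))  # the two scales, kept cleanly apart
```

The abstract's point is that in the canonical ensemble the observable formulas touch these factors many more times than in the grand-canonical case, which is why a better-scaling stabilization matters.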
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskii, D. E.
2015-11-01
The Monte Carlo study of three-layer and spin-valve magnetic structures with giant magnetoresistance effects has been performed with the application of the Heisenberg anisotropic model to the description of the magnetic properties of thin ferromagnetic films. The dependences of the magnetic characteristics on the temperature and external magnetic field have been obtained for the ferromagnetic and antiferromagnetic configurations of these structures. A Monte Carlo method for determining the magnetoresistance coefficient has been developed. The magnetoresistance coefficient has been calculated for three-layer and spin-valve magnetic structures at various thicknesses of ferromagnetic films. It has been shown that the calculated temperature dependence of the magnetoresistance coefficient is in good agreement with experimental data obtained for the Fe(001)/Cr(001) multilayer structure and the CFAS/Ag/CFAS/IrMn spin valve based on the Co2FeAl0.5Si0.5 (CFAS) Heusler alloy.
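A minimal Metropolis sketch for a classical Heisenberg monolayer shows the kind of spin update underlying such film simulations; isotropic exchange only, and the lattice size, temperature and sweep count are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(11)

def metropolis_heisenberg(L=8, J=1.0, T=0.5, sweeps=100):
    """Metropolis sampling of a classical Heisenberg model on an L x L
    periodic lattice: each step proposes a fresh random direction for one
    spin and accepts it with probability min(1, exp(-dE/T)).
    Returns the magnetisation per spin."""
    spins = rng.standard_normal((L, L, 3))
    spins /= np.linalg.norm(spins, axis=2, keepdims=True)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        new = rng.standard_normal(3)
        new /= np.linalg.norm(new)
        nbrs = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = -J * np.dot(new - spins[i, j], nbrs)   # E = -J sum S_i . S_j
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    return np.linalg.norm(spins.sum(axis=(0, 1))) / L**2

m = metropolis_heisenberg()
print(f"magnetisation per spin: {m:.3f}")
```

The papers above add layer-dependent exchange, anisotropy and an external field on top of this scheme, and derive the magnetoresistance from the relative orientation of the layer magnetisations.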
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskiy, D. E.
2016-06-01
A Monte Carlo study of trilayer and spin-valve magnetic structures with giant magnetoresistance effects is carried out. The anisotropic Heisenberg model is used for description of magnetic properties of ultrathin ferromagnetic films forming these structures. The temperature and magnetic field dependences of magnetic characteristics are considered for ferromagnetic and antiferromagnetic configurations of these multilayer structures. The methodology for determination of the magnetoresistance by the Monte Carlo method is introduced; this permits us to calculate the magnetoresistance of multilayer structures for different thicknesses of the ferromagnetic films. The calculated temperature dependence of the magnetoresistance agrees very well with the experimental results measured for the Fe(0 0 1)–Cr(0 0 1) multilayer structure and CFAS–Ag–CFAS–IrMn spin-valve structure based on the half-metallic Heusler alloy Co2FeAl0.5Si0.5.
Energy Technology Data Exchange (ETDEWEB)
Tholomier, M.; Vicario, E.; Doghmane, N.
1987-10-01
The contribution of backscattered electrons to the Auger electron yield was studied with a multiple-scattering Monte-Carlo simulation. The Auger backscattering factor was calculated in the 5 keV-60 keV energy range, and its dependence on the primary energy and the beam incidence angle was determined. Spatial distributions of backscattered electrons and Auger electrons are presented for a point incident beam, and correlations between these distributions are briefly investigated.
Efficient implementation of the Hellmann-Feynman theorem in a diffusion Monte Carlo calculation.
Vitiello, S A
2011-02-01
Kinetic and potential energies of systems of (4)He atoms in the solid phase are computed at T = 0. Results at two densities of the liquid phase are presented as well. Calculations are performed by the multiweight extension to the diffusion Monte Carlo method that allows the application of the Hellmann-Feynman theorem in a robust and efficient way. This is a general method that can be applied in other situations of interest as well.
Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT
Energy Technology Data Exchange (ETDEWEB)
Di Salvio, A.; Bedwani, S.; Carrier, J-F. [Centre hospitalier de l' Université de Montréal (Canada); Bouchard, H. [National Physics Laboratory, Teddington (United Kingdom)
2014-08-15
Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
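A toy version of such two-parameter (ED, EAN) tissue assignment can be sketched as a weighted nearest-neighbour search; the reference tissue values and the weight are hypothetical illustrations, not the paper's calibration:

```python
import numpy as np

# Hypothetical reference tissues: (electron density rel. to water, effective Z)
MATERIALS = {
    "adipose":    (0.95,  6.3),
    "muscle":     (1.04,  7.5),
    "trabecular": (1.16, 10.0),
    "cortical":   (1.78, 13.6),
}

def assign_material(ed, ean, w=0.5):
    """Assign the reference tissue nearest to a voxel in the (ED, EAN) plane.
    Both axes are normalised by their range so the weight w (0..1) sets the
    relative influence of electron density vs effective atomic number."""
    names = list(MATERIALS)
    ref = np.array([MATERIALS[n] for n in names])
    d = (w * ((ref[:, 0] - ed) / np.ptp(ref[:, 0])) ** 2
         + (1 - w) * ((ref[:, 1] - ean) / np.ptp(ref[:, 1])) ** 2)
    return names[int(np.argmin(d))]

print(assign_material(1.10, 9.5))
```

Compared with a single HU-to-ED curve, the second axis is what lets DECT separate dense bone from a high-Z soft tissue of similar attenuation, which is the 8% bone-assignment improvement the abstract reports.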
Monte Carlo calculation of electron initiated impact ionization in bulk zinc-blende and wurtzite GaN
Kolník, Ján; Oǧuzman, Ismail H.; Brennan, Kevin F.; Wang, Rongping; Ruden, P. Paul
1997-01-01
Calculations of the high-field electronic transport properties of bulk zinc-blende and wurtzite phase gallium nitride are presented, focusing particularly on the electron initiated impact ionization rate. The calculations are performed using ensemble Monte Carlo simulations, which include the full details of the band structure derived from an empirical pseudopotential method. The model also includes the numerically generated electron impact ionization transition rate, calculated based on the pseudopotential band structures for both crystallographic phases. The electron initiated impact ionization coefficients are calculated as a function of the applied electric field. The electron distribution is found to be cooler and the ionization coefficients are calculated to be lower in the wurtzite phase as compared to zinc-blende gallium nitride at comparable electric-field strengths. The higher electron energies and the resulting larger impact ionization coefficients in zinc-blende gallium nitride are believed to result from the combined effects of a lower density of states and phonon scattering rate for energies near and below 3 eV above the conduction-band minimum, and a somewhat higher ionization transition rate compared to the wurtzite phase. The nature of the impact ionization threshold in both phases of gallium nitride is predicted to be soft. Although there is considerable uncertainty in the knowledge of the scattering rates and the band structure at high energies, which leads to uncertainty in the Monte Carlo calculations, the results presented provide a first estimate of what the electron initiated impact ionization rate in GaN can be expected to be.
Measured and Monte Carlo calculated k{sub Q} factors: Accuracy and comparison
Energy Technology Data Exchange (ETDEWEB)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O. [Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada); Institute for National Measurement Standards, National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)
2011-08-15
Purpose: The journal Medical Physics recently published two papers that determine beam quality conversion factors, k{sub Q}, for large sets of ion chambers. In the first paper [McEwen Med. Phys. 37, 2179-2193 (2010)], k{sub Q} was determined experimentally, while the second paper [Muir and Rogers Med. Phys. 37, 5939-5950 (2010)] provides k{sub Q} factors calculated using Monte Carlo simulations. This work investigates a variety of additional consistency checks to verify the accuracy of the k{sub Q} factors determined in each publication and a comparison of the two data sets. Uncertainty introduced in calculated k{sub Q} factors by possible variation of W/e with beam energy is investigated further. Methods: The validity of the experimental set of k{sub Q} factors relies on the accuracy of the NE2571 reference chamber measurements to which k{sub Q} factors for all other ion chambers are correlated. The stability of NE2571 absorbed dose to water calibration coefficients is determined and comparison to other experimental k{sub Q} factors is analyzed. Reliability of Monte Carlo calculated k{sub Q} factors is assessed through comparison to other publications that provide Monte Carlo calculations of k{sub Q}, as well as an analysis of the sleeve effect, the effect of cavity length and self-consistency between graphite-walled Farmer chambers. Comparison between the two data sets is given in terms of the percent difference between the k{sub Q} factors presented in both publications. Results: Monitoring of the absorbed dose calibration coefficients for the NE2571 chambers over a period of more than 15 yrs exhibits consistency at a level better than 0.1%. Agreement of the NE2571 k{sub Q} factors with a quadratic fit to all other experimental data from standards labs for the same chamber is observed within 0.3%. Monte Carlo calculated k{sub Q} factors are in good agreement with most other Monte Carlo calculated k{sub Q} factors. Expected results are observed for the sleeve
Diffusion coefficients for LMFBR cells calculated with MOC and Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Rooijen, W.F.G. van, E-mail: rooijen@u-fukui.ac.j [Research Institute of Nuclear Energy, University of Fukui, Bunkyo 3-9-1, Fukui-shi, Fukui-ken 910-8507 (Japan); Chiba, G., E-mail: chiba.go@jaea.go.j [Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan)
2011-01-15
The present work discusses the calculation of the diffusion coefficient of a lattice of hexagonal cells, with both 'sodium present' and 'sodium absent' conditions. Calculations are performed in the framework of lattice theory (also known as fundamental mode approximation). Unlike the classical approaches, our heterogeneous leakage model allows the calculation of diffusion coefficients under all conditions, even if planar voids are present in the lattice. Equations resulting from this model are solved using the method of characteristics (MOC). Independent confirmation of the MOC result comes from Monte Carlo calculations, in which the diffusion coefficient is obtained without any of the assumptions of lattice theory. It is shown by comparison to the Monte Carlo results that the MOC solution yields correct values of the diffusion coefficient under all conditions, even in cases where the classic calculation of the diffusion coefficient fails. This work is a first step in the development of a robust method to calculate the diffusion coefficient of lattice cells. Adoption into production codes will require more development and validation of the method.
Hickson, Kevin J; O'Keefe, Graeme J
2014-09-01
The scalable XCAT voxelised phantom was used with the GATE Monte Carlo toolkit to investigate the effect of voxel size on dosimetry estimates of internally distributed radionuclides calculated using direct Monte Carlo simulation. A uniformly distributed fluorine-18 source was simulated in the kidneys of the XCAT phantom, with the organ self dose (kidney ← kidney) and organ cross dose (liver ← kidney) being calculated for a number of organ and voxel sizes. Patient-specific dose factors (DF) from a clinically acquired FDG PET/CT study have also been calculated for kidney self dose and liver ← kidney cross dose. Using the XCAT phantom it was found that sufficiently small voxel sizes are required to achieve accurate calculation of organ self dose. It has also been used to show that a voxel size of 2 mm or less is suitable for accurate calculation of organ cross dose. To compensate for insufficient voxel sampling, a correction factor is proposed. This correction factor is applied to the patient-specific dose factors calculated with the native voxel size of the PET/CT study.
International Nuclear Information System (INIS)
The Italian Committee for Dosimetry in Radiotherapy is about to produce a protocol for the dosimetry of brachytherapy sources that defines methods to measure the quantity 'air kerma rate in free air at a reference point' using ionisation chambers. Several parameters and quantities necessary to apply the protocol have to be calculated. In this presentation we show the methods used to calculate two of them: P{sub air}, which accounts for the attenuation and scattering of photons in air; and N{sub K}(source), the calibration factor for each dosimeter and source type. Both quantities have been calculated by means of Monte Carlo simulations. To calculate P{sub air} we score the photon fluence in the detector area, separately for 'primary photons', i.e. photons coming directly from the source without interacting in air; 'scattered photons', i.e. photons that are diffused from the air towards the scoring region; and 'attenuated photons', i.e. primary photons directed towards the scoring region that are subtracted from the primary fluence by interactions in air. P{sub air} is calculated as a combination of those fluences. N{sub K}(source) is calculated starting from the air kerma rates due to the spectral lines emitted by the source and from the corresponding calibration factors. The Monte Carlo code EGS4 is used, in a version modified in order to take into account characteristic X-ray production. Results are shown for some of the sources most used in Italy
COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport
Hayward, Robert M.; Rahnema, Farzad
2014-06-01
Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
Charge transport in a-Si:H detectors: Comparison of analytical and Monte Carlo simulations
International Nuclear Information System (INIS)
To understand the signal formation in hydrogenated amorphous silicon (a-Si:H) p-i-n detectors, dispersive charge transport due to multiple trapping in a-Si:H tail states is studied both analytically and by Monte Carlo simulations. An analytical solution is found for the free electron and hole distributions n(x,t) and the transient current I(t) due to an initial electron-hole pair generated at an arbitrary depth in the detector for the case of exponential band tails and linear field profiles; integrating over all e-h pairs produced along the particle's trajectory yields the actual distributions and current; the induced charge Q(t) is obtained by numerically integrating the current. This generalizes previous models used to analyze time-of-flight experiments. The Monte Carlo simulation provides the same information but can be applied to arbitrary field profiles, field dependent mobilities and localized state distributions. A comparison of both calculations is made in a simple case to show that identical results are obtained over a large time domain. A comparison with measured signals confirms that the total induced charge depends on the applied bias voltage. The applicability of the same approach to other semiconductors is discussed
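The multiple-trapping picture described above lends itself to a compact event-driven simulation. The sketch below is a hypothetical illustration of the mechanism, not the authors' model: trap depths are drawn from an exponential tail of width kT0 and release times from the activated rate nu0·exp(−E/kT); with T0 > T the release times become heavy-tailed, which is the origin of dispersive transport. All parameter values are illustrative, not fitted a-Si:H constants.

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant in eV/K

def transit_time(L=1.0, v=1.0, trap_rate=10.0, nu0=1e12,
                 T=300.0, T0=500.0, seed=3):
    """Event-driven multiple-trapping walk of a single carrier.

    Free drift segments (exponential lengths, mean 1/trap_rate) alternate
    with trap residence times whose release rate is nu0*exp(-E/kT); the
    trap depth E is drawn from an exponential tail of width k*T0.
    Parameter values are illustrative only."""
    rng = random.Random(seed)
    x = t = 0.0
    while x < L:
        d = rng.expovariate(trap_rate)       # drift distance to the next trap
        x += d
        t += d / v                           # time spent drifting freely
        if x < L:                            # carrier got trapped before exiting
            depth = rng.expovariate(1.0 / (K_B * T0))        # trap depth in eV
            t += rng.expovariate(nu0 * math.exp(-depth / (K_B * T)))
    return t

# transit times spread widely when T0 > T (dispersive regime)
times = [transit_time(seed=s) for s in range(20)]
```

Repeating this over many carriers and summing the instantaneous free-carrier motion would reproduce the characteristic power-law decay of the transient current.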
GPUMCD: a new GPU-oriented Monte Carlo dose calculation platform
Hissoiny, Sami; Ozell, Benoît; Després, Philippe
2011-01-01
Purpose: Monte Carlo methods are considered the gold standard for dosimetric computations in radiotherapy. Their execution time is, however, still an obstacle to the routine use of Monte Carlo packages in a clinical setting. To address this problem, a completely new Monte Carlo dose calculation package for voxelized geometries, designed from the ground up for the GPU, is proposed: GPUMCD. Methods: GPUMCD implements a coupled photon-electron Monte Carlo simulation for energies in the range 0.01 MeV to 20 MeV. An analogue simulation of photon interactions is used, and a Class II condensed history method has been implemented for the simulation of electrons. A new GPU random number generator, some divergence reduction methods, as well as other optimization strategies are also described. GPUMCD was run on an NVIDIA GTX480, while single-threaded implementations of EGSnrc and DPM were run on an Intel Core i7 860. Results: Dosimetric results obtained with GPUMCD were compared to EGSnrc. In all but one test case, 98% o...
Energy Technology Data Exchange (ETDEWEB)
Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
Applying graphics processor units to Monte Carlo dose calculation in radiation therapy
Directory of Open Access Journals (Sweden)
Bakhtiari M
2010-01-01
We investigate the potential of using a graphics processor unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massive parallel processing provides a significant acceleration in the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
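The kind of percent-depth-dose estimate described in this abstract can be sketched in a few lines. The toy 1D analogue below (illustrative absorption and scattering coefficients, isotropic scattering) is not the authors' GPU implementation, just the underlying MC idea:

```python
import math
import random

def percent_depth_dose(n_photons=20000, mu_a=0.03, mu_s=0.02,
                       depth=30.0, nbins=30, seed=1):
    """Toy 1D percent-depth-dose: photons take exponential free paths in a
    slab; each interaction is either absorption (energy deposited locally)
    or an isotropic scatter.  Coefficients are in 1/cm and purely
    illustrative."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    dz = depth / nbins
    dose = [0.0] * nbins
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                   # start at the surface, heading in
        while True:
            # distance to the next interaction (1 - u avoids log(0))
            z += cos_t * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0 or z >= depth:
                break                         # photon left the slab
            if rng.random() < mu_a / mu_t:    # absorption: deposit and stop
                dose[int(z / dz)] += 1.0
                break
            cos_t = 2.0 * rng.random() - 1.0  # isotropic scatter
    peak = max(dose)
    return [100.0 * d / peak for d in dose]   # normalize to the maximum

curve = percent_depth_dose()
```

Because each photon history is independent, the outer loop parallelizes trivially, which is exactly what makes the GPU mapping attractive.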
Monte Carlo calculations for design of An accelerator based PGNAA facility
International Nuclear Information System (INIS)
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the code MCNP4B2. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)
Monte Carlo calculations for design of An accelerator based PGNAA facility
Energy Technology Data Exchange (ETDEWEB)
Nagadi, M.M.; Naqvi, A.A. [King Fahd University of Petroleum and Minerals, Center for Applied Physical Sciences, Dhahran (Saudi Arabia); Rehman, Khateeb-ur; Kidwai, S. [King Fahd University of Petroleum and Minerals, Department of Physics, Dhahran (Saudi Arabia)
2002-08-01
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the code MCNP4B2. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)
Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment
Energy Technology Data Exchange (ETDEWEB)
Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx [Instituto de Física, Universidad Nacional Autónoma de México Circuito de la Investigación Científica, Ciudad Universitaria México, D.F. 04520, México (Mexico); Vázquez-López, C. [Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN Ave. IPN 2508, Col. San Pedro Zacatenco, México 07360, DF, México (Mexico)
2015-07-23
The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
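The geometric part of such an efficiency study can be sketched as a simple ray-tracing experiment. The version below is a hypothetical illustration with made-up chamber dimensions; it ignores alpha range, gas pressure, energy cutoffs, and crossings of the lateral wall, and only asks whether an isotropically emitted alpha heads straight toward a detector disk on the base:

```python
import math
import random

def cone_chamber_efficiency(r_base=3.0, r_top=1.0, h=5.0, r_det=1.0,
                            n=20000, seed=4):
    """Fraction of alphas, born uniformly inside a truncated cone (base
    radius r_base at z = 0, top radius r_top at z = h), whose straight-line
    isotropic emission direction crosses a detector disk of radius r_det
    centred on the base.  Dimensions (cm) are illustrative only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        while True:  # rejection-sample a point uniformly inside the cone
            x = rng.uniform(-r_base, r_base)
            y = rng.uniform(-r_base, r_base)
            z = rng.uniform(0.0, h)
            r_here = r_base + (r_top - r_base) * z / h
            if x * x + y * y <= r_here * r_here:
                break
        cz = 2.0 * rng.random() - 1.0          # isotropic emission direction
        if cz >= 0.0:
            continue                           # heading away from the base
        phi = 2.0 * math.pi * rng.random()
        st = math.sqrt(1.0 - cz * cz)
        s = -z / cz                            # distance to the base plane
        xb = x + s * st * math.cos(phi)
        yb = y + s * st * math.sin(phi)
        if xb * xb + yb * yb <= r_det * r_det:
            hits += 1
    return hits / n

eff = cone_chamber_efficiency()
```

Tallying the incidence angle and path length of each hit, instead of just counting, would give the angular and path-length distributions studied in the paper.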
International Nuclear Information System (INIS)
1 - Description of problem or function: FOCUS enables the calculation of any quantity related to neutron transport in reactor or shielding problems, but was especially designed to calculate differential quantities, such as point values at one or more of the space, energy, direction and time variables of quantities like neutron flux, detector response, reaction rate, etc., or averages of such quantities over a small volume of the phase space. Different types of problems can be treated: systems with a fixed neutron source, which may be a mono-directional source located outside the system, and eigenfunction problems in which the neutron source distribution is given by the (unknown) fundamental mode eigenfunction distribution. Using Monte Carlo methods, complex 3-dimensional geometries and detailed cross section information can be treated. Cross section data are derived from ENDF/B, with anisotropic scattering and discrete or continuous inelastic scattering taken into account. Energy is treated as a continuous variable and time dependence may also be included. 2 - Method of solution: A transformed form of the adjoint Boltzmann equation in integral representation is solved for the space, energy, direction and time variables by Monte Carlo methods. Adjoint particles are defined with properties in some respects contrary to those of neutrons. Adjoint particle histories are constructed from which estimates are obtained of the desired quantity. Adjoint cross sections are defined with which the nuclide and reaction type are selected in a collision. The energy after a collision is selected from adjoint energy distributions calculated together with the adjoint cross sections in advance of the actual Monte Carlo calculation. For multiplying systems, successive generations of adjoint particles are obtained, which will die out for subcritical systems with a fixed neutron source and will be kept approximately stationary for eigenfunction problems. Completely arbitrary problems can
Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Bruno, Domenico, E-mail: domenico.bruno@cnr.it [Istituto di Metodologie Inorganiche e dei Plasmi, Consiglio Nazionale delle Ricerche– Via G. Amendola 122, 70125 Bari (Italy); Frezzotti, Aldo, E-mail: aldo.frezzotti@polimi.it; Ghiroldi, Gian Pietro, E-mail: gpghiro@gmail.com [Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano–Via La Masa 34, 20156 Milano (Italy)
2015-05-15
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES), obtained by molecular beams scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
International Nuclear Information System (INIS)
Some early examples of Monte Carlo simulations of radiation transport, prior to the general availability of automatic electronic computers, are recalled. In particular, some results and details are presented of a gamma ray albedo calculation in the early 1950s by Hayward and Hubbell using mechanical desk calculators (+, -, x, / only), in which 67 trajectories were determined using the RAND book of random numbers, with three random numbers at each collision being used to determine (1) the Compton scatter energy loss (and thus the deflection angle), (2) the azimuthal angle and (3) the path length since the previous collision. Successive angles were compounded in three dimensions using a two-dimensional grid with a rotating arm with a slider on it, the device being dubbed an ''Ouija Board''. Survival probabilities along each path segment were determined analytically according to photoelectric absorption exponential attenuation in each of five materials, using a slide rule. For the five substances, H2O, Al, Cu, Sn and Pb, useful number and energy albedo values were obtained for 1 MeV photons incident at 0° (normal), 45° and 80° angles of incidence. Advances in the Monte Carlo method following this and other early-1950s computations, up to the present time with high-speed all-function automatic computers, are briefly reviewed. A brief review of advances in the input cross section data, particularly for photon interactions, over that same period, is included. (orig.)
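The collision loop that Hayward and Hubbell worked through by hand maps directly onto a few lines of modern code. The sketch below keeps their three-random-numbers-per-collision structure but is otherwise a loose re-creation: for brevity it samples the deflection isotropically rather than from the Klein-Nishina distribution, uses a single illustrative attenuation coefficient, and treats photons below 0.01 MeV as absorbed:

```python
import math
import random

MEC2 = 0.511  # electron rest energy in MeV

def number_albedo(e0=1.0, mu=0.07, n=5000, seed=2):
    """Schematic albedo game for photons normally incident on a half-space.
    Each collision consumes three random numbers fixing (1) the deflection
    angle (isotropic here, not Klein-Nishina), (2) the azimuth and (3) the
    path length.  mu is one illustrative total attenuation coefficient in
    1/cm."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n):
        e, z, cz = e0, 0.0, 1.0                 # start at the surface of z > 0
        while e >= 0.01:
            z += cz * (-math.log(1.0 - rng.random()) / mu)   # (3) path length
            if z < 0.0:
                reflected += 1                   # photon re-emerged: albedo event
                break
            cos_t = 2.0 * rng.random() - 1.0     # (1) deflection angle (toy)
            phi = 2.0 * math.pi * rng.random()   # (2) azimuth
            e /= 1.0 + (e / MEC2) * (1.0 - cos_t)            # Compton energy loss
            sz = math.sqrt(max(0.0, 1.0 - cz * cz))
            st = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            cz = cz * cos_t - sz * st * math.cos(phi)        # compound directions
    return reflected / n

albedo = number_albedo()
```

The direction-compounding line plays the role of the rotating-arm "Ouija Board"; everything else is the bookkeeping the original authors did with desk calculators and a slide rule.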
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be preferred for GPU-based MC dose engines over a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Ilic, R D; Stankovic, S J
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for use with patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtaine...
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
International Nuclear Information System (INIS)
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A
2013-01-01
We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...
MONTE CARLO NEUTRINO TRANSPORT THROUGH REMNANT DISKS FROM NEUTRON STAR MERGERS
Energy Technology Data Exchange (ETDEWEB)
Richers, Sherwood; Ott, Christian D. [TAPIR, Mailcode 350-17, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125 (United States); Kasen, Daniel; Fernández, Rodrigo [Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States); O’Connor, Evan [Department of Physics, Campus Code 8202, North Carolina State University, Raleigh, NC 27695 (United States)
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10{sup 46} erg is deposited within 45° of the symmetry axis over 300 ms when a central BH is present. Similarly, 1.9 × 10{sup 48} erg is deposited over 3 s when an HMNS sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
Monte Carlo Studies of Charge Transport Below the Mobility Edge
Jakobsson, Mattias
2012-01-01
Charge transport below the mobility edge, where the charge carriers are hopping between localized electronic states, is the dominant charge transport mechanism in a wide range of disordered materials. This type of incoherent charge transport is fundamentally different from the coherent charge transport in ordered crystalline materials. With the advent of organic electronics, where small organic molecules or polymers replace traditional inorganic semiconductors, the interest for this type of h...
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and a criticality safety analysis were carried out for general material testing reactor (MTR) research reactor interim storage and the relevant transportation cask. During these processes, three major terms were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria, meeting the specified standards. PMID:26720262
Vectorizing the Monte Carlo algorithm for lattice gauge theory calculations on the CDC cyber 205
Barkai, D.; Moriarty, K. J. M.
1982-06-01
Lattice gauge theory is a technique for studying quantum field theory free of divergences. All the Monte Carlo computer calculations up to now have been performed on scalar machines. A technique has been developed for effectively vectorizing this class of Monte Carlo problems. The key to vectorizing is finding groups of points on the space-time lattice which are independent of each other. This requires a particular ordering of points along diagonals. A technique for matrix multiplication is used which enables one to obtain the whole result matrix in one pass. The CDC CYBER 205 is most suitable for this class of problems using random "index-lists" (arising from the ordering algorithm and the use of random numbers) due to the hardware implementation of "GATHER" and "SCATTER" operations performing at streaming rate. A preliminary implementation of this method has executed 5 times faster than on the CDC 7600 system.
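The independence idea behind this diagonal ordering is the same one used today for vectorized checkerboard updates. As a minimal illustration (a 2D Ising sweep rather than a gauge theory, and a two-coloring rather than the paper's diagonal ordering), sites of one color share no nearest neighbours, so a whole sublattice can be updated in one vector operation:

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model, vectorized over the two
    checkerboard sublattices: same-color sites are mutually independent,
    so each sublattice is updated in a single vector operation."""
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        # sum of the four nearest neighbours with periodic boundaries
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                   # energy cost of flipping each site
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        spins = np.where(mask & accept, -spins, spins)
    return spins

rng = np.random.default_rng(0)
spins = rng.choice(np.array([-1, 1]), size=(16, 16))
spins = checkerboard_sweep(spins, beta=0.4, rng=rng)
```

On vector hardware such as the CYBER 205, the gathered sublattice plays the role of the "index-list" of independent points streamed through the pipeline.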
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Chen, Chaobin; Huang, Qunying; Wu, Yican
2005-04-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
Bourva, L C A
1999-01-01
The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
Coe, J P
2015-01-01
We adapt the method of Monte Carlo configuration interaction to calculate core-hole states and use this for the computation of X-ray emission and absorption values. We consider CO, CH$_{4}$, NH$_{3}$, H$_{2}$O, HF, HCN, CH$_{3}$OH, CH$_{3}$F, HCl and NO using a 6-311G** basis. We also look at carbon monoxide with a stretched geometry and discuss the dependence of its results on the cutoff used. The Monte Carlo configuration interaction results are compared with EOM-CCSD values for X-ray emission and with experiment for X-ray absorption. Oscillator strengths are also computed and we quantify the multireference nature of the wavefunctions to suggest when approaches based on a single reference would be expected to be successful.
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...
MCNP: a general Monte Carlo code for neutron and photon transport
International Nuclear Information System (INIS)
The general-purpose Monte Carlo code MCNP can be used for neutron, photon, or coupled neutron-photon transport, including the capability to calculate eigenvalues for critical systems. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces (elliptical tori). Pointwise cross-section data are used. For neutrons, all reactions given in a particular cross-section evaluation are accounted for. Thermal neutrons are described by both the free-gas and S(α,β) models. For photons, the code takes account of incoherent and coherent scattering, the possibility of fluorescent emission following photoelectric absorption, and absorption in pair production with local emission of annihilation radiation. MCNP includes an elaborate, interactive plotting capability that allows the user to view his input geometry to help check for setup errors. Standard features which are available to improve computational efficiency include geometry splitting and Russian roulette, weight cutoff with Russian roulette, correlated sampling, analog capture or capture by weight reduction, the exponential transformation, energy splitting, forced collisions in designated cells, flux estimates at point or ring detectors, deterministically transporting pseudo-particles to designated regions, track-length estimators, source biasing, and several parameter cutoffs. Extensive summary information is provided to help the user better understand the physics and Monte Carlo simulation of his problem. The standard, user-defined output of MCNP includes two-way current as a function of direction across any set of surfaces or surface segments in the problem. Flux across any set of surfaces or surface segments is available. 58 figures, 28 tables
Raystation Monte Carlo application: evaluation of electron calculations with entry obliquity.
Archibald-Heeren, Ben; Liu, Guilin
2016-06-01
To evaluate the accuracy of Raystation's implementation of the VMC++ Monte Carlo algorithm for electrons at varying angles of incidence for low- and medium-energy electron beams. Thirty-two profile and percentage depth dose scans were taken at 5° incident-angle intervals for 6 and 12 MeV and compared to fluences extracted from Raystation calculations using gamma analysis with 2 %/2 mm criteria. Point dose measurements were compared to calculated doses to determine output accuracy. Electron profile and percentage depth dose curves for both energies show good agreement between 0° and 20°, with 29/30 scans above a 90 % pass rate. The average variation between calculated and measured point doses was -0.73 %, with all measurements falling within ±2 % of the calculated dose. Raystation's application of the VMC++ Monte Carlo algorithm provides clinically acceptable accuracy for low- and medium-energy electron dosimetry at incident angles up to 20° for Varian Clinac iX models. PMID:27052438
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
International Nuclear Information System (INIS)
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Iandola, F N; O' Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
International Nuclear Information System (INIS)
In the field of shielding, the requirement for radiation transport calculations under severe conditions, characterized by irreducible three-dimensional geometries, has increased the use of the Monte Carlo method, which has proved to be the only rigorous and appropriate calculational method in such conditions. However, further optimization efforts are still necessary to render the technique practically efficient, despite recent improvements in the Monte Carlo codes, the progress made in the field of computers, and the availability of accurate nuclear data. Moreover, personal experience acquired in the field and mastery of sophisticated calculation procedures are of the utmost importance. The aim of the work which has been carried out is the gathering of all the elements and features necessary for an efficient utilization of the Monte Carlo method in connection with shielding problems. The study of the general aspects of the method and of the exploitation techniques of the MORSE code, which has proved to be one of the most comprehensive Monte Carlo codes, led to a successful analysis of an actual case. In fact, the severe conditions and difficulties met were overcome using such a stochastic simulation code. Finally, a critical comparison between calculated and high-accuracy experimental results has allowed the final confirmation of the methodology used by us
New electron multiple scattering distributions for Monte Carlo transport simulation
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))
1994-10-01
New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use within the conditions of validity of the Moliere theory. The second distribution applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas; the introduction of some parameters allows imposition of the correct value of the first moment. Only positive, analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented and some Monte Carlo simulations, as benchmarks, are carried out. ((orig.))
Inverse treatment planning for radiation therapy based on fast Monte Carlo dose calculation
International Nuclear Information System (INIS)
An inverse treatment planning system based on fast Monte Carlo (MC) dose calculation is presented. It allows optimisation of intensity-modulated dose distributions in 15 to 60 minutes on present-day personal computers. If a multi-processor machine is available, parallel simulation of particle histories is also possible, leading to further reductions in calculation time. The optimisation process is divided into two stages. The first stage results in fluence profiles based on pencil beam (PB) dose calculation. The second stage starts with MC verification and post-optimisation of the PB dose and fluence distributions. Because of the potential to accurately model beam modifiers, MC-based inverse planning systems are able to optimise compensator thicknesses and leaf trajectories instead of intensity profiles only. The corresponding techniques, whose implementation is the subject of future work, are also presented here. (orig.)
Many-body effects on graphene conductivity: Quantum Monte Carlo calculations
Boyda, D. L.; Braguta, V. V.; Katsnelson, M. I.; Ulybyshev, M. V.
2016-08-01
Optical conductivity of graphene is studied using quantum Monte Carlo calculations. We start from a Euclidean current-current correlator and extract σ(ω) from Green-Kubo relations using the Backus-Gilbert method. Calculations were performed both for long-range interactions and taking into account only the contact term. In both cases we vary the interaction strength and study its influence on optical conductivity. We compare our results with previous theoretical calculations, choosing ω ≈ κ, thus working in the region of the plateau in σ(ω) which corresponds to the optical conductivity of Dirac quasiparticles. No dependence of optical conductivity on interaction strength is observed unless we approach the antiferromagnetic phase transition in the case of an artificially enhanced contact term. Our results strongly support previous theoretical studies that claimed very weak renormalization of graphene conductivity.
A generalized albedo option for forward and adjoint Monte Carlo calculations
International Nuclear Information System (INIS)
The advisability of using the albedo procedure for the Monte Carlo solution of deep-penetration shielding problems which have ducts and other penetrations is investigated. It is generally accepted that the use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations; however, the accuracy of these results may be unacceptable because of information lost during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. The major modifications include an option to save for further use information that would be lost at the albedo event, an option to displace the emergent point during an albedo event, and an option to read spatially dependent albedo data for both forward and adjoint calculations, which includes the emergent point as a new random variable to be selected during an albedo reflection event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos is derived
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Gomes, I.C.; Stevens, P.N. (Tennessee Univ., Knoxville, TN (United States))
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs.
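The albedo event at the heart of the two abstracts above can be illustrated with a minimal sketch. This is not the MORSE/STORM implementation; the function name is hypothetical, the scalar albedo and cosine-law emission are simplifications of the angle- and energy-dependent differential albedo data the codes actually use:

```python
import math
import random

def albedo_event(weight, albedo, rng=random):
    """Replace full transport inside a wall by an albedo reflection:
    the particle re-emerges at the boundary with its statistical weight
    scaled by the albedo, and a new outgoing direction is sampled
    (here from a cosine law). Illustrative only."""
    new_weight = weight * albedo          # weight reduction preserves the mean
    mu = math.sqrt(rng.random())          # polar cosine, cosine-law emission
    phi = 2.0 * math.pi * rng.random()    # azimuth, uniform in [0, 2*pi)
    return new_weight, (mu, phi)
```

Saving the incident energy and position at this point, and treating the emergent point as an additional random variable, corresponds to the modifications the MORSE/BREESE study describes.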
Monte Carlo calculation of the energy deposited in the KASCADE GRANDE detectors
International Nuclear Information System (INIS)
The energy deposited by protons, electrons and positrons in the KASCADE GRANDE detectors is calculated with a simple and fast Monte Carlo method. The KASCADE GRANDE experiment (Forschungszentrum Karlsruhe, Germany), based on an array of plastic scintillation detectors, aims to study the energy spectrum of the primary cosmic rays around and above the 'knee' region of the spectrum. The reconstruction of the primary spectrum is achieved by comparing the data collected by the detectors with simulations of the development of the extensive air shower initiated by the primary particle, combined with detailed simulations of the detector response. The simulation of the air shower development is carried out with the CORSIKA Monte Carlo code. The output file produced by CORSIKA is further processed with a program that estimates the energy deposited in the detectors by the particles of the shower. The standard method to calculate the energy deposit in the detectors is based on the Geant package from the CERN library. A new method that calculates the energy deposit by fitting the Geant-based distributions with simpler functions is proposed in this work. In comparison with the method based on the Geant package this method is substantially faster. The time saving is important because the number of particles involved is large. (author)
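The fit-and-sample speedup described above can be sketched in miniature. This is a hedged illustration, not the paper's actual fitting functions: the synthetic "Geant" samples and the Gaussian shape are assumptions (a Landau-like shape would be more realistic for thin scintillators), but the idea of replacing repeated detailed simulation with draws from a simple fitted function is the same:

```python
import random
import statistics

# Stand-in for energy deposits (MeV, hypothetical numbers) that a
# detailed Geant-based simulation would produce for one detector.
random.seed(1)
geant_samples = [max(0.0, random.gauss(10.0, 2.0)) for _ in range(10000)]

# "Fit" the distribution by its first two moments; drawing from the
# fitted shape is far cheaper than re-running the detailed simulation.
mu = statistics.fmean(geant_samples)
sigma = statistics.stdev(geant_samples)

def fast_energy_deposit(rng=random):
    """Sample one energy deposit from the fitted (Gaussian) shape."""
    return max(0.0, rng.gauss(mu, sigma))
```

Because each shower involves a very large number of particles, even a modest per-particle saving of this kind dominates the total processing time.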
Measurement and Monte Carlo Calculation of Waste Drum Filled With Radioactive Aqueous Solution
Institute of Scientific and Technical Information of China (English)
XU; Li-jun; ZHANG; Wei-dong; YE; Hong-sheng; LIN; Min; CHEN; Xi-lin; GUO; Xiao-qing
2012-01-01
Theoretically, the best calibrating source for a gamma scan system (SGS) is a waste drum filled with a uniform distribution of medium and radioactive nuclides. However, in reality, waste drums are usually full of solid substances, which are difficult to prepare in a completely uniformly distributed state. To reduce the measurement uncertainty of the radioactivity of waste drums prepared using the shell-source method, a waste drum filled with radioactive aqueous solution was prepared. Its radioactivity was measured by an SGS device and calculated using the Monte Carlo method to verify the exact geometric model, which
Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
2016-01-01
Concentrating on zero-temperature quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite-size extrapolations of energies to the thermodynamic limit based on one- and two-body correlation functions. We introduce new effective procedures, such as splitting the potential and wavefunction into long- and short-range functions to simplify the method, and we discuss how to treat backflow wavefunctions. We then explicitly test the accuracy of our method for correcting finite-size errors on example hydrogen and helium many-body systems and show that the finite-size bias can be drastically reduced even for small systems.
Monte Carlo 20 and 45 MeV Bremsstrahlung and dose-reduction calculations
International Nuclear Information System (INIS)
The SANDYL electron-photon coupled Monte Carlo code has been compared with previously published experimental bremsstrahlung data at 20.9 MeV electron energy. The code was then used to calculate forward-directed spectra, angular distributions, and dose-reduction factors for three practical configurations: 20 MeV electrons incident on 1 mm of W + 59 mm of Be, 45 MeV electrons on 1 mm of W, and 45 MeV electrons on 1 mm of W + 147 mm of Be. The application of these results to flash radiography is discussed. 7 references, 12 figures, 1 table
Load balancing in highly parallel processing of Monte Carlo code for particle transport
Energy Technology Data Exchange (ETDEWEB)
Higuchi, Kenji; Takemiya, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Kawasaki, Takuji [Fuji Research Institute Corporation, Tokyo (Japan)
2001-01-01
In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and in practice difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio for 128 processors remains at only about one hundred on the test bed used for performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance with static load balancing, especially in neutron transport problems, and that a load-balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in nearly a fifteen percent reduction in execution time. (author)
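The dynamic reassignment step in the abstract above can be sketched as a simple proportional rule. This is a minimal illustration under assumed inputs (measured per-processor speeds in histories per second), not the cost model of the paper, which also weighs communication costs:

```python
def balance_particles(total, speeds):
    """Assign particle histories in proportion to measured processor
    speeds, so that fast and slow processors finish at about the same
    time. `speeds` are hypothetical histories-per-second measurements
    taken from the previous batch."""
    total_speed = sum(speeds)
    counts = [int(total * s / total_speed) for s in speeds]
    counts[0] += total - sum(counts)   # hand integer remainder to rank 0
    return counts

print(balance_particles(1000, [1.0, 2.0, 1.0]))  # -> [250, 500, 250]
```

In the paper's method, the assignment minimizes the sum of computational and communication costs rather than just equalizing compute time, but the periodic re-division of histories follows the same pattern.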
Load balancing in highly parallel processing of Monte Carlo code for particle transport
International Nuclear Information System (INIS)
In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and in practice difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio for 128 processors remains at only about one hundred on the test bed used for performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance with static load balancing, especially in neutron transport problems, and that a load-balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in nearly a fifteen percent reduction in execution time. (author)
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
International Nuclear Information System (INIS)
Purpose: Monte Carlo simulation on GPU has experienced rapid advancement over the past few years, and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package on an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
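The tree-based body search in the Methods section above can be sketched on the CPU side. This is a hedged, hypothetical sketch (names and the two-sphere example are invented, and the real package evaluates this on the GPU with shared-memory parameters), showing how quadric surface senses and a body tree locate the body containing a point:

```python
# A surface is a quadric f(x,y,z) = Ax^2 + By^2 + Cz^2 + Dx + Ey + Fz + G;
# a body is the region where each of its surfaces has a prescribed sense.

def quadric(coeffs, p):
    A, B, C, D, E, F, G = coeffs
    x, y, z = p
    return A*x*x + B*y*y + C*z*z + D*x + E*y + F*z + G

class Body:
    def __init__(self, name, surfaces, children=()):
        self.name = name
        self.surfaces = surfaces        # list of (coeffs, sense) pairs
        self.children = list(children)  # bodies nested inside this one

    def contains(self, p):
        return all(sense * quadric(c, p) <= 0 for c, sense in self.surfaces)

def locate(body, p):
    """Descend the body tree to the innermost body containing point p."""
    if not body.contains(p):
        return None
    for child in body.children:
        found = locate(child, p)
        if found is not None:
            return found
    return body

# Hypothetical example: a unit sphere nested inside a radius-10 sphere.
inner = Body("sphere_r1", [((1, 1, 1, 0, 0, 0, -1.0), +1)])
world = Body("sphere_r10", [((1, 1, 1, 0, 0, 0, -100.0), +1)], [inner])
print(locate(world, (0.2, 0.0, 0.0)).name)  # -> sphere_r1
print(locate(world, (5.0, 0.0, 0.0)).name)  # -> sphere_r10
```

Descending the tree avoids testing every body against every particle position, which is what makes the lookup fast enough for per-step use in transport.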
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
Energy Technology Data Exchange (ETDEWEB)
Chi, Y; Tian, Z; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2015-06-15
Purpose: Monte Carlo simulation on GPU has experienced rapid advancement over the past few years, and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package on an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
International Nuclear Information System (INIS)
three dimensional Monte Carlo calculation is required for the shielding calculation in the tokamak-type DT nuclear fusion reactor with many penetrations. 2) Radiation streaming through the slit between the blanket modules is described in Chapter 3, that through the small circular duct in the blanket modules in Chapter 4, and that through the large opening duct in the vacuum vessel in Chapter 5. The nuclear properties of the blanket, the vacuum vessel and the TF coil are systematically calculated for the various configurations. Based on the obtained results, analytical formulas for these nuclear properties are deduced, and a guideline for the shielding design is proposed. 3) In Chapter 6, in order to evaluate the decay gamma-ray dose rate around the duct due to radiation streaming through the large opening duct in the vacuum vessel, an evaluation method is proposed using a decay gamma-ray Monte Carlo calculation. By replacing the prompt gamma-ray spectrum with the decay spectrum in the Monte Carlo code, the decay gamma-ray Monte Carlo transport calculation is conducted. An effective variance reduction method is developed for the decay gamma-ray Monte Carlo calculation over the whole tokamak region, drastically reducing the calculation time. Using this method, the shielding calculation is conducted for the ITER duct penetration, and the effectiveness of this method is demonstrated. (author)
Monte Carlo model of neutral-particle transport in diverted plasmas
Energy Technology Data Exchange (ETDEWEB)
Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.
1981-11-01
The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
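The splitting-with-Russian-roulette scheme mentioned in the abstract can be illustrated generically. The sketch below is not the Heifetz et al. code; it shows one common weight-preserving formulation of population control at an importance boundary, with the `importance_ratio` parameter introduced here for illustration:

```python
import random

def split_or_roulette(weight, importance_ratio, rng=random.random):
    """Weight-preserving population control at an importance boundary.

    importance_ratio > 1: split into ~ratio copies, each of weight
    weight/ratio. importance_ratio < 1: Russian roulette; survive with
    probability ratio, with weight boosted to weight/ratio. In both
    branches the expected total weight equals the input weight.
    Returns the list of surviving weights."""
    r = importance_ratio
    if r >= 1.0:
        n = int(r)
        if rng() < r - n:           # stochastic rounding of the fraction
            n += 1
        return [weight / r] * n
    if rng() < r:                   # rare survivor carries extra weight
        return [weight / r]
    return []

# Sanity check: expected total weight is conserved by both branches.
random.seed(1)
avg_split = sum(sum(split_or_roulette(1.0, 2.5)) for _ in range(20000)) / 20000
avg_roul = sum(sum(split_or_roulette(1.0, 0.3)) for _ in range(20000)) / 20000
```

The point of the scheme is that the mean of any tally is unchanged while the particle population is concentrated in important regions, which is what reduces the variance.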
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by MC2 appears only as a final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the propagation degree of the stochastic error in MC1. The brute force technique is a straightforward method to estimate the true uncertainty, but obtaining reliable results with it is costly. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty; however, it has limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method improves applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
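The brute-force propagation that the abstract describes, rerunning the whole MC1→MC2 chain many times and taking the spread of the final answers, can be sketched with toy stand-ins for the two transport steps. The functions below are illustrative surrogates invented for this sketch, not actual transport solvers:

```python
import random
import statistics

def mc1_neutron_flux(seed, histories=2000):
    """Toy stand-in for the neutron-transport step (MC1): returns a
    'flux' estimate whose stochastic error shrinks like 1/sqrt(N)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(histories)) / histories  # true mean 0.5

def mc2_decay_dose(flux, seed, histories=2000):
    """Toy stand-in for the decay-photon step (MC2): a dose tally
    proportional to the MC1 flux, with its own stochastic error."""
    rng = random.Random(seed)
    return flux * sum(2.0 * rng.random() for _ in range(histories)) / histories

# Brute force: rerun the whole two-step chain with independent seeds;
# the spread of the final doses is the *true* combined uncertainty,
# i.e. it contains the MC1 error that a single MC2 run would miss.
doses = [mc2_decay_dose(mc1_neutron_flux(2 * i), 2 * i + 1) for i in range(200)]
dose_mean = statistics.mean(doses)
dose_sigma = statistics.stdev(doses)
```

The cost the abstract objects to is visible here: every repetition pays for both transport steps, which is exactly what the proposed strategy avoids.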
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
) resources, hence resulting in socio-economic losses. Along with technical and decision-process related issues, such inaccuracy also originates from transport models’ inherent uncertainty, which in turn originates from the complexity of the systems generating both transport supply (e.g. services) … to represent the complex system in a deterministic way. By modelling complex systems, transport models are subject to uncertainty. The main consequence of such uncertainty is that point estimates of modelled traffic flows, and their derived measures, only represent one of the possible outputs generated … investigated the effects of uncertainty in the volume-delay function parameters used in the Danish National Transport Model. The results showed that some links in the modelled network have high sensitivity to the variability in the function parameters. In particular, the affected links mainly refer to short …
Energy Technology Data Exchange (ETDEWEB)
Cullen, D E
2003-06-06
TART 2002 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART 2002 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART 2002 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART 2002 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART 2002 and its data files.
Energy Technology Data Exchange (ETDEWEB)
Cullen, D.E
2000-11-22
TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero with a standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is thus fully achievable using only the tissue density and hydrogen content derived from the CT images.
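Why the hydrogen-plus-oxygen substitution works at megavoltage energies can be shown through the electron density (mass-fraction-weighted Z/A), which governs the dominant Compton interaction. The sketch below uses a generic soft-tissue-like composition chosen for illustration; neither the mixture nor the numerical claim is taken from the paper:

```python
# Z/A per element: electrons per unit mass (relative). Values from
# standard atomic weights; hydrogen is the outlier at ~1.
Z_OVER_A = {'H': 1 / 1.008, 'C': 6 / 12.011, 'N': 7 / 14.007, 'O': 8 / 15.999}

def electrons_per_gram(composition):
    """Relative electron density of a mixture (mass-fraction weighted
    Z/A); Compton scattering, dominant at MV energies, scales with it."""
    return sum(w * Z_OVER_A[el] for el, w in composition.items())

def hydrogen_oxygen_scheme(composition):
    """The substitution scheme of the abstract: keep the hydrogen mass
    fraction, replace every other element by oxygen."""
    w_h = composition.get('H', 0.0)
    return {'H': w_h, 'O': 1.0 - w_h}

tissue = {'H': 0.102, 'C': 0.143, 'N': 0.034, 'O': 0.721}  # illustrative
substituted = hydrogen_oxygen_scheme(tissue)
e_full = electrons_per_gram(tissue)
e_sub = electrons_per_gram(substituted)
# e_full and e_sub agree to ~0.01% because C, N and O all have
# Z/A close to 0.5 while hydrogen's is close to 1.
```

Because every common tissue element except hydrogen has nearly the same Z/A, only the hydrogen fraction moves the electron density appreciably, which is the physical content of the abstract's conclusion.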
Quantum Monte Carlo calculation of the binding energy of the beryllium dimer
Energy Technology Data Exchange (ETDEWEB)
Deible, Michael J.; Kessler, Melody; Gasperich, Kevin E.; Jordan, Kenneth D. [Department of Chemistry, University of Pittsburgh, Pittsburgh, Pennsylvania 15260 (United States)
2015-08-28
The accurate calculation of the binding energy of the beryllium dimer is a challenging theoretical problem. In this study, the binding energy of Be{sub 2} is calculated using the diffusion Monte Carlo (DMC) method, using single Slater determinant and multiconfigurational trial functions. DMC calculations using single-determinant trial wave functions of orbitals obtained from density functional theory calculations overestimate the binding energy, while DMC calculations using Hartree-Fock or CAS(4,8) complete active space trial functions significantly underestimate the binding energy. In order to obtain an accurate value of the binding energy of Be{sub 2} from DMC calculations, it is necessary to employ trial functions that include excitations outside the valence space. Our best estimate DMC result for the binding energy of Be{sub 2}, obtained by using configuration interaction trial functions and extrapolating in the threshold for the configurations retained in the trial function, is 908 cm{sup −1}, only slightly below the 935 cm{sup −1} value derived from experiment.
Energy Technology Data Exchange (ETDEWEB)
Ghassoun, Jillali; Jehoauni, Abdellatif [Nuclear physics and Techniques Lab., Faculty of Science, Semlalia, Marrakech (Morocco)
2000-01-01
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The truncation order N must be large to obtain a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce time without degrading the estimation quality. In previous works, only weakly diffusing media were considered in order to obtain rapid convergence, which permitted truncating the Neumann series after about 20 terms. In most practical shields, however, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms yields a poor flux estimate, so higher orders are needed. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; good agreement is obtained, together with a clear acceleration of the convergence of the calculations. (author)
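The bias introduced by truncating the Neumann series too early in a highly scattering medium can be reproduced with a toy analogue. This is an illustrative sketch of the truncation effect only, not the authors' conditional Monte Carlo method; the scattering probability `ps` and scoring rule are invented for the demonstration:

```python
import random

def truncated_flux(ps, order, walks=50000, seed=7):
    """Toy analogue of truncating the Neumann series: analog random
    walks scatter with probability ps and are absorbed otherwise;
    scoring one unit per event, with chains cut off after `order`
    collisions, gives an expected score of sum_{k=0..order} ps**k."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        score, k = 1.0, 0                    # k = 0: the source term
        while k < order and rng.random() < ps:
            score += 1.0                     # one more collision term
            k += 1
        total += score
    return total / walks

# For ps = 0.9 the full series sums to 1/(1 - ps) = 10. Cutting it at
# 20 terms, as in the weakly diffusing case, biases the estimate low.
f20 = truncated_flux(0.9, 20)
f200 = truncated_flux(0.9, 200)
```

With ps = 0.9 the 20-term estimate converges to about 8.9 rather than 10, which is the quantitative version of the abstract's point that strongly scattering shields need much higher truncation orders.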
Noblet, C.; Chiavassa, S.; Smekens, F.; Sarrut, D.; Passal, V.; Suhard, J.; Lisbona, A.; Paris, F.; Delpon, G.
2016-05-01
In preclinical studies, the absorbed dose calculation accuracy in small animals is fundamental to reliably investigate and understand observed biological effects. This work investigated the use of the split exponential track length estimator (seTLE), a new kerma-based Monte Carlo dose calculation method for preclinical radiotherapy using a small animal precision micro irradiator, the X-RAD 225Cx. Monte Carlo modelling of the irradiator with GATE/GEANT4 was extensively evaluated by comparing measurements and simulations for half-value layer, percent depth dose, off-axis profiles and output factors in water and water-equivalent material for seven circular fields, from 20 mm down to 1 mm in diameter. Simulated and measured dose distributions in cylinders of water obtained for a 360° arc were also compared using dose, distance-to-agreement and gamma-index maps. Simulations and measurements agreed within 3% for all static beam configurations, with uncertainties estimated at 1% for the simulation and 3% for the measurements. Distance-to-agreement accuracy was better than 0.14 mm. For the arc irradiations, gamma-index maps of 2D dose distributions showed that the success rate was higher than 98%, except for the 0.1 cm collimator (92%). Using the seTLE method, MC simulations compute 3D dose distributions within minutes for realistic beam configurations with a clinically acceptable accuracy for beam diameters as small as 1 mm.
International Nuclear Information System (INIS)
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles, this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to be used to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig
DEFF Research Database (Denmark)
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael;
2015-01-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data...
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert
2006-09-01
For Pinnacle-CS and Helax-CC, deviations from MCDE above 5% were found within the OARs: within the lungs for two (6 MV) and six (18 MV) patients for Pinnacle-CS, and within other OARs for two patients for Helax-CC (for Dmax of the heart and D33 of the expanded esophagus) but only for 6 MV. For one patient, all four algorithms were used to recompute the dose after replacing all computed tomography voxels within the patient's skin contour by water. This made all differences above 5% between MCDE and the other dose calculation algorithms disappear. Thus, the observed deviations mainly arose from differences in particle transport modeling within the lungs, and the commissioning of the algorithms was adequately performed (or the commissioning was less important for this type of treatment). In conclusion, not one pair of the dose calculation algorithms we investigated could provide results that were consistent within 5% for all 10 patients for the set of clinically relevant dose-volume indices studied. As the results from both CS algorithms differed significantly, care should be taken when evaluating treatment plans as the choice of dose calculation algorithm may influence clinical results. Full Monte Carlo provides a great benchmarking tool for evaluating the performance of other algorithms for patient dose computations. PMID:17022207
Auxiliary-field quantum Monte Carlo calculations of the molybdenum dimer.
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2016-06-28
Chemical accuracy is difficult to achieve for systems with transition metal atoms. Third row transition metal atoms are particularly challenging due to strong electron-electron correlation in localized d-orbitals. The Cr2 molecule is an outstanding example, which we previously treated with highly accurate auxiliary-field quantum Monte Carlo (AFQMC) calculations [W. Purwanto et al., J. Chem. Phys. 142, 064302 (2015)]. Somewhat surprisingly, computational description of the isoelectronic Mo2 dimer has also, to date, been scattered and less than satisfactory. We present high-level theoretical benchmarks of the Mo2 singlet ground state (X¹Σg⁺) and first triplet excited state (a³Σu⁺), using the phaseless AFQMC calculations. Extrapolation to the complete basis set limit is performed. Excellent agreement with experimental spectroscopic constants is obtained. We also present a comparison of the correlation effects in Cr2 and Mo2. PMID:27369514
International Nuclear Information System (INIS)
Local in- and ex-core responses are calculated by employing variance reduction within the Monte Carlo source-iteration scheme. This is done by employing the Direct Statistical Approach to search for an optimum trade-off between sampling the local response and sampling the fundamental mode. Superhistories are employed to improve the trade-off point. Realistic test problems are run that show a good agreement between the predicted and actual calculational figures of merit. For the sample problems treated, gains in efficiency over analog calculation (i.e. without variance reduction) range from 1-2 orders of magnitude for in-core responses to many orders of magnitude for ex-core responses. An alternative way of finding the trade-off point using the classic adjoint flux formalism showed substantial differences for one of the problems. (author)
International Nuclear Information System (INIS)
KORPUS is an irradiation facility located at the lateral core surface of the 6 MW experimental reactor RBT-6 in Dimitrovgrad. In this work the KORPUS irradiation experiment has been used to demonstrate the capability of the pressure vessel dosimetry methodology developed in Rossendorf to solve these problems. At the same time the experiments were used to test recent improvements of this methodology including a new procedure for treatment of elastic scattering in the Monte Carlo code TRAMO and a new multispectrum version of the adjustment code. By means of a series of calculations the influence of model and data approximations were investigated aiming at an evaluation of the uncertainties of the calculations. Further, uncertainty investigations were carried out in connection with spectrum adjustment resulting in covariances of spectra, measured reaction rates and fluence integrals. (orig.)
International Nuclear Information System (INIS)
A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements
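The exponential transformation described above can be sketched in its simplest guise, uncollided transmission through a thick absorber. This is a hypothetical one-dimensional illustration of the textbook technique, not the code of the abstract; slab thickness and cross sections are invented:

```python
import math
import random

def transmission(sigma, thickness, sigma_star, n=400000, seed=3):
    """Exponentially transformed estimate of the uncollided
    transmission exp(-sigma*thickness) through a slab. Flight
    distances are sampled from the 'stretched' density
    sigma_star*exp(-sigma_star*x); each crossing scores the
    likelihood ratio of the true to the stretched density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_star)
        if x > thickness:
            total += (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * x)
    return total / n

# sigma = 1, thickness = 20 mean free paths: the true answer is
# exp(-20) ~ 2e-9, so an analog simulation (sigma_star = sigma)
# would essentially never score; the stretched density crosses
# the slab in roughly 14% of histories.
est = transmission(1.0, 20.0, 0.1)
```

Stretching the flight-length distribution in the direction of penetration, while compensating with weights, is exactly how the method keeps the statistical errors favorable for the 500 cm concrete shields mentioned above.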
International Nuclear Information System (INIS)
The perturbation source method is used within the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between the subtracted estimates even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered.
Energy Technology Data Exchange (ETDEWEB)
Biondo, Elliott D [ORNL; Ibrahim, Ahmad M [ORNL; Mosher, Scott W [ORNL; Grove, Robert E [ORNL
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
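The CADIS prescription at the heart of such tools can be sketched in discretized form. This is a hypothetical illustration of the standard CADIS formulas on a few cells, not ADVANTG's implementation:

```python
def cadis_parameters(source, adjoint):
    """Sketch of the standard CADIS prescription on a discretized
    problem: given forward source strengths q_i and adjoint fluxes
    (importances) phi_i per cell, return the biased source
    q_i*phi_i/R and the weight-window targets R/phi_i, where
    R = sum_i q_i*phi_i estimates the detector response."""
    R = sum(q * a for q, a in zip(source, adjoint))
    biased_source = [q * a / R for q, a in zip(source, adjoint)]
    weights = [R / a if a > 0.0 else float('inf') for a in adjoint]
    return biased_source, weights

# Three cells: all particles born in cell 0; cell 2 is the most
# important, so particles there get the lowest weight target
# (i.e. the most splitting).
biased, targets = cadis_parameters([1.0, 0.0, 0.0], [0.5, 1.0, 2.0])
```

Particles are thus born and transported with weights inversely proportional to the deterministic importance, which is what makes the MC variance roughly uniform over the problem.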
Françoise Benz
2006-01-01
2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4 The use of Monte Carlo radiation transport codes in radiation physics and dosimetry F. Salvat Gavalda, Univ. de Barcelona; A. FERRARI, CERN-AB; M. SILARI, CERN-SC Lecture 1. Transport and interaction of electromagnetic radiation F. Salvat Gavalda, Univ. de Barcelona Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...
International Nuclear Information System (INIS)
The electron drift velocity, W, and the first Townsend ionization coefficient, α, are calculated for nitrogen over a range of E/P0, where E/P0 is the electric field to pressure ratio and the pressure P0 is reduced to 0°C. The spherical harmonic expansion calculation predicts α values which are 50-100% larger than those predicted by the Monte Carlo calculation. The predicted drift velocities agree to within 10-20%. (Auth.)
Energy Technology Data Exchange (ETDEWEB)
Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)
2012-05-15
This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n{sub 0} are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n{sub 0} the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H{sub 2} under the influence of an electric field, the spatially dependent positron transport properties such as number of positrons, average energy and velocity and spatially resolved rate for Ps formation are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.
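The bulk/flux distinction discussed above can be illustrated with a deliberately crude swarm model in which positronium formation preferentially removes fast particles. All rates, distributions and parameter values below are invented for the illustration; this is not the physics of positrons in H2, only the mechanism by which selective loss separates the two velocities:

```python
import random

def positron_swarm(n=20000, loss=0.5, t_end=2.0, dt=0.02, seed=11):
    """Crude swarm model for the bulk/flux distinction. Each particle
    keeps a fixed speed v ~ U(0, 2); 'positronium formation' removes
    it with rate loss*v, so fast (leading) particles vanish first.
      flux = mean speed of the survivors,
      bulk = time derivative of the survivors' mean position.
    Preferential loss at the swarm front drags bulk below flux."""
    rng = random.Random(seed)
    parts = [[0.0, rng.uniform(0.0, 2.0)] for _ in range(n)]  # [x, v]
    steps = int(round(t_end / dt))
    mean_x_earlier = 0.0
    for step in range(steps):
        parts = [p for p in parts if rng.random() >= loss * p[1] * dt]
        for p in parts:
            p[0] += p[1] * dt
        if step == steps - 11:              # 10 steps before the end
            mean_x_earlier = sum(p[0] for p in parts) / len(parts)
    mean_x_final = sum(p[0] for p in parts) / len(parts)
    flux = sum(p[1] for p in parts) / len(parts)
    bulk = (mean_x_final - mean_x_earlier) / (10 * dt)
    return flux, bulk

flux_v, bulk_v = positron_swarm()
```

Because the removed particles sit at the leading edge, the centroid of the surviving swarm advances more slowly than the survivors themselves move, so the bulk drift velocity falls below the flux value, the same mechanism invoked in the abstract for the NDC effect.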
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Development of a software package for solid-angle calculations using the Monte Carlo method
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed with the Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface with a visualization function built on OpenGL. One advantage of the package is that it can calculate the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source without any difficulty. The results obtained from the package were compared with those from previous studies and with Geant4 calculations; the comparison shows that the package produces accurate solid-angle values at a greater computation speed than Geant4.
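The core of such a calculation can be reduced to a few lines: sample emission directions isotropically from the source and score the fraction that strike the detector face. The sketch below handles only the simplest case, a point source on the axis of a disk detector, and uses plain rejection counting rather than the paper's variance reduction technique; the function name and parameters are illustrative.

```python
import math
import random

def mc_solid_angle(det_radius, det_distance, n=1_000_000, seed=1):
    """Monte Carlo estimate of the solid angle subtended at a point source
    (at the origin) by a coaxial circular disk of radius det_radius in the
    plane z = det_distance: sample isotropic directions and scale the hit
    fraction by the full sphere, 4*pi steradians."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = 2.0 * rng.random() - 1.0   # isotropic: cos(theta) ~ U(-1, 1)
        if cos_t <= 0.0:
            continue                        # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # radius at which the ray pierces the detector plane
        if det_distance * sin_t / cos_t <= det_radius:
            hits += 1
    return 4.0 * math.pi * hits / n
```

For a disk of radius R at distance d, the analytic value 2π(1 − d/√(d² + R²)) provides a direct check on the estimator.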
Monte Carlo calculations of the depth-dose distribution in skin contaminated by hot particles
Energy Technology Data Exchange (ETDEWEB)
Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1991-01-01
Accurate computer programs were developed in order to calculate the spatial distribution of absorbed radiation doses in the skin near high-activity particles ('hot particles'). With a view to ascertaining the reliability of the codes, the transport of beta particles was simulated in a complex configuration used for dosimetric measurements: spherical {sup 60}Co sources of 10-1000 {mu}m fastened to an aluminium support with a tissue-equivalent adhesive and overlaid with a 10 {mu}m thick aluminium foil. Behind it, an infinite polystyrene medium including an extrapolation chamber was assumed. The exact energy spectrum of the beta emission was sampled. Production and transport of secondary knock-on electrons were also simulated. Energy depositions in polystyrene were calculated with high spatial resolution. Finally, depth-dose distributions were calculated for hot particles placed on the skin. The calculations will be continued for other radionuclides and for a configuration suited to TLD measurements. (author).
Fernandes, A C; Gonçalves, I C; Santos, J; Cardoso, J; Santos, L; Ferro Carvalho, A; Marques, J G; Kling, A; Ramalho, A J G; Osvay, M
2006-01-01
This work presents an extensive study on Monte Carlo radiation transport simulation and thermoluminescent (TL) dosimetry for characterising mixed radiation fields (neutrons and photons) occurring in nuclear reactors. The feasibility of these methods is investigated for radiation fields at various locations of the Portuguese Research Reactor (RPI). The performance of the approaches developed in this work is compared with dosimetric techniques already in use at RPI. The Monte Carlo MCNP-4C code was used for detailed modelling of the reactor core, the fast neutron beam and the thermal column of RPI. Simulations using these models reproduce the energy and spatial distributions of the neutron field very well (agreement better than 80%). In the case of the photon field, the agreement improves with decreasing intensity of the component related to fission and activation products. (7)LiF:Mg,Ti, (7)LiF:Mg,Cu,P and Al(2)O(3):Mg,Y TL detectors (TLDs) with low neutron sensitivity are able to determine photon doses and dose profiles with high spatial resolution. On the other hand, (nat)LiF:Mg,Ti TLDs with increased neutron sensitivity show a remarkable loss of sensitivity and a high supralinearity in high-intensity fields, hampering their application at nuclear reactors.
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for calculating both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotrons for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including the energy selection system. The simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of the target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. The neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation
International Nuclear Information System (INIS)
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed with the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed with the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation, and the dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For both 3DCRT and IMRT plans, the mean dose differences for the GTV between CCC and MC increased with decreasing GTV volume, and the differences were higher for IMRT than for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm3, the mean doses calculated by CCC and MC showed almost no difference, whereas PBC showed large deviations from MC. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, while the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately
Modelling photon transport in non-uniform media for SPECT with a vectorized Monte Carlo code.
Smith, M F
1993-10-01
A vectorized Monte Carlo code has been developed for modelling photon transport in non-uniform media for single-photon-emission computed tomography (SPECT). The code is designed to compute photon detection kernels, which are used to build system matrices for simulating SPECT projection data acquisition and for use in matrix-based image reconstruction. Non-uniform attenuating and scattering regions are constructed from simple three-dimensional geometric shapes, in which the density and mass attenuation coefficients are individually specified. On a Stellar GS1000 computer, Monte Carlo simulations are performed between 1.6 and 2.0 times faster when the vector processor is utilized than when computations are performed in scalar mode. Projection data acquired with a clinical SPECT gamma camera for a line source in a non-uniform thorax phantom are well modelled by Monte Carlo simulations. The vectorized Monte Carlo code was used to simulate a 99Tcm SPECT myocardial perfusion study, and compensations for non-uniform attenuation and the detection of scattered photons improve activity estimation. The speed increase due to vectorization makes Monte Carlo simulation more attractive as a tool for modelling photon transport in non-uniform media for SPECT. PMID:8248288
Monte Carlo calculation of dose to water of a 106Ru COB-type ophthalmic plaque
International Nuclear Information System (INIS)
Concave eye applicators with 106Ru/106Rh or 90Sr/90Y beta-ray sources are used worldwide in brachytherapy for treating intraocular tumors. This raises the need to know the exact dose delivered by beta radiation to tumors, but measurement of the dose to water (or tissue) is very difficult owing to the short range of the electrons. The Monte Carlo technique provides a powerful tool for calculating doses and dose distributions, which helps to predict and determine the doses from the various types and shapes of eye applicators more accurately. The Monte Carlo code MCNPX has been used to calculate dose distributions from a COB-type 106Ru/106Rh ophthalmic applicator manufactured by Eckert and Ziegler BEBIG GmbH. This type of concave eye applicator has a cut-out whose purpose is to protect the eye nerve, which makes the dose distribution more complicated. Several calculations have been performed, including the depth dose along the applicator central axis and various dose distributions. The depth dose along the central axis and the dose distribution on a spherical surface 1 mm above the plaque inner surface have been compared with measurement data provided by the manufacturer. For distances from 0.5 to 4 mm above the surface, the agreement was within 2.5%; from 5 mm, the difference increased from 6% up to 25% at 10 mm, whereas the uncertainty on the manufacturer data is 20% (2s). It is assumed that the difference is caused by nonuniformly distributed radioactivity over the applicator's radioactive layer.
Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations
Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.
2006-06-01
The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
International Nuclear Information System (INIS)
Questions related to the Monte Carlo method for solving the neutron and photon transport equation are discussed in this work. Problems concerning the direct use of information from evaluated nuclear data files in run-time calculations are considered; ENDF-6 format libraries have been used for the calculations. Approaches prescribed by the rules of ENDF-6 files 2, 3-6, 12-15, 23 and 27, and algorithms for the reconstruction of resolved and unresolved resonance-region cross-sections at a given energy, are described. Comparisons of calculations made with the NJOY and GRUCON programs and of the computed cross-section data are presented, together with test computations of neutron leakage spectra for spherical benchmark experiments. (authors)
Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation
Jia, Xun; Gu, Xuejun; Jiang, Steve B
2011-01-01
Recently, X-ray imaging dose from computed tomography (CT) or cone beam CT (CBCT) scans has become a serious concern. Patient-specific imaging dose calculation has been proposed for the purpose of dose management. While Monte Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers from low computational efficiency. In response to this problem, we have successfully developed a MC dose calculation package, gCTD, on GPU architecture under the NVIDIA CUDA platform for fast and accurate estimation of the x-ray imaging dose received by a patient during a CT or CBCT scan. Techniques have been developed particularly for the GPU architecture to achieve high computational efficiency. Dose calculations using CBCT scanning geometry in a homogeneous water phantom and a heterogeneous Zubal head phantom have shown good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In terms of improved efficiency, it is found that gCTD attains a speed-up of ~400 times in the homogeneous water ...
Deep-penetration calculation for the ISIS target station shielding using the MARS Monte Carlo code
Nunomiya, T; Nakamura, T; Nakao, N
2002-01-01
A calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code to compare with the experimental shielding data in 1998 at the ISIS spallation neutron source facility. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation with good statistics, the following three techniques were used in this study. First, the geometry of the bulk shield was three-dimensionally divided into several layers of about 50-cm thickness, and a step-by-step calculation was carried out to multiply the number of penetrated particles at the boundaries between the layers. Second, the source particles in the layers were divided into two parts to maintain the statistical balance on the spatial-flux distribution. Third, only high-energy particles above 20 MeV were trans...
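The first of these techniques, multiplying the penetrating particles at the layer boundaries, is ordinary particle splitting. A stripped-down one-dimensional sketch with purely absorbing layers shows the mechanics: each survivor of a layer is split into lower-weight copies, so the tracked population deep in the shield does not die out and the answer stays unbiased. The geometry and cross-section below are invented, not the ISIS model.

```python
import math
import random

def transmission_with_splitting(mu, n_layers, layer_thickness,
                                split=2, n_source=10000, seed=1):
    """Estimate transmission through n_layers purely absorbing slabs.
    A particle survives one slab with probability exp(-mu*L); each
    survivor is split into `split` copies carrying weight w/split, which
    keeps deep-penetration statistics alive without biasing the mean."""
    rng = random.Random(seed)
    score = 0.0
    stack = [(1.0, 0)] * n_source        # (weight, layers crossed)
    while stack:
        w, layer = stack.pop()
        if layer == n_layers:
            score += w                   # particle escaped the shield
            continue
        if rng.random() < math.exp(-mu * layer_thickness):
            for _ in range(split):       # survived: split at the boundary
                stack.append((w / split, layer + 1))
    return score / n_source
```

With mu·L = ln 2, each slab halves the population while the splitting doubles it, so the number of tracked particles stays roughly constant at every depth; an analog calculation would lose almost all particles after a few layers.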
Postimplant Dosimetry Using a Monte Carlo Dose Calculation Engine: A New Clinical Standard
International Nuclear Information System (INIS)
Purpose: To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry, and to compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, were specifically investigated. Methods and Materials: An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. Results: For the clinical target volume (CTV) D90 parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation arises mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and the tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. Conclusions: The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future
Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
2012-09-01
Introduction: CR-39 detectors are widely used for measuring radon and its progeny in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and progeny in water is investigated. Materials and Methods: Assuming random positions and emission angles of the alpha particles emitted by radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable chemical etching depth of the CR-39 in air and water were calculated. In this simulation, a range of data was obtained from the SRIM2008 software. Results: The calibration factor of CR-39 in water is calculated as 6.6 (kBq.d/m3)/(track/cm2), corresponding to the EPA standard level of radon concentration in water (10-11 kBq/m3). Replacing the CR-39 with skin, the volume affected by radon and progeny was determined to be 2.51 mm3 per m2 of skin area. Conclusion: Using CR-39 for radon measurement in water can be beneficial; the annual dose conversion factor for radon and progeny was calculated to be between 8.8 and 58.8 nSv/(Bq.h/m3).
Monte Carlo calculation of the energy response characteristics of a RadFET radiation detector
International Nuclear Information System (INIS)
The metal-oxide-semiconductor field-effect transistor (MOSFET, RadFET) is frequently used as a sensor of ionizing radiation in nuclear medicine, diagnostic radiology, radiotherapy quality assurance and in the nuclear and space industries. We focused our investigations on calculating the energy response of a p-type RadFET to low-energy photons in the range from 12 keV to 2 MeV and on understanding the influence of uncertainties in the composition and geometry of the device on the calculated energy response function. All results were normalized to unit air kerma incident on the RadFET at an incident photon energy of 1.1 MeV. The calculations of the energy response characteristics of the RadFET detector were performed via Monte Carlo simulations using the MCNPX code; for a limited number of incident photon energies, the FOTELP code was also used for the sake of comparison. The geometry of the RadFET was modeled as a simple stack of the appropriate materials. Our goal was to obtain results with statistical uncertainties better than 1%, which was fulfilled in the MCNPX calculations for all incident energies and required simulations with 1-2x10^9 histories.
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell-model calculation by quantum Monte Carlo diagonalization, proposed by the authors, is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis; a projection operator is therefore applied to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated and the states selectively adopted. The symmetry is discussed, and a method of decomposing the shell-model space into a dynamically determined space and the product of spin and isospin spaces was devised. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, whose exact energies are known, can be calculated with absolute energies accurate to within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28; the results of shell-model calculations of the {sup 56}Ni nuclear structure using nuclear-model interactions are reported. (K.I.)
Energy Technology Data Exchange (ETDEWEB)
Serena, P. A. [Instituto de Ciencias de Materiales de Madrid, Madrid (Spain); Costa-Kraemer, J. L. [Instituto de Microelectronica de Madrid, Madrid (Spain)
2001-03-01
A Monte Carlo algorithm suitable for studying systems described by an anisotropic Heisenberg Hamiltonian is presented. The technique has been tested successfully on 3D and 2D systems, illustrating how magnetic properties depend on the dimensionality and the coordination number. We have found that the magnetic properties of constrictions differ from those of the bulk. In particular, spin fluctuations are considerably larger than those calculated for bulk materials. In addition, domain walls are strongly modified when a constriction is present, with a decrease of the domain-wall width. This decrease is explained in terms of previous theoretical work.
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A; this toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons interact only through a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analysed on the basis of Monte Carlo simulation. The results show that variations in σ or µ can significantly move the average frequency of asymmetric fission yields and also vary the range of the fission yield probability distribution, whereas variation of the iteration coefficient only changes the frequency of the fission yields. The Monte Carlo simulation of fission yields using the toy model successfully reproduces the tendency of experimental results, where the average light fission yield is in the range of 90
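The double-Gaussian picture above can be sampled directly. The sketch below draws a fragment mass from one of the two intersecting Gaussians and lets mass conservation fix the partner; it ignores the scission-point parameter R{sub c} and all energy effects, and every parameter value (an A = 236 compound nucleus with humps near A = 96 and 140) is illustrative rather than taken from the paper.

```python
import random

def sample_yields(n, a_parent=236, mu_l=96.0, mu_r=140.0,
                  sigma_l=5.0, sigma_r=5.0, seed=1):
    """Draw n (light, heavy) fragment-mass pairs from a double-humped
    yield curve: pick the left curve N(mu_l, sigma_l) or the right curve
    N(mu_r, sigma_r) with equal probability, then enforce mass
    conservation, A_light + A_heavy = A.  Parameters are illustrative."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        if rng.random() < 0.5:
            a_light = rng.gauss(mu_l, sigma_l)             # left hump
        else:
            a_light = a_parent - rng.gauss(mu_r, sigma_r)  # right hump
        pairs.append((a_light, a_parent - a_light))
    return pairs
```

A histogram of both members of each pair reproduces the familiar double-humped mass-yield curve, with the asymmetry controlled by how far mu_l and mu_r sit from A/2.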
Criticality calculation in TRIGA MARK II PUSPATI Reactor using Monte Carlo code
International Nuclear Information System (INIS)
A Monte Carlo simulation of the Malaysian nuclear reactor has been performed using the MCNP Version 5 code. The purpose of the work is the determination of the multiplication factor (keff) of the TRIGA Mark II research reactor in Malaysia based on the Monte Carlo method. The value of keff was calculated for two cases, with the control rods either fully withdrawn or fully inserted, using a complete model of the TRIGA Mark II PUSPATI Reactor (RTP). The RTP core was modeled as close as possible to the real core. With the control fuel rods fully inserted, the keff value obtained from MCNP5, 0.98370±0.00054, indicates that the RTP reactor is in a subcritical condition; with the control fuel rods fully withdrawn, the keff value of 1.10773±0.00083 indicates that the reactor is in a supercritical condition. (Author)
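For a feel of the quantity MCNP estimates, an analog one-group sketch for an infinite homogeneous medium is enough: follow each neutron from collision to collision and score ν whenever the history ends in fission. The cross-sections and ν below are invented, and a real keff calculation for a finite core additionally needs geometry, leakage and successive fission generations.

```python
import random

def k_inf_analog(nu=2.43, sig_f=0.05, sig_c=0.03, sig_s=0.20,
                 n_histories=200000, seed=1):
    """Analog MC estimate of k-infinity for a one-group infinite
    homogeneous medium.  Each collision is fission, capture or scattering
    in proportion to the macroscopic cross-sections; scattering continues
    the random walk, absorption ends it, and fission scores nu neutrons.
    Expected value: nu*sig_f/(sig_f + sig_c).  All values illustrative."""
    rng = random.Random(seed)
    sig_t = sig_f + sig_c + sig_s
    produced = 0.0
    for _ in range(n_histories):
        while True:
            xi = rng.random() * sig_t
            if xi < sig_f:
                produced += nu   # fission ends the history
                break
            if xi < sig_f + sig_c:
                break            # capture ends the history
            # otherwise scattering: keep following the neutron
    return produced / n_histories
```

With the illustrative values above, the analytic answer is 2.43 × 0.05/0.08 ≈ 1.52, so the medium is "supercritical" in the same sense as the rods-out RTP case.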
Dufek, Jan; Anglart, Henryk
2013-01-01
Numerically stable Monte Carlo burnup calculations of nuclear fuel cycles are now possible with the previously derived Stochastic Implicit Euler method based coupling scheme. In this paper, we show that this scheme can be easily extended to include the thermal-hydraulic feedback during the Monte Carlo burnup simulations, while preserving its unconditional stability property. At each time step, the implicit solution (for the end-of-step neutron flux, fuel nuclide densities and thermal-hydrauli...
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, Murillo
2014-09-01
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented: a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross-section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for the transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which the photon ignores the existence of borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and can thus be applied in clinical cases and incorporated into treatment planning systems for radiotherapy. (author)
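The Woodcock (delta-tracking) idea mentioned above is compact enough to sketch in a few lines: sample free flights with a single majorant cross-section valid everywhere, then accept a tentative collision at x with probability μ(x)/μ_max, treating rejections as "virtual" collisions in fictitious material. The one-dimensional helper below is hypothetical and is not CUBMC's actual GPU implementation.

```python
import math
import random

def woodcock_collision_site(mu_of_x, mu_max, rng, x0=0.0):
    """Sample the next real collision site along +x by Woodcock tracking.
    mu_of_x gives the local attenuation coefficient; mu_max must bound it
    from above everywhere.  No voxel-boundary crossings are ever computed."""
    x = x0
    while True:
        x += -math.log(rng.random()) / mu_max   # free flight with majorant
        if rng.random() < mu_of_x(x) / mu_max:  # real collision?
            return x                            # else virtual: keep flying
```

In a homogeneous medium with μ = μ_max/2, the accepted sites are exponentially distributed with mean 2/μ_max, which makes a quick sanity check; the method's appeal on GPUs is that every photon executes the same branch-free loop regardless of the voxel layout.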
International Nuclear Information System (INIS)
To report the results of independent absorbed-dose calculations based on a Monte Carlo (MC) algorithm in volumetric modulated arc therapy (VMAT) for various treatment sites. All treatment plans were created with the superposition/convolution (SC) algorithm of SmartArc (Pinnacle V9.2, Philips). The beam information was converted into the format of Monaco V3.3 (Elekta), which uses the X-ray voxel-based MC (XVMC) algorithm, and the dose distribution was independently recalculated in Monaco. The doses for the planning target volume (PTV) and the organs at risk (OAR) were analyzed via comparisons with those of the treatment plan. Before performing the independent absorbed-dose calculations, a validation was conducted via irradiation from 3 different gantry angles with a 10 × 10-cm2 field. For the independent absorbed-dose calculations, 15 patients with cancer (prostate, 5; lung, 5; head and neck, 3; rectal, 1; and esophageal, 1) who were treated with single-arc VMAT were selected. To classify the causes of the dose differences between the Pinnacle and Monaco TPSs, their calculations were also compared with measurement data. In the validation, the dose in Pinnacle agreed with that in Monaco within 1.5%. The agreement in the VMAT calculations between Pinnacle and Monaco using phantoms was excellent; at the isocenter, the difference was less than 1.5% for all the patients. For the independent absorbed-dose calculations, the agreement was also extremely good. For the mean dose for the PTV in particular, the agreement was within 2.0% in all the patients; specifically, no large difference was observed in high-dose regions. Conversely, a significant difference was observed in the mean dose for the OAR. For patients with prostate cancer, the mean rectal dose calculated in Monaco was significantly smaller than that calculated in Pinnacle. There was no remarkable difference between the SC and XVMC calculations in the high-dose regions. The difference observed in the low-dose regions may
Accuracy preserving surrogate for neutron transport calculations
International Nuclear Information System (INIS)
Recent advances in reduced order modeling and exact-to-precision generalized perturbation theory are combined in a novel algorithm that constructs a surrogate model for the Boltzmann equation, commonly used in assembly calculations to functionalize the few-group cross-sections in terms of the various assembly types, depletion characteristics, and thermal-hydraulics conditions. First, the algorithm employs reduced order modeling to determine the dominant input parameters, aggregated in the so-called active subspace, using a random sample of first-order derivatives calculated using an adjoint model. Next, exact-to-precision generalized perturbation theory identifies an active subspace for the state solution (i.e., angular flux) and constructs a surrogate model that is parameterized over the active subspace of the input parameters. This approach is shown to significantly reduce computational time needed for the analysis of a large number of model variations, while meeting the user-defined accuracy requirements. Numerical experiments are employed to demonstrate the mechanics and application of the proposed approach to assembly calculations commonly used in reactor physics analysis. (author)
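The gradient-based construction of the active subspace described above can be sketched in a few lines: collect random samples of first-order (adjoint) derivatives, form their covariance, and keep the dominant eigendirections. Everything below — the function names and the toy model — is illustrative, not the authors' implementation.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from random gradient
    samples (rows = samples, columns = input parameters)."""
    # Uncentered covariance of the gradients, C = E[g g^T]
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    # Eigendecomposition; columns of W are orthonormal directions
    eigvals, W = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]        # sort descending
    return eigvals[order][:k], W[:, order[:k]]

# Toy model: the response depends only on x0 + 2*x1 out of 10 inputs
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
# gradient of f(x) = sin(x0 + 2*x1): df/dx0 = c, df/dx1 = 2c, rest 0
c = np.cos(X[:, 0] + 2 * X[:, 1])[:, None]
G = np.zeros_like(X)
G[:, :2] = c * np.array([1.0, 2.0])
vals, W1 = active_subspace(G, 1)
print(W1[:2, 0])   # dominant direction, ±(1, 2)/sqrt(5)
```

The surrogate is then parameterized over `W1.T @ x` instead of the full input vector, which is what makes the subsequent exact-to-precision expansion tractable.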
Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method
Jin, W.; Kleijn, C. R.; van Ommen, J. R.
2016-06-01
For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method, allowing the movement of a three-dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while effective cell volumes are taken into account when calculating the collisions between molecules. The effective cell volumes are computed from the Lagrangian intersection points between the immersed boundary and the fixed background grid with a simple polyhedron-regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to those from conventional body-fitted-mesh DSMC simulations and to analytical approximations.
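As one concrete piece, reflecting a particle at its landing point on the body might use a specular rule; note the DSMC literature also uses diffuse, wall-temperature-dependent reflection, so the choice below is purely illustrative:

```python
import numpy as np

def reflect_specular(v, n):
    """Specularly reflect a particle velocity v at a wall with normal n:
    v' = v - 2 (v . n) n, which preserves speed and tangential velocity."""
    n = n / np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n

# particle hitting a wall whose normal points back into the gas
v_in = np.array([1.0, -1.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])
v_out = reflect_specular(v_in, normal)
print(v_out)   # -> [1. 1. 0.]
```

For a moving body, the reflection would be applied in the body's rest frame (subtract the wall velocity, reflect, add it back), which is what makes drag on moving spheres non-trivial.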
Monte Carlo calculations of relativistic solar proton propagation in interplanetary space
Lumme, M.; Torsti, J. J.; Vainikka, E.; Peltonen, J.; Nieminen, M.; Valtonen, E.; Arvelta, H.
1985-01-01
Particle fluxes and pitch angle distributions of relativistic solar protons at 1 AU were determined by Monte Carlo calculations. The analysis covers the two hours after the release of the particles from the Sun; in total, eight sets of 100,000 particle trajectories were simulated. The pitch angle scattering was assumed to be isotropic, and the scattering mean free path was varied from 0.1 to 4 AU. As an application, the solar injection time and the interplanetary scattering mean free path of the particles that gave rise to the GLE of May 1978 were determined. Assuming an exponential form, the injection decay time was found to be about 11 minutes. The mean free path of pitch angle scattering during the event was about 1 AU.
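The core of such a simulation — exponentially distributed path lengths between isotropic pitch-angle scatterings — can be sketched as follows. This toy version ignores magnetic focusing, solar-wind effects, and adiabatic deceleration, all of which a production code for solar protons would include; all names are illustrative.

```python
import numpy as np

def transport(n_particles, mfp, t_max, v=1.0, seed=1):
    """Crude 1-D Monte Carlo of particles released beamed outward,
    undergoing isotropic pitch-angle scattering with mean free path
    mfp (AU). Returns arrival times (units of AU/v) at r = 1 AU."""
    rng = np.random.default_rng(seed)
    arrivals = []
    for _ in range(n_particles):
        r, mu, t = 0.0, 1.0, 0.0            # start at Sun, mu = +1
        while t < t_max:
            step = rng.exponential(mfp)      # path to next scattering
            dt = step / v
            r += v * mu * dt                 # radial advance
            t += dt
            if r >= 1.0:                     # crossed 1 AU
                arrivals.append(t)
                break
            mu = rng.uniform(-1.0, 1.0)      # isotropic scattering
    return np.array(arrivals)

times = transport(2000, mfp=1.0, t_max=10.0)
print(len(times), times.mean())
```

Varying `mfp` from 0.1 to 4 AU and comparing the simulated time profile at 1 AU to the observed one is, in outline, how the event's mean free path and injection profile are inferred.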
A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry
International Nuclear Information System (INIS)
The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aimed at solving the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated into MCAM 5.2. The method converts bidirectionally between a CAD model and a SuperMC input file. When converting from CAD to SuperMC, the CAD model is decomposed into a set of convex solids, from which the corresponding SuperMC convex basic solids are generated and output. When converting from SuperMC to CAD, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model; the results showed that it is correct and effective. (author)
On line CALDoseX: real time Monte Carlo calculation via Internet for dosimetry in radiodiagnostic
International Nuclear Information System (INIS)
CALDoseX 4.1 is a software package that uses the MASH and FASH phantoms. Patient dosimetry with reference phantoms is limited because the results apply only to patients who have the same body mass and height as the reference phantom. In this paper, the dosimetry of patients undergoing diagnostic X-ray examinations was extended by using a series of 18 phantoms of defined gender and different body masses and heights, in order to better cover the real anatomy of patients. Absorbed doses in organs and tissues can be calculated by real-time Monte Carlo dosimetry via the Internet through a dosimetric service called CALDoseX online.
International Nuclear Information System (INIS)
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano’s theorem. Additionally, Lewis’ approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano’s and Lewis’ approaches are stated in this new equation. Fano’s theorem is found not to apply in the presence of electromagnetic fields. Lewis’ theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. (paper)
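Schematically, and in generic notation rather than the paper's exact operator form, the modified equation adds a momentum-space streaming term driven by the Lorentz force to the time-independent Boltzmann transport equation:

```latex
% psi: phase-space particle flux, Sigma_t: total cross section,
% F = q(E + v x B): Lorentz force, S: external source
\boldsymbol{\Omega}\cdot\nabla_{\mathbf r}\,\psi(\mathbf r,\mathbf p)
+ \nabla_{\mathbf p}\!\cdot\!\bigl[\mathbf F\,\psi(\mathbf r,\mathbf p)\bigr]
+ \Sigma_t\,\psi(\mathbf r,\mathbf p)
= \int \Sigma_s(\mathbf p'\!\to\!\mathbf p)\,\psi(\mathbf r,\mathbf p')\,d^3p'
+ S(\mathbf r,\mathbf p)
```

The new divergence term is what the abstract calls "an additional operator similar to the interaction term": without it, charged-particle trajectories between collisions would be straight lines, and Fano's theorem would continue to hold.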
Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.
Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard
2015-01-01
The purpose of this study is to evaluate the accuracy of the electron Monte Carlo (eMC) dose calculation algorithm included in a commercial treatment planning system and to compare its performance against an electron pencil beam algorithm. Several tests were performed to explore the system's behavior in simple geometries and in configurations encountered in clinical practice. The first series of tests was executed in a homogeneous water phantom, where experimental measurements and eMC-calculated dose distributions were compared for various combinations of energy and applicator. More specifically, we compared beam profiles and depth-dose curves at different source-to-surface distances (SSDs) and gantry angles, using dose difference and distance to agreement. We also compared output factors, studied the effects of the algorithm input parameters (the random number generator seed and the calculation grid size), and performed a calculation-time evaluation. Three different inhomogeneous solid phantoms were built, using high- and low-density material inserts, to simulate clinically relevant heterogeneity conditions: a small air cylinder within a homogeneous phantom, a lung phantom, and a chest wall phantom. We also used an anthropomorphic phantom to compare eMC calculations to measurements. Finally, we proceeded with an evaluation of the eMC algorithm on a clinical case of nose cancer. In all mentioned cases, measurements, carried out by means of XV-2 films, radiographic films, or EBT2 Gafchromic films, were used to compare eMC calculations with dose distributions obtained from an electron pencil beam algorithm. eMC calculations in the water phantom were accurate. Discrepancies for depth-dose curves and beam profiles were under 2.5% and 2 mm. Dose calculations with eMC for the small air cylinder and the lung phantom agreed within 2% and 4%, respectively. eMC calculations for the chest wall phantom and the anthropomorphic phantom also
International Nuclear Information System (INIS)
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques in the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario "Carlos Haya", Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques in the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
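Russian roulette and splitting, two of the techniques listed above, amount to a simple rule on particle weights relative to a local importance. The sketch below is a generic weight-window-style illustration; the ant-colony control described in the paper is not reproduced, and all names are assumptions.

```python
import random

def roulette_split(weight, importance, w_target=1.0, rng=random):
    """Split heavy particles and roulette light ones so survivors
    carry weight ~ w_target / importance. Expected total weight is
    preserved, which keeps the estimator unbiased."""
    w_ref = w_target / importance
    if weight > w_ref:                       # split into n copies
        n = int(weight / w_ref)
        return [weight / n] * n
    # Russian roulette: survive with probability weight / w_ref
    if rng.random() < weight / w_ref:
        return [w_ref]
    return []                                # killed

random.seed(2)
print(roulette_split(4.0, importance=1.0))   # -> [1.0, 1.0, 1.0, 1.0]
```

In an importance-map scheme, `importance` would be looked up from the particle's current geometry cell, so particles multiply as they move toward the region of interest and are thinned as they move away.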
SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
DEFF Research Database (Denmark)
Bassler, Niels; Hansen, David Christoffer; Lühr, Armin;
2014-01-01
Abstract. Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability, and the underlying physics, and thereby the user experience. The “-A” fork of SHIELD-HIT also aims to attach SHIELD...... of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More...
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
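The moment and macrodispersivity estimation described above can be illustrated on a synthetic one-dimensional cloud: compute the cloud's second central moment at two times and difference it, using the standard relation A ≈ (1/2v) dσ²/dt for the longitudinal macrodispersivity. The functions and the synthetic data below are a sketch under those textbook assumptions, not the authors' estimator.

```python
import numpy as np

def cloud_moments(x, w=None):
    """First and second central spatial moments of a tracer cloud,
    given particle positions x and optional particle masses w."""
    w = np.ones_like(x) if w is None else w
    m = np.average(x, weights=w)                 # centroid
    var = np.average((x - m) ** 2, weights=w)    # second central moment
    return m, var

def macrodispersivity(var_t1, var_t2, t1, t2, v):
    """Large-time longitudinal macrodispersivity estimate
    A = (1/2v) d(sigma^2)/dt, by finite difference."""
    return 0.5 * (var_t2 - var_t1) / ((t2 - t1) * v)

# Synthetic advecting/dispersing cloud with v = 1.0, D = 0.1
rng = np.random.default_rng(3)
v, D = 1.0, 0.1
x1 = v * 1.0 + rng.normal(0.0, np.sqrt(2 * D * 1.0), 50000)
x2 = v * 2.0 + rng.normal(0.0, np.sqrt(2 * D * 2.0), 50000)
m1, var1 = cloud_moments(x1)
m2, var2 = cloud_moments(x2)
A = macrodispersivity(var1, var2, 1.0, 2.0, v)
print(A)   # close to D / v = 0.1
```

In the heterogeneous-conductivity simulations, the interesting behavior is precisely the departure of this estimate from a constant as the cloud samples more of the conductivity field.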
Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
Monte Carlo (MC) simulation is considered the most accurate method for radiation dose calculations. The accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we present an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept, the phase-space ring (PSR), is proposed. It contains a group of particles that are of the same type and close in energy and in radial distance to the center of the phase-space plane. The model parameterizes the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. For a primary photon PSR, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSR was also modeled as a 2D Gaussian distributi...
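Sampling a primary-photon direction for a PSR, with the finite beam spot modeled as a 2D Gaussian, might look as follows. The geometry (spot at z = 0, phase-space ring at z = z_iso) and all parameter names are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def sample_primary_direction(spot_sigma, z_iso, r_ring, rng):
    """Sample a unit direction for a primary-photon PSR particle:
    position on a ring of radius r_ring in the phase-space plane,
    direction pointing away from a 2-D Gaussian-blurred beam spot."""
    # beam spot position at the target plane (z = 0)
    sx, sy = rng.normal(0.0, spot_sigma, 2)
    # particle position on the phase-space plane at z = z_iso
    phi = rng.uniform(0.0, 2.0 * np.pi)
    px, py = r_ring * np.cos(phi), r_ring * np.sin(phi)
    d = np.array([px - sx, py - sy, z_iso])
    return d / np.linalg.norm(d)

rng = np.random.default_rng(4)
u = sample_primary_direction(spot_sigma=0.1, z_iso=100.0, r_ring=5.0, rng=rng)
print(u)
```

Scattered-photon and electron PSRs would replace the single spot with a mixture of Gaussian components, per the abstract.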
Data decomposition of Monte Carlo particle transport simulations via tally servers
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
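The tracking-processor/tally-server split can be illustrated with an in-process toy (no MPI): tracking ranks buffer (tally, score) pairs and ship each buffer to the server that owns that tally. The class and the ownership mapping below are stand-ins for illustration, not OpenMC's actual implementation.

```python
from collections import defaultdict

class TallyServer:
    """Toy tally server: accumulates (tally_id, score) increments
    streamed from tracking processors."""
    def __init__(self):
        self.tallies = defaultdict(float)

    def receive(self, batch):
        for tally_id, score in batch:
            self.tallies[tally_id] += score

def server_for(tally_id, n_servers):
    """Map a global tally index to its owning server (cyclic)."""
    return tally_id % n_servers

# Two servers; a tracking processor buffers scores per server,
# then ships each buffer to the right server (an MPI send in reality).
servers = [TallyServer(), TallyServer()]
buffers = [[], []]
for tally_id, score in [(0, 1.0), (1, 0.5), (2, 2.0), (1, 0.5)]:
    buffers[server_for(tally_id, 2)].append((tally_id, score))
for srv, buf in zip(servers, buffers):
    srv.receive(buf)
print(servers[0].tallies, servers[1].tallies)
```

The point of the decomposition is that no tracking node ever holds the full tally array, so tally memory scales with the number of servers rather than being replicated on every rank.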
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high-dose-rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with the applicator and contrast medium included. A precalculated phase-space file for the ¹⁹²Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm³ versus 2-mm³ phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimating CTV D90, CTV D100, and the spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations that does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators. Tissue and nontissue heterogeneities should be taken into account in modern HDR
Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J
2013-01-01
A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and with all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC-calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of the dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
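The remapping-vector idea can be sketched in NumPy: sort an index vector by reaction type (a stand-in here for the parallel radix sort) so that threads handling one reaction read a contiguous run of indices, while the particle data itself stays in place. This is an illustration of the concept, not WARP's code.

```python
import numpy as np

# Toy remapping pass: after a transport step, each particle has a
# sampled reaction type; sort an index vector (not the particle data)
# so GPU-style thread blocks process contiguous runs of one reaction.
rng = np.random.default_rng(5)
n = 10
reaction = rng.integers(0, 3, n)    # 0=scatter, 1=capture, 2=fission
remap = np.argsort(reaction, kind="stable")

# "threads" assigned to each reaction now read contiguous index runs
for r in range(3):
    idx = remap[reaction[remap] == r]
    print(r, idx.tolist())
```

Keeping the heavy per-particle arrays fixed and shuffling only the lightweight index vector is what makes the pass cheap: on a GPU it avoids scattering large records through memory while still giving each kernel launch divergence-free, coalesced work.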
Self-Consistent Scattering and Transport Calculations
Hansen, S. B.; Grabowski, P. E.
2015-11-01
An average-atom model with ion correlations provides a compact and complete description of atomic-scale physics in dense, finite-temperature plasmas. The self-consistent ionic and electronic distributions from the model enable calculation of x-ray scattering signals and conductivities for material across a wide range of temperatures and densities. We propose a definition for the bound electronic states that ensures smooth behavior of these measurable properties under pressure ionization and compare the predictions of this model with those of less consistent models for Be, C, Al, and Fe. SNL is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. DoE NNSA under contract DE-AC04-94AL85000. This work was supported by DoE OFES Early Career grant FWP-14-017426.
International Nuclear Information System (INIS)
The neutron generation time Λ plays an important role in reactor kinetics. However, its calculation is neither straightforward nor standard in most continuous energy Monte Carlo codes, which are able to calculate the prompt neutron lifetime lp directly. The difference between Λ and lp is sometimes very apparent. As very few delayed neutrons are produced in the reactor, they have little influence on Λ. Thus, on the assumption that no delayed neutrons are produced in the system, prompt kinetics equations are proposed for a critical system and for a subcritical system with an external source. These equations are then applied to calculating Λ with the pulsed neutron technique using Monte Carlo. Only one fission neutron source is simulated with Monte Carlo in the critical system, while two neutron sources, a fission source and an external source, are simulated for the subcritical system. Calculations are performed on both critical benchmarks and a subcritical system with an external source, and the results are consistent with the reference values. (author)
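For orientation, the textbook point-kinetics relations connecting the two quantities (standard notation, not the paper's derivation) are:

```latex
% l_p: prompt neutron lifetime, k_eff: effective multiplication factor,
% beta_eff: effective delayed-neutron fraction, rho: reactivity,
% alpha_p: prompt decay constant observed in a pulsed-neutron experiment
\Lambda = \frac{l_p}{k_{\mathrm{eff}}}
\qquad\text{and}\qquad
\alpha_p = \frac{\beta_{\mathrm{eff}} - \rho}{\Lambda}
```

In an exactly critical system Λ and lp coincide, but far from critical, or when the spectrum weighting differs, the two can diverge noticeably, which is the discrepancy the abstract refers to.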
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, Monte Carlo simulations require a great deal of computing time to obtain reliable results, especially in deep-penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, exponential transform, and forced collision. At the same time, advanced computing hardware such as GPU (Graphics Processing Unit)-based parallel machines is used to improve the performance of Monte Carlo simulation. A GPU is much easier to access and to manage than a CPU cluster system, and it has also become less expensive owing to advances in computer technology. Therefore, many engineering areas have adopted GPU-based massively parallel computation techniques, including the photon transport Monte Carlo module applied here. It provides an almost 30-fold speedup without any optimization, and an almost 200-fold speedup is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.
Energy Technology Data Exchange (ETDEWEB)
Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O. [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)
2011-08-15
Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire
Žukauskaitė, A.; Plukienė, R.; Ridikas, D.
2007-01-01
Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron, or graphite. The Monte Carlo method obtains answers by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center for Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV); and HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intense spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high-energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.
VVER-440 Ex-Core Neutron Transport Calculations by MCNP-5 Code and Comparison with Experiment
Energy Technology Data Exchange (ETDEWEB)
Borodkin, Pavel; Khrennikov, Nikolay [Scientific and Engineering Centre for Nuclear and Radiation Safety (SEC NRS) Malaya Krasnoselskaya ul., 2/8, bld. 5, 107140 Moscow (Russian Federation)
2008-07-01
Ex-core neutron transport calculations are needed to evaluate radiation loading parameters (neutron fluence, fluence rate and spectra) on the in-vessel equipment, reactor pressure vessel (RPV) and support structures of VVER-type reactors. Because these parameters are used for reactor equipment lifetime assessment, neutron transport calculations should be carried out by precise and reliable calculation methods. For RPVs, especially those of first-generation VVER-440s, the neutron fluence plays a key role in the prediction of RPV lifetime. Most VVER ex-core neutron transport calculations are performed by deterministic and Monte Carlo methods. This paper deals with precise calculations for the Russian first-generation VVER-440 by the MCNP-5 code. The purpose of this work was the application of this code to expert calculations, verification of results by comparison with deterministic calculations, and validation against neutron activation measurements. The deterministic discrete-ordinates DORT code, widely used for RPV neutron dosimetry and tested many times against experiments, was used for the comparison analyses. Ex-vessel neutron activation measurements at the VVER-440 NPP provided spatial (azimuthal and axial) and neutron energy (different activation reactions) distribution data for experimental (E) validation of the calculated results. The calculational intercomparison (DORT vs. MCNP-5) and the comparison with measured values (MCNP-5 and DORT vs. E) have shown agreement within 10-15% for different spatial points and reaction rates. The paper presents a discussion of the results and draws conclusions about the practical use of the MCNP-5 code for ex-core neutron transport calculations in expert analysis. (authors)
Laoues, M.; Khelifi, R.; Moussa, A. S.
2015-01-01
Strontium-90 eye applicators are beta-ray emitters with relatively high energy (maximum energy about 2.28 MeV and average energy about 0.9 MeV). These applicators come in different shapes and dimensions and are used for the treatment of eye diseases. Whenever radiation is used in treatment, dosimetry is essential, and knowledge of the exact dose distribution is critical to decision-making on the outcome of the treatment. The main aim of our study is to simulate the dosimetry of the SIA.20 eye applicator with the Monte Carlo GATE 6.1 platform and to compare the calculated results with those measured with EBT2 films. GATE and EBT2 were used to quantify the surface and depth dose rates, the relative dose profile and the dosimetric parameters in accordance with international recommendations. Calculated and measured results are in good agreement and are consistent with the ICRU and NCS recommendations.
International Nuclear Information System (INIS)
Full text: Medical imaging provides two-dimensional pictures of the human internal anatomy from which a three-dimensional model of organs and tissues suitable for calculation of dose from radiation may be constructed. Diagnostic CT provides the greatest exposure to radiation per examination, and the frequency of CT examination is high. Estimates of dose from diagnostic radiography are still determined from data derived from geometric models (rather than anatomical models), models scaled from adult bodies (rather than bodies of children) and CT scanner hardware that is no longer used. The aim of anatomical modelling is to produce a mathematical representation of internal anatomy that has organs of realistic size, shape and positioning. The organs and tissues are represented by a great many cuboidal volumes (voxels). The conversion of medical images to voxels is called segmentation, and on completion every pixel in an image is assigned to a tissue or organ. Segmentation is time consuming. An image processing package is used to identify organ boundaries in each image. Thirty to forty tomographic voxel models of anatomy have been reported in the literature. Each model is of an individual, or a composite from several individuals. Images of children are particularly scarce, so there remains a need for more paediatric anatomical models. I am working on segmenting 'William', who is 368 PET-CT images from head to toe of a seven-year-old boy. William will be used for Monte Carlo calculations of dose from CT examination using a simulated modern CT scanner.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. In addition, we find that the least square Monte Carlo method is quite efficient in the calculation of credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios and consequently accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting present in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
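The efficiency gain from least square Monte Carlo described above can be sketched in a few lines: instead of nested ("inner") simulations to value the swap at each future date, the remaining payoff realized along each path is regressed on the current short rate, and the fitted values give the expected exposure. The sketch below is a minimal illustration, not the paper's model: the mean-reverting rate dynamics, the quadratic regression basis, the flat hazard rate, and all parameter values are assumptions.

```python
import math, random

def simulate_cva(n_paths=5000, n_steps=10, T=5.0,
                 a=0.1, sigma=0.01, r0=0.03, K=0.03,
                 hazard=0.02, recovery=0.4, seed=1):
    """Unilateral CVA of a toy receive-fixed swap by least square Monte Carlo.

    Mean-reverting short rate (illustrative parameters); the exposure at each
    date is estimated by regressing the remaining payoff on a quadratic in the
    current rate -- this regression replaces the nested ("inner") simulation.
    """
    dt = T / n_steps
    rng = random.Random(seed)
    # simulate short-rate paths: dr = a*(K - r)*dt + sigma*dW
    paths = []
    for _ in range(n_paths):
        r, path = r0, []
        for _ in range(n_steps):
            r += a * (K - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
            path.append(r)
        paths.append(path)
    cva = 0.0
    for i in range(n_steps):
        # realized remaining payoff along each path (undiscounted, for simplicity)
        y = [sum((K - p[j]) * dt for j in range(i, n_steps)) for p in paths]
        x = [p[i] for p in paths]
        b0, b1, b2 = _lsq_quad(x, y)                 # the LSMC regression step
        # expected positive exposure from the fitted conditional values
        ee = sum(max(b0 + b1 * xi + b2 * xi * xi, 0.0) for xi in x) / n_paths
        t0, t1 = i * dt, (i + 1) * dt
        dpd = math.exp(-hazard * t0) - math.exp(-hazard * t1)  # default prob in (t0, t1]
        cva += (1.0 - recovery) * ee * dpd
    return cva

def _lsq_quad(x, y):
    """Quadratic least squares via the 3x3 normal equations (Gaussian elimination)."""
    n = len(x)
    s = [sum(xi ** k for xi in x) for k in range(5)]
    A = [[n, s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    for i in range(3):                                # elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    out = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                               # back substitution
        out[i] = (b[i] - sum(A[i][c] * out[c] for c in range(i + 1, 3))) / A[i][i]
    return out

print(simulate_cva())
```

The final loop accumulates the standard unilateral decomposition CVA = (1 − R) Σᵢ EE(tᵢ) [PD(tᵢ₋₁) − PD(tᵢ)], with EE(tᵢ) taken from the regression rather than from inner simulations.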
Dose and shielding calculation of galactic cosmic rays using the FLUKA Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Jalali, Hamide B. [Physics Department, University of Qom, Qom (Iran); Raisali, Golamreza; Babazade, Alireza [Radiation Applications Research School, Nuclear Science and Technology Research Institute, Atomic Energy Organization of Iran, Tehran (Iran); Feghhi, Amirhosein [Physics and Nuclear Engineering Department, Amirkabir University, Tehran (Iran)
2009-07-01
Astronauts' exposure to space radiation is a limiting factor for long-term missions; shielding is therefore a critical issue for space mission success. In this work the FLUKA Monte Carlo code has been coupled with simple models of the spacecraft and an equivalent phantom to calculate skin-averaged doses due to exposure to Galactic Cosmic Rays (GCR) behind various thicknesses of aluminium and polyethylene shields. Simulations have been performed for the most abundant elements, including H, He, C and Fe ions. The spectra of these ions have been taken from the Badhwar-O'Neill model, and the LET distributions of the ions and electrons were calculated using the SRIM and ESTAR computer programs, respectively. It has been observed that the GCR absorbed dose behind the shields remains approximately constant with increasing shield thickness, while the dose equivalent shows a slight decrease. It is also found that, although polyethylene is a more effective GCR shield than aluminium, as indicated by the results of similar investigations, practical thicknesses of polyethylene are still insufficient to shield the high-energy GCR ions encountered in long-term space missions.
International Nuclear Information System (INIS)
A detailed Monte Carlo N-Particle Transport Code (MCNP5) model of the University of Missouri research reactor (MURR) has been developed. The ability of the model to accurately predict isotope production rates was verified by comparing measured and calculated neutron-capture reaction rates for numerous isotopes. In addition to thermal (1/v) monitors, the benchmarking included a number of isotopes whose (n, γ) reaction rates are very sensitive to the epithermal portion of the neutron spectrum. Using the most recent neutron libraries (ENDF/B-VII.0), the model was able to accurately predict the measured reaction rates in all cases. The model was then combined with ORIGEN 2.2, via MONTEBURNS 2.0, to calculate production of 99Mo from fission of low-enriched uranium foils. The model was used to investigate both annular and plate LEU foil targets in a variety of arrangements in a graphite irradiation wedge to optimize the production of 99Mo. (author)
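The benchmarking quantity here, the (n, γ) reaction rate per target atom, is simply the multigroup flux collapsed with the group cross sections. A minimal sketch; the three-group fluxes, cross sections, and the "measured" value are illustrative numbers, not MURR data:

```python
# Hypothetical 3-group data (thermal / epithermal / fast) for a capture monitor.
flux  = [3.0e13, 8.0e12, 1.5e13]   # group fluxes, n/cm^2/s
sigma = [98.0,   15.0,   0.5]      # (n,gamma) group cross sections, barns

def reaction_rate(flux, sigma_barns):
    """Collapse a multigroup flux and cross section to a capture rate per
    target atom: R = sum_g phi_g * sigma_g  (1 barn = 1e-24 cm^2)."""
    return sum(f * s * 1e-24 for f, s in zip(flux, sigma_barns))

calc = reaction_rate(flux, sigma)
measured = 3.1e-9                  # hypothetical measured rate, 1/s per atom
print("C/E =", calc / measured)    # calculated-to-experimental ratio
```

An epithermal-sensitive monitor is one whose middle-group term contributes a large share of this sum, which is why such isotopes test the epithermal part of the calculated spectrum.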
Hubber, D A; Dale, J
2015-01-01
Ionising feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies, however, on photoionisation/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionisation and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields onto Cartesian grids before performing radiative transfer/photoionisation calculations, results in significant ...
International Nuclear Information System (INIS)
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step method (D1S). Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions in the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
The use of Monte Carlo radiation transport codes in radiation physics and dosimetry
CERN. Geneva; Ferrari, Alfredo; Silari, Marco
2006-01-01
Transport and interaction of electromagnetic radiation: Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated by using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is only limited by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...
Optimal calculational schemes for solving multigroup photon transport problem
International Nuclear Information System (INIS)
A scheme of a complex algorithm for solving the multigroup equation of radiation transport is suggested. The algorithm is based on the method of successive collisions, the method of forward scattering and the spherical harmonics method, and is realized in the FORAP program (FORTRAN, BESM-6 computer). As an example, the results of calculating reactor photon transport in water are presented. With modifications, the algorithm may also be used for solving neutron transport problems.
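The method of successive collisions mentioned above builds the flux as a Neumann series over collision generations. A minimal single-group, infinite-medium sketch of the series idea (the FORAP program itself is multigroup and spatially dependent; this only illustrates the convergence of the collision expansion):

```python
def successive_collisions(c, tol=1e-12, max_terms=10000):
    """Method of successive collisions in its simplest setting: an infinite
    homogeneous medium with scattering ratio c < 1.  The collision density per
    source particle is the Neumann series 1 + c + c^2 + ..., summed term by
    term; it converges to the closed form 1/(1 - c)."""
    term, total, n = 1.0, 0.0, 0
    while term > tol and n < max_terms:
        total += term      # contribution of particles scattered exactly n times
        term *= c          # next collision generation
        n += 1
    return total

print(successive_collisions(0.8))   # approaches 1/(1 - 0.8) = 5
```

Each term is the contribution of particles that have undergone exactly n collisions, which is why the series converges quickly for absorbing media (small c) and slowly as c approaches 1.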
A comparison between the Monte Carlo radiation transport codes MCNP and MCBEND
Energy Technology Data Exchange (ETDEWEB)
Sawamura, Hidenori; Nishimura, Kazuya [Computer Software Development Co., Ltd., Tokyo (Japan)
2001-01-01
In Japan, most radiation analysts use the MCNP and MVP codes in their studies, but these codes do not have automatic variance reduction. The MCBEND code, developed by UKAEA, has automatic variance reduction and is more user-friendly than other Monte Carlo radiation transport codes. Our company was the first to introduce the MCBEND code in Japan. We therefore compared the MCBEND and MCNP codes in terms of functionality and productivity. (author)
Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
International Nuclear Information System (INIS)
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided
A Monte Carlo Simulation for the Ion Transport in Glow Discharges with Dusts
Institute of Scientific and Technical Information of China (English)
SUN Ai-Ping; PU Wei; QIU Xiao-Ming
2001-01-01
We use the Monte Carlo method to simulate the ion transport in an rf parallel-plate glow discharge with a negative-voltage pulse connected to the electrode. It is found that the self-consistent field, dust charge, dust concentration, and dust size influence the energy distribution and the density of the ions arriving at the target; in particular, the latter two have a significant influence. As dust concentration or dust size increases, the number of ions arriving at the target is greatly reduced.
Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
International Nuclear Information System (INIS)
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems
Kramer, R; Khoury, H J; Vieira, J W; Loureiro, E C M; Lima, V J M; Lima, F R A; Hoff, G
2004-12-01
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true-to-nature representations of the human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of the trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), were segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation, the volumes of the organs and tissues were adjusted to agree with the organ and tissue masses recommended by the ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images was eventually connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
Calculations of the transport properties within the PAW formalism
Energy Technology Data Exchange (ETDEWEB)
Mazevet, S.; Torrent, M.; Recoules, V.; Jollet, F. [CEA Bruyeres-le-Chatel, DIF, 91 (France)
2010-07-01
We implemented the calculation of the transport properties within the PAW formalism in the ABINIT code. This feature allows the calculation of the electrical and optical properties, including the XANES spectrum, as well as the electronic contribution to the thermal conductivity. We present here the details of the implementation and results obtained for warm dense aluminum plasma. (authors)
Three-dimensional whole core transport calculation method and performance of the DeCART code
International Nuclear Information System (INIS)
The three-dimensional (3D) transport calculation method implemented in the whole-core neutron transport code DeCART is presented, and its performance is examined in terms of solution accuracy and execution speed. The 3D flux calculation in DeCART is based on a transverse-integration method in which the radial and axial dependencies are handled separately. The radial dependence is resolved by an elaborate two-dimensional method of characteristics (MOC), whereas the axial dependence is treated with a simple one-dimensional diffusion model. The global balance of the 3D flux distribution is incorporated through the coarse mesh finite difference (CMFD) formulation. It is shown that the CMFD formulation enables the approximate three-dimensional transport calculation through transverse integration and, furthermore, is very effective in achieving rapid convergence. The accuracy of the approximate 3D whole-core transport calculation method is proved by analyzing rodded variations of the C5G7 MOX heterogeneous core benchmark problem, for which Monte Carlo solutions are generated as the reference.
Time-implicit Monte-Carlo collision algorithm for particle-in-cell electron transport models
International Nuclear Information System (INIS)
A time-implicit Monte-Carlo collision algorithm has been developed to allow particle-in-cell electron transport models to be applied to arbitrarily collisional systems. The algorithm is formulated for electrons moving in response to electric and magnetic accelerations and subject to collisional drag and scattering due to a background plasma. The correct fluid or streaming transport results are obtained in the respective limits of strongly- or weakly-collisional systems, and reasonable behavior is produced even for time steps greatly exceeding the magnetic-gyration and collisional-scattering times
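The stability argument behind a time-implicit collision treatment can already be seen in the scalar drag equation dv/dt = a − νv: a forward-Euler update diverges once νΔt exceeds 2, while a backward-Euler (implicit) update is stable for any step and relaxes to the correct drift velocity a/ν. A sketch of the drag part only (the actual algorithm also treats magnetic gyration and angular scattering; the numbers below are illustrative):

```python
def step_explicit(v, a, nu, dt):
    # forward-Euler drag update: unstable once nu*dt > 2
    return v + (a - nu * v) * dt

def step_implicit(v, a, nu, dt):
    # backward-Euler (time-implicit) drag update: stable for any dt
    return (v + a * dt) / (1.0 + nu * dt)

# strongly collisional case: nu*dt = 50, far beyond the explicit stability limit
a, nu, dt = 1.0, 100.0, 0.5
v_exp = v_imp = 0.0
for _ in range(20):
    v_exp = step_explicit(v_exp, a, nu, dt)
    v_imp = step_implicit(v_imp, a, nu, dt)
print(v_exp, v_imp)   # explicit diverges; implicit relaxes to the drift a/nu = 0.01
```

This is the sense in which the algorithm "produces reasonable behavior even for time steps greatly exceeding the collisional-scattering time": the implicit form recovers the fluid (drift) limit instead of amplifying the error.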
Energy Technology Data Exchange (ETDEWEB)
Bauer, Thilo; Jäger, Christof M. [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Jordan, Meredith J. T. [School of Chemistry, University of Sydney, Sydney, NSW 2006 (Australia); Clark, Timothy, E-mail: tim.clark@fau.de [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Centre for Molecular Design, University of Portsmouth, Portsmouth PO1 2DY (United Kingdom)
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations
Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank
2008-05-01
Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in extracted effective atomic numbers Z and the relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on parametrization of the linear attenuation coefficient for 17 tissue-equivalent inserts inside a solid water phantom was done. Scans of the phantom were acquired at 100 kVp and 140 kVp from which Z and ρe values of each insert were derived. The mean errors on Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. Comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers a significantly higher accuracy compared to the conventional single-energy segmentation.
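The dual-energy extraction described above rests on a two-parameter form of the linear attenuation coefficient, μ(E) = ρe·(a(E) + b(E)·Z^m); measuring μ at two tube voltages then gives two equations in Z and ρe. A round-trip sketch, in which the coefficients a and b, the exponent m = 3.3, and the insert values are illustrative assumptions rather than the paper's fitted parametrization:

```python
M = 3.3   # assumed effective exponent of the photoelectric Z-dependence

def extract_z_rho(mu1, mu2, a1, b1, a2, b2):
    """Invert the two-energy parametrization mu(E) = rho_e*(a(E) + b(E)*Z^M)
    for the effective atomic number Z and relative electron density rho_e.
    In practice a(E), b(E) are fitted to tissue-equivalent inserts; here they
    are treated as known."""
    ratio = mu1 / mu2
    z_m = (a1 - ratio * a2) / (ratio * b2 - b1)   # eliminate rho_e via the ratio
    z = z_m ** (1.0 / M)
    rho_e = mu1 / (a1 + b1 * z_m)
    return z, rho_e

# round trip with made-up coefficients standing in for a 100/140 kVp pair
a1, b1, a2, b2 = 0.20, 5.0e-4, 0.18, 1.0e-4
z_true, rho_true = 10.0, 3.0
mu1 = rho_true * (a1 + b1 * z_true ** M)
mu2 = rho_true * (a2 + b2 * z_true ** M)
z, rho_e = extract_z_rho(mu1, mu2, a1, b1, a2, b2)
print(z, rho_e)   # recovers the assumed Z and rho_e up to rounding
```

The (HU; ρ, Z) segmentation then assigns each voxel a material by its extracted Z rather than by density alone, which is what removes the soft-bone mis-assignments quoted above.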
Investigation of geometrical and scoring grid resolution for Monte Carlo dose calculations for IMRT
DeSmedt, B.; Vanderstraeten, B.; Reynaert, N.; DeNeve, W.; Thierens, H.
2005-09-01
Monte Carlo based treatment planning of two different patient groups treated with step-and-shoot IMRT (head-and-neck and lung treatments) with different CT resolutions and scoring methods is performed to determine the effect of geometrical and scoring voxel sizes on DVHs and calculation times. Dose scoring is performed in two different ways: directly into geometrical voxels (or in a number of grouped geometrical voxels) or into scoring voxels defined by a separate scoring grid superimposed on the geometrical grid. For the head-and-neck cancer patients, more than 2% difference is noted in the right optical nerve when using voxel dimensions of 4 × 4 × 4 mm3 compared to the reference calculation with 1 × 1 × 2 mm3 voxel dimensions. For the lung cancer patients, 2% difference is noted in the spinal cord when using voxel dimensions of 4 × 4 × 10 mm3 compared to the 1 × 1 × 5 mm3 calculation. An independent scoring grid introduces several advantages. In cases where a relatively high geometrical resolution is required and where the scoring resolution is less important, the number of scoring voxels can be limited while maintaining a high geometrical resolution. This can be achieved either by grouping several geometrical voxels together into scoring voxels or by superimposing a separate scoring grid of spherical voxels with a user-defined radius on the geometrical grid. For the studied lung cancer cases, both methods produce accurate results and introduce a speed increase by a factor of 10-36. In cases where a low geometrical resolution is allowed, but where a high scoring resolution is required, superimposing a separate scoring grid on the geometrical grid allows a reduction in geometrical voxels while maintaining a high scoring resolution. For the studied head-and-neck cancer cases, calculations performed with a geometrical resolution of 2 × 2 × 2 mm3 and a separate scoring grid containing spherical scoring voxels with a radius of 2 mm produce accurate results
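Grouping several geometrical voxels into one scoring voxel, the first approach above, amounts to a block average of the dose grid. A one-dimensional sketch; mass-weighted averaging is an assumption (simple averaging is also possible), and 3-D grouping applies the same idea along each axis:

```python
def group_scoring(dose, mass, n):
    """Group each run of n geometrical voxels into one scoring voxel by
    mass-weighted averaging of the dose (1-D for clarity)."""
    out = []
    for i in range(0, len(dose), n):
        d, m = dose[i:i + n], mass[i:i + n]
        out.append(sum(di * mi for di, mi in zip(d, m)) / sum(m))
    return out

dose = [1.0, 1.0, 2.0, 2.0, 4.0, 4.0]   # dose per geometrical voxel
mass = [1.0, 1.0, 1.0, 1.0, 1.0, 3.0]   # voxel masses
print(group_scoring(dose, mass, 2))      # three scoring voxels
```

The speed gain quoted in the abstract comes from tallying into far fewer scoring voxels while the transport geometry keeps its fine resolution.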
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of the half-value layer (HVL) and the off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful to evaluate organ doses absorbed by individual patients.
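Evaluating a mean organ dose from a calculated dose distribution reduces to averaging the dose grid over the voxels an organ mask selects. A minimal sketch with toy numbers and uniform voxel masses assumed:

```python
def organ_mean_dose(dose, organ_mask):
    """Mean organ dose from a flattened dose grid and a boolean organ mask;
    for uniform voxels, the mean of the dose volume histogram reduces to this
    simple masked average."""
    voxels = [d for d, inside in zip(dose, organ_mask) if inside]
    return sum(voxels) / len(voxels)

# 2x2x2 toy grid flattened to 8 voxels; the mask marks a hypothetical organ
dose = [20.1, 23.5, 22.0, 19.8, 25.2, 24.8, 18.9, 21.0]   # mGy
mask = [False, True, True, False, True, False, False, False]
print(organ_mean_dose(dose, mask))   # average over the three organ voxels, mGy
```

The head-scan finding above is exactly the case where this per-organ average diverges from the single scanner-output number CTDIvol.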
Raedt, Hans De; Lagendijk, Ad
1981-01-01
Starting from a genuine discrete version of the Feynman path-integral representation for the partition function, calculations have been made of the energy, specific heat, and the static density-density correlation functions for a one-dimensional lattice model at nonzero temperatures. A Monte Carlo t
Jacimovic, R; Maucec, M; Trkov, A
2003-01-01
An experimental verification of Monte Carlo neutron flux calculations in typical irradiation channels in the TRIGA Mark II reactor at the Jozef Stefan Institute is presented. It was found that the flux, as well as its spectral characteristics, depends rather strongly on the position of the irradiati
Routti, J T
1975-01-01
The monokinetic and multigroup Monte Carlo albedo methods applicable to estimating neutron leakage through penetrations in the shielding of high-energy accelerators are reviewed. They are used to calculate attenuation factors and dose levels in the tunnels of the CERN intersecting storage rings. (28 refs).
DEFF Research Database (Denmark)
Mangiarotti, Alessio; Sona, Pietro; Ballestrero, Sergio;
2012-01-01
Approximate analytical calculations of multi-photon effects in the spectrum of total radiated energy by high-energy electrons crossing thin targets are compared to the results of Monte Carlo type simulations. The limits of validity of the analytical expressions found in the literature are establi...
Zweck, Christopher; Zreda, Marek; Desilets, Darin
2013-10-01
Conventional formulations of changes in cosmogenic nuclide production rates with snow cover are based on a mass-shielding approach, which neglects the role of neutron moderation by hydrogen. This approach can produce erroneous correction factors and add to the uncertainty of the calculated cosmogenic exposure ages. We use a Monte Carlo particle transport model to simulate fluxes of secondary cosmic-ray neutrons near the surface of the Earth and vary surface snow depth to show changes in neutron fluxes above rock or soil surface. To correspond with shielding factors for spallation and low-energy neutron capture, neutron fluxes are partitioned into high-energy, epithermal and thermal components. The results suggest that high-energy neutrons are attenuated by snow cover at a significantly higher rate (shorter attenuation length) than indicated by the commonly-used mass-shielding formulation. As thermal and epithermal neutrons derive from the moderation of high-energy neutrons, the presence of a strong moderator such as hydrogen in snow increases the thermal neutron flux both within the snow layer and above it. This means that low-energy production rates are affected by snow cover in a manner inconsistent with the mass-shielding approach and those formulations cannot be used to compute snow correction factors for nuclides produced by thermal neutrons. Additionally, as above-ground low-energy neutron fluxes vary with snow cover as a result of reduced diffusion from the ground, low-energy neutron fluxes are affected by snow even if the snow is at some distance from the site where measurements are made.
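The conventional mass-shielding formulation that the study critiques reduces to a single exponential in mass depth. The sketch below shows that baseline correction factor; the snow density and attenuation length are illustrative assumptions, not values from the paper, and the paper's point is precisely that this form fails for low-energy neutrons.

```python
import math

def mass_shielding_factor(snow_depth_cm, snow_density=0.3, attenuation_length=160.0):
    """Production-rate correction factor under the mass-shielding approach:
    exp(-x / Lambda), with mass depth x in g/cm^2 and attenuation length
    Lambda in g/cm^2 (both values here are assumed for illustration)."""
    mass_depth = snow_depth_cm * snow_density  # g/cm^2
    return math.exp(-mass_depth / attenuation_length)

print(round(mass_shielding_factor(100.0), 3))  # 1 m of snow at 0.3 g/cm^3 -> 0.829
```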
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Energy Technology Data Exchange (ETDEWEB)
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A and M Univ., College Station, TX (United States)]|[Los Alamos National Lab., NM (United States)
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and {alpha}-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute {alpha}-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.
Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel
2005-01-01
Recent work has used a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique shares resources more fairly than traditional methods, in that it does not tie up a single computing resource but instead shares capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated at between 40 and 100 Gflop/s. When the whole system is employed on one problem, up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.
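Because Monte Carlo histories are independent, the simple workload distribution described above amounts to cutting a job into self-contained batches and pooling the tallies afterwards. The sketch below illustrates that arithmetic with a toy slab-transmission problem; it is not MCBEND code, and all numbers are invented.

```python
import random

def run_batch(n_particles, seed):
    """One self-contained work unit: returns (tally, sample count)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # toy physics: a particle crosses a 2-mean-free-path slab uncollided
        if rng.random() < 0.135:
            transmitted += 1
    return transmitted, n_particles

# Each (size, seed) pair can be handed to any free node, in any order;
# only the pooled sums matter, which is why a simple asynchronous
# distribution is as efficient here as PVM-style coordination.
results = [run_batch(10_000, seed) for seed in range(8)]
hits = sum(h for h, _ in results)
total = sum(n for _, n in results)
print(f"pooled transmission estimate: {hits / total:.4f}")
```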
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
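The baseline canonical sampler that such extensions build on is ordinary Metropolis dynamics. For the four-state Potts model studied above it can be sketched as follows; this is an illustrative toy of the standard algorithm, not the paper's extended microcanonical method, and the lattice size, temperature, and sweep count are arbitrary.

```python
import math
import random

def potts_energy(spins, L):
    """E = -sum over nearest-neighbour bonds of delta(s_i, s_j),
    counting each bond once via the right and down neighbours."""
    E = 0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            E -= (s == spins[(i + 1) % L][j]) + (s == spins[i][(j + 1) % L])
    return E

def metropolis_sweep(spins, L, beta, rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j], rng.randrange(4)
        # energy change of flipping site (i, j) from `old` to `new`
        dE = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = spins[(i + di) % L][(j + dj) % L]
            dE += (old == nb) - (new == nb)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = new

rng = random.Random(1)
L, beta = 8, 1.2   # beta above the q=4 transition point ln(3) ~ 1.099
spins = [[rng.randrange(4) for _ in range(L)] for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, beta, rng)
print("energy per site:", potts_energy(spins, L) / (L * L))
```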
Energy Technology Data Exchange (ETDEWEB)
Guenay, Mehtap [Malatya Univ. (Turkey). Physics Department
2015-03-15
In this study, salt-heavy metal mixtures consisting of 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% UO{sub 2}, 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% NpO{sub 2}, and 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% UCO were used as fluids. The fluids were used in the liquid first wall, blanket, and shield zones of a fusion-fission hybrid reactor system. A beryllium (Be) zone with a width of 3 cm was used for neutron multiplication between the liquid first wall and the blanket. 9Cr2WVTa ferritic steel with a width of 4 cm was used as the structural material. The contributions of each isotope in the fluids to the nuclear parameters, such as the tritium breeding ratio (TBR), energy multiplication factor (M), and heat deposition rate, of the fusion-fission hybrid reactor were calculated in the liquid first wall, blanket, and shield zones. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
Hanada, Masanori; Miwa, Akitsugu; Nishimura, Jun; Takeuchi, Shingo
2009-05-01
In the string-gauge duality it is important to understand how the space-time geometry is encoded in gauge theory observables. We address this issue in the case of the D0-brane system at finite temperature T. Based on the duality, the temporal Wilson loop W in gauge theory is expected to contain the information of the Schwarzschild radius R_Sch of the dual black hole geometry as log W = R_Sch/(2π α' T). This translates to the power-law behavior log W = 1.89 (T/λ^(1/3))^(-3/5), where λ is the 't Hooft coupling constant. We calculate the Wilson loop on the gauge theory side in the strongly coupled regime by performing Monte Carlo simulations of supersymmetric matrix quantum mechanics with 16 supercharges. The results reproduce the expected power-law behavior up to a constant shift, which is explainable as α' corrections on the gravity side. Our conclusion also demonstrates manifestly the fuzzball picture of black holes. PMID:19518857
International Nuclear Information System (INIS)
Hybrid methods of neutron transport have increased greatly in use, for example in applications that use both Monte Carlo and deterministic transport methods to calculate quantities of interest, such as the flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, so for nuclear reactors the representation of curved fuels (cylinder, sphere, etc.) is impacted, resulting in deviations of both mass and geometry in the computer model. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools that allow full generation of MCNP5 inputs (Cartesian mesh and multigroup XS) from a basis PENTRAN Sn model. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal → 2D problems, volumetric → 3D problems). The analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using multigroup MCNP on a PWR assembly, compared to a reference exact combinatorial MCNP geometry calculation. For the same MCNP multigroup problems, we also characterize this linear relationship in discrete ordinates (3D PENTRAN). Finally, for 3D Sn models, we show an application of corner fractioning, a volume-weighted recovery of underrepresented target fuel mass that reduced the pcm error to < 100, compared to reference Monte Carlo, in the application to a PWR assembly. (author)
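The percent areal deviation entering the slope analysis above can be illustrated with a toy stair-step model of a fuel pin on an n × n Cartesian mesh, where a cell counts as fuel if its center lies inside the circle. The pin radius and pitch below are typical PWR-like values chosen for illustration, not the paper's model, and the cell-center rule is one simple assignment choice among several.

```python
import math

def cartesian_pin_area(radius, n_cells, pitch):
    """Stair-step area of a centered pin on an n x n mesh: a cell counts
    as fuel if its center lies inside the circle of the given radius."""
    dx = pitch / n_cells
    area = 0.0
    for i in range(n_cells):
        for j in range(n_cells):
            x = -pitch / 2 + (i + 0.5) * dx
            y = -pitch / 2 + (j + 0.5) * dx
            if x * x + y * y <= radius * radius:
                area += dx * dx
    return area

radius, pitch = 0.41, 1.26   # cm; assumed PWR-like pin-cell values
true_area = math.pi * radius ** 2
for n in (10, 20, 40):
    dev = 100.0 * (cartesian_pin_area(radius, n, pitch) - true_area) / true_area
    print(f"{n:>3} x {n} mesh: areal deviation {dev:+.2f}%")
```

With the linear pcm-versus-deviation relationship reported in the abstract, such a deviation multiplied by the fitted slope would estimate the eigenvalue bias of the meshed model.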
Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.
2006-06-01
The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning, and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as statistical uncertainties arising when Monte Carlo based systems are used in patient dose calculations. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) type low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New required experimental input data for commissioning are presented. The result of validation shows an excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water', as done in a conventional system. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. Therefore it is imperative that a consistent approach to dose reporting is used.
Institute of Scientific and Technical Information of China (English)
刘松芬; 胡北来
2003-01-01
The internal energy and pressure of a dense hydrogen plasma are calculated by the direct path integral Monte Carlo approach. The Kelbg potential is used as the interaction potential both between electrons and between protons and electrons in the calculation. The complete formulae for the internal energy and pressure in a dense hydrogen plasma derived for the simulation are presented. The correctness of the derived formulae is validated by the simulation results obtained. The numerical results are discussed in detail.
International Nuclear Information System (INIS)
Highlights: • Among the kinetic parameters, the most important ones are βeff and Λ. • Several methods, including the Rossi-α and Feynman-α techniques, the slope fit method, and the MCNPX code, have been investigated. • The Monte Carlo MCNPX code was used to simulate a geometrical model of the TRIGA core. • The results of the methods have been validated. - Abstract: In this study, noise analysis techniques, including the Feynman-α (variance-to-mean) and Rossi-α (correlation) techniques, and a dynamic method, the slope fit method, have been used to calculate the effective delayed neutron fraction (βeff) and neutron reproduction time (Λ) in an Accelerator Driven Subcritical TRIGA reactor. The obtained results have been compared with MCNPX code results. The relative differences between the MCNPX code and the Feynman-α and Rossi-α techniques and the slope fit method for βeff are approximately −5.4%, 1.2%, and −10.6%, −14.8%, respectively, and for Λ approximately 2.1%. According to the results, the noise methods can be considered ideal for detection with high efficiency and zero dead time; in the slope fit method, the decay of the delayed neutrons is neglected and only the prompt neutrons are taken into account. In addition, the quantities simulated in the current study are validated against both the reference data and the results of the MCNPX code. The purpose of this study is thus to simulate the commonly used experimental methods with the MCNPX code and to investigate the convergence and accuracy of the computational results of the different analysis methods in the calculation of the kinetic parameters of an Accelerator Driven Subcritical TRIGA reactor
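The Feynman-α statistic used above is the excess variance-to-mean ratio Y = var(c)/mean(c) − 1 of detector counts collected in equal time gates: Y ≈ 0 for an uncorrelated (Poisson) source, and Y > 0 when correlated fission chains are present, which is what carries the βeff and Λ information. The sketch below contrasts the two cases with entirely synthetic count data.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplication method for a Poisson variate."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def feynman_Y(counts):
    """Excess variance-to-mean ratio: ~0 for Poisson, >0 for chains."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean - 1.0

rng = random.Random(7)
gates = 5000
# uncorrelated source: plain Poisson counts per gate
poisson_counts = [poisson_sample(5.0, rng) for _ in range(gates)]
# correlated source: each "fission chain" dumps several counts into one gate
chain_counts = [sum(rng.randint(1, 5) for _ in range(poisson_sample(2.0, rng)))
                for _ in range(gates)]
print(f"Y(uncorrelated) = {feynman_Y(poisson_counts):+.3f}")
print(f"Y(correlated)   = {feynman_Y(chain_counts):+.3f}")
```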
Patient-specific Monte Carlo dose calculations for 103Pd breast brachytherapy
Miksys, N.; Cygler, J. E.; Caudrelier, J. M.; Thomson, R. M.
2016-04-01
This work retrospectively investigates patient-specific Monte Carlo (MC) dose calculations for 103Pd permanent implant breast brachytherapy, exploring various necessary assumptions for deriving virtual patient models: post-implant CT image metallic artifact reduction (MAR), tissue assignment schemes (TAS), and elemental tissue compositions. Three MAR methods (thresholding, 3D median filter, virtual sinogram) are applied to CT images; resulting images are compared to each other and to uncorrected images. Virtual patient models are then derived by application of different TAS ranging from TG-186 basic recommendations (mixed adipose and gland tissue at uniform literature-derived density) to detailed schemes (segmented adipose and gland with CT-derived densities). For detailed schemes, alternate mass density segmentation thresholds between adipose and gland are considered. Several literature-derived elemental compositions for adipose, gland and skin are compared. MC models derived from uncorrected CT images can yield large errors in dose calculations, especially when used with detailed TAS. Differences in MAR method result in large differences in local doses when variations in CT number cause differences in tissue assignment. Between different MAR models (same TAS), PTV D90 and skin D1cm³ each vary by up to 6%. Basic TAS (mixed adipose/gland tissue) generally yield higher dose metrics than detailed segmented schemes: PTV D90 and skin D1cm³ are higher by up to 13% and 9%, respectively. Employing alternate adipose, gland and skin elemental compositions can cause variations in PTV D90 of up to 11% and skin D1cm³ of up to 30%. Overall, AAPM TG-43 overestimates dose to the PTV (D90 on average 10% and up to 27%) and underestimates dose to the skin (D1cm³ on average 29% and up to 48%) compared to the various MC models derived using the post-MAR CT images studied.
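The D90 metric compared throughout the abstract above is the minimum dose received by the hottest 90% of the target volume, i.e. the 10th percentile of the voxel doses. A minimal sketch with synthetic doses (values and distribution invented for illustration):

```python
import numpy as np

def D(percent, doses):
    """D_x metric: dose covering `percent` of the volume, i.e. the
    (100 - percent)-th percentile of the voxel doses."""
    return float(np.percentile(doses, 100.0 - percent))

rng = np.random.default_rng(42)
ptv_doses = rng.normal(loc=120.0, scale=15.0, size=10_000)  # Gy, synthetic
print(f"D90 = {D(90, ptv_doses):.1f} Gy")
```

The same function gives skin D1cm³ if the input array holds the doses of the hottest contiguous 1 cm³ of skin and `percent` is chosen accordingly.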
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to their ability to model complex 3-dimensional geometries more accurately and to better track the evolution of the temporal nuclide inventory by simulating the actual physical process using continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to justify the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established
MCNP: a general Monte Carlo code for neutron and photon transport
International Nuclear Information System (INIS)
MCNP is a very general Monte Carlo neutron photon transport code system with approximately 250 person years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its data base the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N Particle where N particle varieties will be transported
Brons, S; Elsässer, T; Ferrari, A; Gadioli, E; Mairani, A; Parodi, K; Sala, P; Scholz, M; Sommerer, F
2010-01-01
Monte Carlo codes are rapidly spreading in the hadron therapy community due to their sophisticated nuclear/electromagnetic models, which allow an improved description of the complex mixed radiation field produced by nuclear reactions in therapeutic irradiation. In this contribution, results obtained with the Monte Carlo code FLUKA are presented, focusing on the production of secondary fragments in carbon ion interactions with water and on CT-based calculations of absorbed and biological effective dose for typical clinical situations. The results of the simulations are compared with the available experimental data and with the predictions of the GSI analytical treatment planning code TRiP.
Parallel processing of neutron transport in fuel assembly calculation
International Nuclear Information System (INIS)
Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly or a quarter of one corresponds to a unit mesh in current nodal calculations. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm, to be used on a parallel processor, for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherent parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple because of the many coupling parameters in the calculation and the data communication among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communication. The parallel processing was performed on a Quadputer with 4 CPUs in the NURAD Lab at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and the depletion calculation, respectively, which lead to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for group constant generation can be easily reduced by parallel processing on a parallel computer with small CPUs
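The quoted overall speedup of about 2.5 can be recovered from the two phase efficiencies by an Amdahl-style weighted combination. The 60/40 split of serial time between the assembly and depletion phases below is an assumption made for illustration; only the efficiencies and the 4-CPU count come from the abstract.

```python
def overall_speedup(fractions, efficiencies, n_cpus=4):
    """Combine per-phase parallel efficiencies into an end-to-end speedup:
    each phase's serial-time fraction f runs at speedup e * n_cpus, so the
    total parallel time is sum(f / (e * n_cpus))."""
    parallel_time = sum(f / (e * n_cpus)
                        for f, e in zip(fractions, efficiencies))
    return 1.0 / parallel_time

# assumed serial-time split between assembly and depletion phases: 60/40
s = overall_speedup(fractions=[0.6, 0.4], efficiencies=[0.543, 0.780])
print(f"overall speedup ~ {s:.2f}")
```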
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
International Nuclear Information System (INIS)
Highlights: • Calculation of the effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for βeff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (βeff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as the test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursor drift, according to the fuel velocity field. The forward and adjoint multi-group diffusion eigenvalue problems are implemented and solved adopting the multi-physics toolkit OpenFOAM, taking into account the convective and turbulent diffusive terms in the precursor balance. These two approaches show good agreement over the whole range of MSFR operating conditions. An analytical formula for the circulating-to-static-conditions βeff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of the in-core recirculation vortex and turbulent diffusion are finally analysed and discussed
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified on test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited from the fluid simulator), the implementation was made efficient and reusable.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
Exact modeling of the torus geometry with Monte Carlo transport code
International Nuclear Information System (INIS)
It is valuable to model the torus geometry exactly in the neutronics design of fusion reactors, in order to assess neutronics characteristics such as the tritium breeding ratio, heat generation rate, etc., near the plasma. The Monte Carlo code MORSE-GG, which plays an important role in the radiation streaming calculations of fusion reactors, had been able to deal only with geometry composed of second-order surfaces. The MORSE-GG program is modified to handle the torus geometry, a fourth-order surface, by solving biquadratic (quartic) equations, in the hope that the code becomes more effective for the neutronics calculation of Tokamak fusion reactors. (author)
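The fourth-order surface problem described above can be made concrete: a ray p + t·d intersects the torus (√(x² + y²) − R)² + z² = r² where t satisfies a quartic obtained by isolating the square root and squaring. The sketch below is an illustrative re-derivation with a generic polynomial root finder, not the MORSE-GG implementation.

```python
import numpy as np

def ray_torus_hits(p, d, R, r):
    """Forward intersection distances of ray p + t*d with a z-axis torus of
    ring radius R and tube radius r. Squaring the surface equation gives
    (|p + t d|^2 + R^2 - r^2)^2 = 4 R^2 ((px + t dx)^2 + (py + t dy)^2),
    a quartic in t whose coefficients are collected below."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    dd, pd, pp = d @ d, p @ d, p @ p
    k = pp + R * R - r * r
    a4 = dd * dd
    a3 = 4 * dd * pd
    a2 = 2 * dd * k + 4 * pd * pd - 4 * R * R * (d[0] ** 2 + d[1] ** 2)
    a1 = 4 * pd * k - 8 * R * R * (p[0] * d[0] + p[1] * d[1])
    a0 = k * k - 4 * R * R * (p[0] ** 2 + p[1] ** 2)
    roots = np.roots([a4, a3, a2, a1, a0])
    # keep real, forward intersections only
    return sorted(t.real for t in roots if abs(t.imag) < 1e-9 and t.real > 0)

# a ray along +x through the torus midplane crosses the tube four times
hits = ray_torus_hits(p=[-5.0, 0.0, 0.0], d=[1.0, 0.0, 0.0], R=3.0, r=1.0)
print([round(t, 3) for t in hits])  # → [1.0, 3.0, 7.0, 9.0]
```

The four distances correspond to the tube crossings at x = −4, −2, 2, and 4, which checks the coefficient algebra against geometry.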
Road Transport Congestion Costs Calculations-Adaptation to Engineering Approach
Directory of Open Access Journals (Sweden)
Marjan Lep
2008-01-01
Full Text Available The article presents a so-called engineering approach to computing total road transport congestion costs. According to economic welfare theory, the total cost of transport congestion is defined as the deadweight loss (DWL) of infrastructure use. The DWL can be formulated mathematically with a set of equations, but that form is not directly applicable to calculations on a concrete road network; it must be transformed into an engineering form that uses transport-engineering data such as classified road links, traffic volumes, and passenger unit costs. The resulting equation applies well to the interurban road network; adaptations are needed for urban road network cost calculations, where time losses are less closely tied to link travel time. The final equation was derived for the purposes of national road congestion cost calculation.
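The engineering form described above amounts to an aggregation over classified links. A deliberately simplified sketch (this computes total delay cost, which the paper's DWL formulation refines; the link data and value of time below are invented):

```python
def congestion_cost(links, value_of_time):
    """Toy engineering-form aggregation over classified road links:
    sum of volume x (congested travel time - free-flow travel time) x unit time cost.
    links: iterable of (volume [veh], congested time [h], free-flow time [h])."""
    return sum(v * (t_cong - t_free) * value_of_time
               for v, t_cong, t_free in links)

# Hypothetical two-link network, value of time 10 currency units per hour
links = [(1000, 0.50, 0.30),   # congested link: 0.2 h delay per vehicle
         (500, 0.25, 0.20)]    # mildly congested link
total = congestion_cost(links, 10.0)   # 225 veh-h of delay -> 2250 units
```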
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) from the dose bin grid, which has micrometer dimensions in the transverse direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
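The swarm quantities such a code reports can be illustrated with a far cruder model than LXCat cross sections: a constant collision frequency and isotropic scattering that resets the electron to unit speed, for which the time-averaged drift velocity tends to accel/nu. Everything below is a toy assumption, not the program's actual method:

```python
import numpy as np

def drift_velocity(accel, nu, n_flights=200_000, seed=1):
    """Toy swarm estimate: exponential free-flight times with constant
    collision frequency nu; each collision isotropically randomizes the
    direction of a unit-speed velocity. The time-averaged velocity along
    the field approaches accel / nu."""
    rng = np.random.default_rng(seed)
    dts = rng.exponential(1.0 / nu, n_flights)   # free-flight durations
    vz0 = rng.uniform(-1.0, 1.0, n_flights)      # cos(theta) of post-collision velocity
    disp = vz0 * dts + 0.5 * accel * dts**2      # field-parallel displacement per flight
    return disp.sum() / dts.sum()                # time-averaged drift velocity
```

With accel = 0.2 and nu = 2.0 (arbitrary units) the estimate converges to about 0.1, the analytic accel/nu value for this model.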
Monte Carlo calculation of the maximum therapeutic gain of tumor antivascular alpha therapy
Energy Technology Data Exchange (ETDEWEB)
Huang, Chen-Yu; Oborn, Bradley M.; Guatelli, Susanna; Allen, Barry J. [Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia); Illawarra Cancer Care Centre, Wollongong, New South Wales 2522, Australia and Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia)
2012-03-15
Purpose: Metastatic melanoma lesions experienced marked regression after systemic targeted alpha therapy in a phase 1 clinical trial. This unexpected response was ascribed to tumor antivascular alpha therapy (TAVAT), in which effective tumor regression is achieved by killing endothelial cells (ECs) in tumor capillaries and, thus, depriving cancer cells of nutrition and oxygen. The purpose of this paper is to quantitatively analyze the therapeutic efficacy and safety of TAVAT by constructing Monte Carlo microdosimetric models. Methods: Geant4 was adapted to simulate the spatially nonuniform distribution of the alpha emitter {sup 213}Bi. The intraluminal model was designed to simulate the background dose to normal tissue capillary ECs from the nontargeted activity in the blood. The perivascular model calculates the EC dose from the activity bound to the perivascular cancer cells. The key parameters are the probability of an alpha particle traversing an EC nucleus, the energy deposition, the lineal energy, and the specific energy. These results were then applied to interpret the clinical trial. Cell survival rate and therapeutic gain were determined. Results: The specific energy for an alpha particle hitting an EC nucleus in the intraluminal and perivascular models is 0.35 and 0.37 Gy, respectively. As the average probability of traversal in these models is 2.7% and 1.1%, the mean specific energy per decay drops to 1.0 cGy and 0.4 cGy, which demonstrates that the source distribution has a significant impact on the dose. Using the melanoma clinical trial activity of 25 mCi, the dose to the tumor EC nucleus is found to be 3.2 Gy and that to a normal capillary EC nucleus 1.8 cGy. These data give a maximum therapeutic gain of about 180 and validate the TAVAT concept. Conclusions: TAVAT can deliver a cytotoxic dose to tumor capillaries without being toxic to normal tissue capillaries.
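The dose bookkeeping in the abstract can be reproduced with a few lines of arithmetic (the input values are taken directly from the quoted results; the rounding to ~1.0 cGy and ~180 is the authors'):

```python
def mean_specific_energy_cgy(z_hit_gy, p_traverse):
    """Mean specific energy per decay = per-hit specific energy x traversal
    probability, converted from Gy to cGy."""
    return z_hit_gy * p_traverse * 100.0

intraluminal = mean_specific_energy_cgy(0.35, 0.027)   # ~0.9 cGy (quoted as ~1.0 cGy)
perivascular = mean_specific_energy_cgy(0.37, 0.011)   # ~0.4 cGy
# Therapeutic gain: tumor EC dose (3.2 Gy) over normal EC dose (1.8 cGy)
gain = 3.2 / (1.8 / 100.0)                             # ~178, quoted as "about 180"
```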
Comparison between Acuros XB and Brainlab Monte Carlo algorithms for photon dose calculation
Energy Technology Data Exchange (ETDEWEB)
Misslbeck, M.; Kneschaurek, P. [Klinikum rechts der Isar der Technischen Univ. Muenchen (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radiologische Onkologie
2012-07-15
Purpose: The Acuros {sup registered} XB dose calculation algorithm by Varian and the Monte Carlo algorithm XVMC by Brainlab were compared with each other and with the well-established AAA algorithm, which is also from Varian. Methods: First, square fields were applied to two different artificial phantoms: (1) a 'slab phantom' with a 3 cm water layer, followed by a 2 cm bone layer, a 7 cm lung layer, and another 18 cm water layer and (2) a 'lung phantom' with water surrounding an eccentric lung block. For the slab phantom, depth-dose curves along the central beam axis were compared. The lung phantom was used to compare profiles at depths of 6 and 14 cm. As clinical cases, the CTs of three different patients were used. The original AAA open-field plans were recalculated with all three algorithms. Results: There were only minor differences between Acuros and XVMC in all artificial phantom depth doses and profiles; however, this was different for AAA, which showed deviations of up to 13% in depth dose and of a few percent in the profiles in the lung phantom. These deviations did not translate into the clinical cases, where the dose-volume histograms of all algorithms were close to each other for open fields. Conclusion: Only within artificial phantoms with clearly separated layers of simulated tissue does AAA show differences at layer boundaries compared to XVMC or Acuros. In real patient CTs, these differences were not observed in the dose-volume histogram of the planning target volume. (orig.)
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
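The telescoping idea behind multilevel Monte Carlo can be sketched independently of the multiscale finite element machinery above. A self-contained toy in the spirit of Giles: Euler paths of geometric Brownian motion, with each correction level coupling a fine and a coarse path driven by the same Brownian increments (all parameters and sample counts below are illustrative):

```python
import numpy as np

def euler_pair(T, n_fine, rng, mu=0.05, sig=0.2):
    """One coupled sample of S_T: Euler scheme on n_fine steps and on
    n_fine/2 steps, both driven by the same Brownian increments (S0 = 1)."""
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), n_fine)
    s_fine = 1.0
    for w in dW:
        s_fine += s_fine * (mu * dt + sig * w)
    if n_fine == 1:
        return s_fine, None                      # coarsest level has no partner
    s_coarse = 1.0
    for w in dW.reshape(-1, 2).sum(axis=1):      # merge increment pairs
        s_coarse += s_coarse * (mu * 2.0 * dt + sig * w)
    return s_fine, s_coarse

def mlmc_mean(T=1.0, L=4, n0=4000, seed=0):
    """Telescoping estimator E[P_L] ~ sum_l mean(P_l - P_{l-1}),
    with sample counts shrinking on the finer (more expensive) levels."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for level in range(L + 1):
        n = max(n0 // 2**level, 100)
        ys = []
        for _ in range(n):
            fine, coarse = euler_pair(T, 2**level, rng)
            ys.append(fine if coarse is None else fine - coarse)
        est += float(np.mean(ys))
    return est
```

Because the coarse and fine paths share increments, the correction terms have small variance and need few samples; the estimate converges to E[S_T] = exp(mu*T) up to the finest level's discretization bias.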
Energy Technology Data Exchange (ETDEWEB)
Brooks III, E D; Szoke, A; Peterson, J L
2005-11-15
We describe a Monte Carlo solution for time dependent photon transport, in the difference formulation with the material in local thermodynamic equilibrium (LTE), that is piecewise linear in its treatment of the material state variable. Our method employs a Galerkin solution for the material energy equation while using Symbolic Implicit Monte Carlo (SIMC) to solve the transport equation. In constructing the scheme, one has the freedom to choose between expanding the material temperature, or the equivalent black body radiation energy density at the material temperature, in terms of finite element basis functions. The former provides a linear treatment of the material energy while the latter provides a linear treatment of the radiative coupling between zones. Subject to the conditional use of a lumped material energy in the vicinity of strong gradients, possible with a linear treatment of the material energy, our approach provides a robust solution for time dependent transport of thermally emitted radiation that can address a wide range of problems. It produces accurate results in the diffusion limit.
Energy Technology Data Exchange (ETDEWEB)
Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, de l' Universite Laval, CHUQ, Pavillon L' Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d' Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Medical Physics Unit, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)
2010-10-15
Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: {sup 125}I, {sup 103}Pd, {sup 131}Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D{sub w,m} as opposed to dose to a small mass of medium in medium D{sub m,m}. Methods: Mean adipose, mammary gland, and breast tissues (as uniform mixture of the aforementioned tissues) are investigated as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using {sup 125}I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D{sub 90} values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using {sup 103}Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D{sub 90} values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results: Results from simulation (1) show that variations
International Nuclear Information System (INIS)
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (a high-burnup profile, a complete reactor core, ...) may induce biased estimations of keff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has also been developed. It locates and suppresses the transient due to initialization in an output series, applied here to keff and the Shannon entropy. It relies on modeling the stationary series as an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
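A crude stand-in for the convergence detection step can be sketched as follows: model the candidate stationary tail of the series as an order-1 autoregressive process and accept the earliest cut point after which the tail shows no mean drift. The thesis's actual test uses a Student bridge statistic; the z-type drift criterion below only illustrates the idea, and all thresholds are assumptions:

```python
import numpy as np

def transient_cut(series, alpha=2.0):
    """Return an index before which the initialization transient is judged
    to lie. For each candidate cut, fit an AR(1) lag-1 coefficient to the
    tail and compare the first-half/second-half mean drift against the
    AR(1)-corrected standard error of the tail mean."""
    x = np.asarray(series, float)
    n = len(x)
    step = max(1, n // 50)
    for cut in range(0, n // 2, step):
        tail = x[cut:]
        a = tail[:-1] - tail.mean()
        b = tail[1:] - tail.mean()
        phi = (a @ b) / (a @ a)                  # AR(1) lag-1 coefficient
        s2 = np.var(b - phi * a)                 # innovation variance
        sig_mean = np.sqrt(s2 / len(tail)) / abs(1.0 - phi)
        half = len(tail) // 2
        drift = abs(tail[:half].mean() - tail[half:].mean())
        if drift < alpha * sig_mean:
            return cut                           # transient judged over
    return n // 2

# Synthetic Shannon-entropy-like series: decaying transient plus noise
rng = np.random.default_rng(7)
t = np.arange(1000)
entropy = 20.0 * np.exp(-t / 50.0) + rng.normal(0.0, 0.1, t.size)
cut = transient_cut(entropy)   # number of initial batches to discard
```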
Effect of elemental compositions on Monte Carlo dose calculations in proton therapy of eye tumors
Rasouli, Fatemeh S.; Farhad Masoudi, S.; Keshazare, Shiva; Jette, David
2015-12-01
Recent studies in eye plaque brachytherapy have found considerable differences between dosimetric results obtained using a water phantom and those obtained using a complete human eye model. Since the eye continues to be simulated as water-equivalent tissue in the proton therapy literature, a similar study investigating such a difference when treating eye tumors with protons is indispensable. The present study examines this effect in proton therapy using Monte Carlo simulations. A three-dimensional eye model with elemental compositions is simulated and used to examine the dose deposited in the phantom. The beam is planned to pass through a designed beam line that moderates the protons to the energies required for ocular treatments. The results are compared with similar irradiation of a water phantom, as well as of a material with uniform density throughout the whole volume. Spread-out Bragg peaks (SOBPs) are created by adding pristine peaks to cover a typical tumor volume. Moreover, the corresponding beam parameters recommended by the ICRU are calculated, and the isodose curves are computed. The results show that the maximum dose deposited in ocular media is approximately 5-7% more than in the water phantom, and about 1-1.5% less than in the homogenized material of density 1.05 g cm-3. Furthermore, there is about a 0.2 mm shift in the Bragg peak due to the tissue composition difference between the models. It is found that using the weighted dose profiles optimized in a water phantom for the realistic eye model leads to a small disturbance of the SOBP plateau dose. In contrast to the plaque brachytherapy results for treatment of eye tumors, it is found that the differences between the simplified models presented in this work, especially the phantom containing the homogenized material, are not clinically significant in proton therapy. Taking into account the intrinsic uncertainty of the patient dose calculation for protons, and practical problems corresponding to applying patient
Domain decomposition and terabyte tallies with the OpenMC Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Memory limitations are a key obstacle to applying Monte Carlo neutron transport methods to high-fidelity full-core reactor analysis. Billions of unique regions are needed to carry out full-core depletion and fuel performance analyses, equating to terabytes of memory for isotopic abundances and tally scores - far more than can fit on a single computational node in modern architectures. This work introduces an implementation of domain decomposition that addresses this problem, demonstrating excellent scaling up to a 2.39TB mesh-tally distributed across 512 compute nodes running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at Argonne National Laboratory. (author)
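The memory figures above are easy to sanity-check with back-of-envelope arithmetic. The region and nuclide counts below are illustrative round numbers, not the benchmark's actual tally layout:

```python
def tally_bytes(n_regions, n_nuclides, n_scores=1, bytes_per_score=8):
    """One double-precision accumulator per (region, nuclide, score) bin."""
    return n_regions * n_nuclides * n_scores * bytes_per_score

# A billion depletion regions with ~300 tracked nuclides already reaches the
# terabyte scale quoted above
total_tb = tally_bytes(10**9, 300) / 1e12          # 2.4 TB
per_node_gb = tally_bytes(10**9, 300) / 512 / 1e9  # ~4.7 GB on each of 512 nodes
```

Domain decomposition works because each node then only holds the bins for its own spatial subdomain, which fits comfortably in node memory.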
LTRACK: Beam-transport calculation including wakefield effects
International Nuclear Information System (INIS)
LTRACK is a first-order beam-transport code that includes wakefield effects up to quadrupole modes. This paper introduces the reader to the code by describing its history and method of calculation, and by briefly summarizing the input/output information. Future plans for the code are also described.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan
2014-08-01
Monte Carlo simulations with FLUKA and Geant4 were performed to study mass attenuation for various types of soil at the 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted in all parameters as the photon energy and the chemical composition of the sample change. The simulated parameters were compared with experimental data and the XCOM program. The simulations show that the calculated mass attenuation coefficients were closer to the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may also be employed for additional calculations of the photon attenuation characteristics of soil samples collected from other places.
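The quantity being compared can be demonstrated with a minimal narrow-beam transmission Monte Carlo: sample exponential free paths through a slab and recover mu/rho from the transmitted fraction. The 0.077 cm2/g input below is an illustrative soil-like value, not one of the paper's results:

```python
import numpy as np

def mc_mass_attenuation(mu_rho, rho, x_cm, n=200_000, seed=42):
    """Estimate a mass attenuation coefficient (cm^2/g) from a simulated
    narrow-beam transmission experiment: photons travel exponential free
    paths with linear coefficient mu = (mu/rho) * rho, and the estimate is
    recovered as mu/rho = -ln(T) / (rho * x) from the transmitted fraction T."""
    rng = np.random.default_rng(seed)
    paths = rng.exponential(1.0 / (mu_rho * rho), n)   # free-path lengths, cm
    T = np.mean(paths > x_cm)                          # fraction crossing the slab
    return -np.log(T) / (rho * x_cm)

est = mc_mass_attenuation(mu_rho=0.077, rho=1.6, x_cm=2.0)  # recovers ~0.077
```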
Efficient calculation of dissipative quantum transport properties in semiconductor nanostructures
Energy Technology Data Exchange (ETDEWEB)
Greck, Peter
2012-11-26
We present a novel quantum transport method that follows the non-equilibrium Green's function (NEGF) framework but side steps any self-consistent calculation of lesser self-energies by replacing them by a quasi-equilibrium expression. We termed this method the multi-scattering Buettiker-Probe (MSB) method. It generalizes the so-called Buettiker-Probe model but takes into account all relevant individual scattering mechanisms. It is orders of magnitude more efficient than a fully selfconsistent non-equilibrium Green's function calculation for realistic devices, yet accurately reproduces the results of the latter method as well as experimental data. This method is fairly easy to implement and opens the path towards realistic three-dimensional quantum transport calculations. In this work, we review the fundamentals of the non-equilibrium Green's function formalism for quantum transport calculations. Then, we introduce our novel MSB method after briefly reviewing the original Buettiker-Probe model. Finally, we compare the results of the MSB method to NEGF calculations as well as to experimental data. In particular, we calculate quantum transport properties of quantum cascade lasers in the terahertz (THz) and the mid-infrared (MIR) spectral domain. With a device optimization algorithm based upon the MSB method, we propose a novel THz quantum cascade laser design. It uses a two-well period with alternating barrier heights and complete carrier thermalization for the majority of the carriers within each period. We predict THz laser operation for temperatures up to 250 K implying a new temperature record.
International Nuclear Information System (INIS)
A code has been written to produce group data suited to taking into account the center-of-mass anisotropy of elastic neutron scattering in Monte Carlo calculations. The format of the generated data library is described. Up to now, variants of the library based on KEDAK2 and KEDAK3/ENDL78 have been produced. One of the libraries is listed in the appendix. (author)
PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations
International Nuclear Information System (INIS)
The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)
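Organ doses are combined into an effective dose as a tissue-weighted sum, E = sum over tissues of w_T * H_T. A sketch with an illustrative, made-up (but normalized) weight set rather than the actual ICRP tissue-weighting table that PCXMC uses:

```python
def effective_dose(organ_doses_mSv, weights):
    """Tissue-weighted sum of organ equivalent doses; the weights must sum
    to 1 so that a uniform whole-body dose maps to itself."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * organ_doses_mSv[t] for t, w in weights.items())

# Illustrative weights and doses only, not ICRP values or PCXMC output
weights = {"lungs": 0.12, "stomach": 0.12, "colon": 0.12,
           "breasts": 0.12, "remainder": 0.52}
doses = {"lungs": 0.30, "stomach": 0.10, "colon": 0.05,
         "breasts": 0.20, "remainder": 0.08}          # mSv, hypothetical exam
E = effective_dose(doses, weights)                    # ~0.12 mSv
```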
Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport
International Nuclear Information System (INIS)
Highlights: • The history splitting method was developed for non-Boltzmann Monte Carlo estimators. • The method allows variance reduction for pulse-height and higher moment estimators. • It works in highly multiplicative problems but Russian roulette has to be replaced. • Estimation of higher moments allows the simulation of neutron noise measurements. • Biased sampling of fission helps the effective simulation of neutron noise methods. - Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible
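The central idea, that one forced-survival history stands for a weighted family of analogous histories, can be shown in a deliberately degenerate setting: a particle absorbed with probability `pa` at each collision, where the "absorbed now" branch is scored into a pulse-height-style tally at every step with its analogous-history probability weight. This is a toy sketch of the weight bookkeeping, not the paper's estimator:

```python
import numpy as np

def pulse_height_split(pa, max_n=25):
    """Tally of P(absorbed at the n-th collision), built by forcing survival
    at every collision (weight times (1 - pa)) and scoring the split-off
    'absorbed now' branch with weight w * pa. Deterministic here because
    the toy model has no other random variables."""
    tally = np.zeros(max_n + 1)
    w = 1.0
    for n in range(1, max_n + 1):
        tally[n] = w * pa          # analogous histories absorbed at collision n
        w *= 1.0 - pa              # forced-survival branch carries on
    return tally                   # residual weight w is the P(n > max_n) tail
```

The scored weights reproduce the analogous distribution pa * (1 - pa)^(n-1) exactly, and the weights of all branches plus the residual sum to 1.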
Liu, Han; Zhuang, Tingliang; Stephans, Kevin; Videtic, Gregory; Raithel, Stephen; Djemil, Toufik; Xia, Ping
2015-01-01
For patients with medically inoperable early-stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy, early treatment plans were based on a simpler dose calculation algorithm, the pencil beam (PB) calculation. Because these patients had the longest treatment follow-up, identifying dose differences between the PB-calculated dose and the Monte Carlo-calculated dose is clinically important for understanding treatment outcomes. Previous studies found significant dose differences between the PB dose calculation and more accurate dose calculation algorithms, such as convolution-based or Monte Carlo (MC), mostly for three-dimensional conformal radiotherapy (3D CRT) plans. The aim of this study is to investigate whether these observed dose differences also exist for intensity-modulated radiotherapy (IMRT) plans for both centrally and peripherally located tumors. Seventy patients (35 central and 35 peripheral) were retrospectively selected for this study. The clinical IMRT plans that were initially calculated with the PB algorithm were recalculated with the MC algorithm. Among these paired plans, dosimetric parameters were compared for the targets and critical organs. When compared to the MC calculation, the PB calculation overestimated doses to the planning target volumes (PTVs) of central and peripheral tumors by different magnitudes. The doses to 95% of the central and peripheral PTVs were overestimated by 9.7% ± 5.6% and 12.0% ± 7.3%, respectively. This dose overestimation did not affect doses to the critical organs, such as the spinal cord and lung. In conclusion, for NSCLC treated with IMRT, dose differences between the PB and MC calculations differed from those for 3D CRT. No significant dose differences in critical organs were observed between the two calculations. PMID:26699560
International Nuclear Information System (INIS)
Purpose: To investigate the use of the linear Boltzmann transport equation as a dose calculation tool which can account for interface effects, while still having faster computation times than Monte Carlo methods. In particular, we introduce a forward scattering approximation, in hopes of improving calculation time without a significant hindrance to accuracy. Methods: Two coupled Boltzmann transport equations were constructed, one representing the fluence of photons within the medium, and the other, the fluence of electrons. We neglect the scattering term within the electron transport equation, resulting in an extreme forward scattering approximation to reduce computational complexity. These equations were then solved using a numerical technique for solving partial differential equations, known as a finite difference scheme, where the fluence at each discrete point in space is calculated based on the fluence at the previous point in the particle's path. Using this scheme, it is possible to develop a solution to the Boltzmann transport equations by beginning with boundary conditions and iterating across the entire medium. The fluence of electrons can then be used to find the dose at any point within the medium. Results: Comparisons with Monte Carlo simulations indicate that even simplistic techniques for solving the linear Boltzmann transport equation yield expected interface effects, which many popular dose calculation algorithms are not capable of predicting. Implementation of a forward scattering approximation does not appear to drastically reduce the accuracy of this algorithm. Conclusion: Optimized implementations of this algorithm have been shown to be very accurate when compared with Monte Carlo simulations, even in build up regions where many models fail. Use of a forward scattering approximation could potentially give a reasonably accurate dose distribution in a shorter amount of time for situations where a completely accurate dose distribution is not
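The "iterate from the boundary" scheme can be illustrated in its simplest possible form: a one-group, one-direction caricature where the streaming/absorption equation d(phi)/dx = -mu * phi is marched across the grid from the boundary condition (the real method above couples photon and electron equations in more dimensions):

```python
import numpy as np

def sweep_fluence(mu, x_max, nx, phi0=1.0):
    """First-order marching solution of d(phi)/dx = -mu * phi: starting from
    the boundary value phi0, each grid point is computed from the previous
    point along the particle's path (an upwind finite-difference step)."""
    dx = x_max / nx
    phi = np.empty(nx + 1)
    phi[0] = phi0
    for i in range(nx):
        phi[i + 1] = phi[i] * (1.0 - mu * dx)
    return phi

phi = sweep_fluence(mu=0.5, x_max=4.0, nx=4000)
# phi[-1] approaches the analytic value exp(-mu * x_max) = exp(-2) as nx grows
```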
TRING: a computer program for calculating radionuclide transport in groundwater
International Nuclear Information System (INIS)
The computer program TRING is described, which enables the transport of radionuclides in groundwater to be calculated for use in long-term radiological assessments, using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate-level waste, and geologic disposal of high-level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)
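The kind of calculation such a program performs can be sketched with a toy 1D advection-decay model of a radionuclide plume. This is an illustrative sketch only, not TRING's actual method; the scheme (first-order upwind) and all names and parameter values are my own choices.

```python
import math

def steady_plume(decay, velocity, length, n_cells, n_steps, dt):
    """Toy 1D radionuclide migration in groundwater:
        dC/dt + v * dC/dx = -lambda * C,
    discretized with first-order upwind differencing, with a constant
    unit concentration held at the inflow boundary."""
    dx = length / n_cells
    c = [0.0] * (n_cells + 1)
    c[0] = 1.0  # inflow boundary condition (continuous release)
    for _ in range(n_steps):
        prev = c[:]
        for i in range(1, n_cells + 1):
            adv = velocity * (prev[i] - prev[i - 1]) / dx
            c[i] = prev[i] - dt * (adv + decay * prev[i])
    return c
```

Run to steady state, the profile approaches the analytic plume C(x) = C0 * exp(-lambda * x / v), which is a convenient sanity check for any transport code of this type.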
User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code
International Nuclear Information System (INIS)
This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, on a criticality or sub-criticality basis). This makes it possible to calculate keff (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte Carlo method. It allows a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF-2.2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for the multi-group description. (authors)
Design of a transport calculation system for logging sondes simulation
International Nuclear Information System (INIS)
Analysis of the available resources in the earth's crust is performed by different techniques, one of which is neutron logging. The design of the sondes used in such logging is supported by laboratory experiments as well as by numerical calculations. This work presents several calculation schemes designed to simplify the task of those who have to plan such experiments or optimize the parameters of this kind of sonde. These schemes use transport calculation codes, especially DaRT, TORT and MCNP, and cross-section processing modules from the SCALE system. Additionally, a system for DaRT and TORT data postprocessing using OpenDX is presented. It allows the analysis of the spatial distribution of the scalar flux, as well as cross-section condensation and reaction-rate calculations.
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits provides the estimate of the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability of some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood [F. Garwood (1936), "Fiducial limits for the Poisson distribution." Biometrika 28, 437-442] published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield the probability as a function of the mean μ and an actual value n), despite the notation used: μ_lower = (1/2) χ²_inv[(1-α)/2; 2n], μ_upper = (1/2) χ²_inv[(1+α)/2; 2n+2]. This formula for the upper and lower limits of the mean μ with the two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
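The Garwood-style one-sided upper limit can be computed without a chi-squared routine by inverting the Poisson CDF directly, since the upper limit is the mean μ at which observing n or fewer hits has probability 1 - LOC. A minimal stdlib-only sketch (function names are mine):

```python
import math

def poisson_cdf(n, mu):
    """P(X <= n) for X ~ Poisson(mu), summed term by term."""
    term = math.exp(-mu)
    total = term
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def garwood_upper_limit(n, loc):
    """One-sided upper confidence limit for a Poisson mean, given n
    observed hits and a level of confidence `loc`.  Solves
    P(X <= n; mu) = 1 - loc by bisection; numerically equivalent to
    0.5 * chi2_inv(loc; 2n + 2)."""
    target = 1.0 - loc
    lo, hi = 0.0, 10.0 * (n + 1) + 20.0   # cdf(n, hi) is far below target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid) > target:
            lo = mid   # mean too small: too much probability at <= n hits
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with zero hits observed and a 95% LOC the upper limit is -ln(0.05) ≈ 3.0 expected hits, the familiar "rule of three" scaled to the confidence level.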
International Nuclear Information System (INIS)
The reliability of calculation tools for evaluating dose rates behind multi-layered shields is important with regard to the certification of transport and storage casks. Existing benchmark databases such as SINBAD do not offer such configurations because they were developed for reactor and accelerator purposes. For this reason, a benchmark suite for validating Monte Carlo transport codes has been developed, based on our own experiments containing dose rates measured at different distances and levels from a transport and storage cask and on a public benchmark. The analysed and summarised experiments include a 60Co point source located in a cylindrical cask, a 252Cf line source shielded by iron and polyethylene (PE), and a bare 252Cf source moderated by PE in a concrete labyrinth with different inserted shielding materials to quantify neutron streaming effects on measured dose rates. Both MCNP (version 5.1.6) and MAVRIC, included in the SCALE 6.1 package, have been compared for photon and neutron transport. Aiming at low deviations between calculation and measurement requires precise source term specification and exact measurement of the dose rates, which have been evaluated carefully including known uncertainties. In MAVRIC, different source descriptions with respect to the group structure of the nuclear data library are analysed for the calculation of gamma dose rates, because the energy lines of 60Co can only be modelled in groups. In total, the comparison shows that MCNP fits very well to the measurements within up to two standard deviations, and that MAVRIC behaves similarly under the prerequisite that the source model is optimized. (author)
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, Th.; Nimal, J.C.; Chiron, M.
2001-07-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated using the THEMIS/NJOY system. The current libraries were derived from ENDF/B-VI and JEF-2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 and also evaluations from JEF-2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual is available. (authors)
Energy Technology Data Exchange (ETDEWEB)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross-section model parameters are calibrated for high-temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high-temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in the transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% over the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented, for the specified temperature range.
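The essence of this kind of calibration, fitting power-law collision-model parameters so that derived transport properties match tabulated reference data, can be sketched with a stdlib-only least-squares fit in log space. This is an illustrative sketch, not the paper's collision-integral procedure; the function names and the VHS-style viscosity form mu(T) = mu_ref * (T/T_ref)**omega are standard, but the numbers below are invented.

```python
import math

def fit_power_law(temps, mus, t_ref):
    """Fit mu(T) = mu_ref * (T / t_ref)**omega to tabulated data by
    ordinary least squares on ln(mu) vs ln(T / t_ref)."""
    xs = [math.log(t / t_ref) for t in temps]
    ys = [math.log(m) for m in mus]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # slope of the log-log regression line is the temperature exponent
    omega = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    mu_ref = math.exp(ybar - omega * xbar)
    return mu_ref, omega
```

In an actual DSMC calibration the fitted exponent would then fix the VHS/VSS cross-section parameters; here the fit simply recovers the exponent of whatever power-law data it is given.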
International Nuclear Information System (INIS)
The objective of this study was to estimate doses to the physician and the nurse assistant at different positions during interventional radiology procedures. In this study, effective doses obtained for the physician and at points occupied by other workers were normalised by the air kerma-area product (KAP), yielding conversion coefficients (CCs). The simulations were performed for two X-ray spectra (70 kVp and 87 kVp) using the radiation transport code MCNPX (version 2.7.0) and a pair of anthropomorphic voxel phantoms (MASH/FASH) used to represent both the patient and the medical professional at positions from 7 cm to 47 cm from the patient. The X-ray tube was represented by a point source positioned in the anterior-posterior (AP) and posterior-anterior (PA) projections. The CCs can be useful to calculate effective doses, which in turn are related to stochastic effects. With knowledge of the CC values and the KAP measured on an X-ray unit in a similar exposure, medical professionals will be able to know their own effective dose. - Highlights: ► This study presents a series of simulations to determine scatter dose in IR. ► Irradiation of the worker is non-uniform and a part of his body is shielded. ► With the CCs it is possible to estimate the occupational doses in the CA examination. ► Protection of medical personnel in IR is an important issue of radiological protection.
Santos, W. S.; Carvalho, A. B., Jr.; Hunt, J. G.; Maia, A. F.
2014-02-01
International Nuclear Information System (INIS)
Improving the prediction of radiation parameters and the reliability of fuel behaviour under different irradiation modes is particularly relevant for new fuel compositions, including recycled nuclear fuel. For fast reactors there is a strong dependence of nuclide accumulation on the nuclear data libraries. The effect of fission yield libraries on irradiated fuel is studied in MONTEBURNS-MCNP5-ORIGEN2 calculations of sodium fast reactors. Fission yield libraries are generated for sodium fast reactors with MOX fuel, using ENDF/B-VII.0, JEFF-3.1, the original FY-Koldobsky library, and GEFY 3.3 as sources. The transport libraries are generated from ENDF/B-VII.0 and JEFF-3.1. Analysis of irradiated MOX fuel using different fission yield libraries demonstrates a considerable spread in the concentrations of fission products: the discrepancies are ∼25% for inert gases, up to a factor of 5 for stable and long-lived nuclides, and up to 10 orders of magnitude for short-lived nuclides. (authors)
Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W
1986-12-01
A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
Directory of Open Access Journals (Sweden)
Kępisty Grzegorz
2015-09-01
In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
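The basic contrast between a staircase-type scheme (reaction rates frozen at their beginning-of-step values) and a predictor-corrector scheme of the slope/bridge family can be shown on a toy one-nuclide problem whose "reaction rate" depends on the evolving composition. This is a sketch under invented assumptions (rate r(N) = c*N, so the exact solution is known), not the MCB5 implementation.

```python
import math

def deplete(n0, c, dt, steps, corrector=False):
    """Toy burnup stepping for dN/dt = -r(N) * N with r(N) = c * N.
    Staircase: the rate is evaluated at the beginning of each step and
    held constant.  Predictor-corrector: predict the end-of-step state,
    re-evaluate the rate there, and redo the step with the average."""
    n = n0
    for _ in range(steps):
        r0 = c * n                          # beginning-of-step rate
        n_pred = n * math.exp(-r0 * dt)     # staircase / predictor step
        if corrector:
            r1 = c * n_pred                 # rate at the predicted state
            n = n * math.exp(-0.5 * (r0 + r1) * dt)
        else:
            n = n_pred
    return n
```

With n0 = c = 1 the exact solution is N(t) = 1/(1 + t); over ten steps of dt = 0.1 the staircase result visibly undershoots N(1) = 0.5 while the corrected step lands much closer, which mirrors why higher-order step models matter when the flux is renormalized each burnup step.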
Transport appraisal and Monte Carlo simulation by use of the CBA-DK model
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2011-01-01
This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic…, is explained. Furthermore, comprehensive assessments based on the set of distributions are made and implemented by use of a Danish case example. Finally, conclusions and a perspective are presented…
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, {Delta}D, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 {sup 125}I seeds. The breast case consisted of 87 Model-200 {sup 103}Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D{sub 90}, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm{sup 3} dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
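The variance-reduction mechanism behind correlated sampling, scoring the *difference* between two highly correlated histories rather than differencing two independent runs, can be demonstrated with a toy tally. This sketch is mine, not the PTRAN implementation: `f_hom` and `f_het` stand in for the dose scored by the same history in a homogeneous and a slightly perturbed (heterogeneous) geometry.

```python
import math
import random

def correlated_vs_uncorrelated(n, seed=0):
    """Estimate Var of a difference estimate two ways: sharing the same
    random history between the two geometries (correlated sampling) vs
    drawing independent histories for each (uncorrelated MC)."""
    rng = random.Random(seed)
    f_hom = lambda u: math.exp(-u)           # tally, homogeneous medium
    f_het = lambda u: math.exp(-1.05 * u)    # tally, perturbed medium
    corr = [f_het(u) - f_hom(u)
            for u in (rng.random() for _ in range(n))]
    unc = [f_het(rng.random()) - f_hom(rng.random()) for _ in range(n)]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    return var(corr), var(unc)
```

Because the shared history makes the two tallies nearly equal, the per-sample variance of the correlated difference is orders of magnitude below the uncorrelated one; this is the source of the efficiency gains the abstract reports.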
International Nuclear Information System (INIS)
Highlights: ► We have extended the KAERI library generation system to include a gamma cross-section generation capability. ► A gamma transport/diffusion calculation module has been implemented in KARMA 1.2. ► The computational results for benchmark problems show that the gamma library and gamma simulation in KARMA are reasonable. - Abstract: KAERI has developed the lattice transport calculation code KARMA (Kernel Analyzer by Ray-tracing Method for fuel Assembly) and its library generation system. Recently, the library generation system has been extended to include a gamma cross-section generation capability, and a gamma transport/diffusion calculation module has been implemented in KARMA 1.2. The method of characteristics used in the neutron transport calculation to estimate the eigenvalue has been utilized to predict the gamma flux distribution and energy deposition. In addition, the coarse mesh finite difference method with diffusion approximation has also been utilized to estimate the gamma flux distribution and energy deposition for each coarse mesh with homogenized pins, as a computationally efficient alternative. This paper describes the procedure to generate neutron-induced gamma production and gamma cross-section data, and the methods to predict the gamma flux distribution, gamma energy deposition and gamma-smeared pin power distribution. The computational results for benchmark problems show that the gamma library and gamma simulation in KARMA are reasonable. It is also noted that gamma-smeared power distributions predicted by the coarse mesh diffusion calculation are very accurate compared to the results of the transport calculation.
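A coarse-mesh diffusion solve of the kind mentioned above reduces, in its simplest 1D form, to a tridiagonal linear system. The following is a minimal sketch, not KARMA's CMFD implementation: one group, reflective boundaries, uniform source, solved with the Thomas algorithm; all names and values are mine.

```python
def gamma_diffusion_1d(d, sigma_a, source, n, dx):
    """Toy 1D finite-difference diffusion solve for a gamma flux:
        -D * phi'' + Sigma_a * phi = S,
    with reflective (zero-current) boundaries, via the Thomas algorithm."""
    a = [0.0] * n     # sub-diagonal
    b = [0.0] * n     # diagonal
    cc = [0.0] * n    # super-diagonal
    rhs = [source] * n
    w = d / dx ** 2   # mesh coupling coefficient
    for i in range(n):
        left = w if i > 0 else 0.0          # reflective: no edge leakage
        right = w if i < n - 1 else 0.0
        a[i] = -left
        cc[i] = -right
        b[i] = sigma_a + left + right
    # forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * cc[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # back substitution
    phi = [0.0] * n
    phi[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (rhs[i] - cc[i] * phi[i + 1]) / b[i]
    return phi
```

With a uniform source and reflective boundaries the flux must be flat, phi = S / Sigma_a everywhere, which gives a convenient exact check on the solver.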
Quantum Monte Carlo calculations of neutron matter with chiral three-body forces
Tews, I.; Gandolfi, S.; Gezerlis, A.; Schwenk, A.
2016-02-01
Chiral effective field theory (EFT) enables a systematic description of low-energy hadronic interactions with controlled theoretical uncertainties. For strongly interacting systems, quantum Monte Carlo (QMC) methods provide some of the most accurate solutions, but they require as input local potentials. We have recently constructed local chiral nucleon-nucleon (NN) interactions up to next-to-next-to-leading order (N2LO ). Chiral EFT naturally predicts consistent many-body forces. In this paper, we consider the leading chiral three-nucleon (3N) interactions in local form. These are included in auxiliary field diffusion Monte Carlo (AFDMC) simulations. We present results for the equation of state of neutron matter and for the energies and radii of neutron drops. In particular, we study the regulator dependence at the Hartree-Fock level and in AFDMC and find that present local regulators lead to less repulsion from 3N forces compared to the usual nonlocal regulators.
Supersonic flow with shock waves. Monte-Carlo calculations for low density plasma. I
International Nuclear Information System (INIS)
This report gives preliminary information about a Monte Carlo procedure to simulate the supersonic flow of a low-density plasma past a body in the transition regime. A computer program has been written for a UNIVAC 1108 machine to account for a plasma composed of neutral molecules and positive and negative ions. Different and rather general body geometries can be analyzed. Special attention is paid to the growth of detached shock waves in front of the body. (Author) 30 refs
Energy Technology Data Exchange (ETDEWEB)
Verde Velasco, J. M.; Garcia Repiso, S.; Martin Rincon, C.; Ramos Pacho, J. A.; Delgado Aparicio, J. M.; Perez Alvarez, M. E.; Saez Beltran, M.; Gomez Gonzalez, N.; Cons Perez, N.; Sena Espinel, E.
2013-07-01
The implementation of a Monte Carlo algorithm requires not only a careful series of steps, but also the adjustment of various calculation parameters that influence both the accuracy of the dose calculation and the time it requires, making it necessary to reach a compromise that achieves acceptable accuracy within an acceptable calculation time. In this paper we present our experience with this setting. (Author)
Gomà, Carles; Andreo, Pedro; Sempau, Josep
2016-03-01
This work calculates beam quality correction factors (kQ) in monoenergetic proton beams using detailed Monte Carlo simulation of ionization chambers. It uses the Monte Carlo code PENH and the electronic stopping powers resulting from the adoption of two different sets of mean excitation energy (I) values for water and graphite: (i) the currently recommended ICRU 37 and ICRU 49 values, I_w = 75 eV and I_g = 78 eV, and (ii) the recently proposed I_w = 78 eV and I_g = 81.1 eV. Twelve different ionization chambers were studied. The kQ factors calculated using the two different sets of I-values were found to agree with each other within 1.6% or better. kQ factors calculated using the current ICRU I-values were found to agree within 2.3% or better with the kQ factors tabulated in IAEA TRS-398, and within 1% or better with experimental values published in the literature. kQ factors calculated using the new I-values were also found to agree within 1.1% or better with the experimental values. This work concludes that perturbation correction factors in proton beams, currently assumed to be equal to unity, are in fact significantly different from unity for some of the ionization chambers studied.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations, considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
International Nuclear Information System (INIS)
Neutron and gamma spectra were measured behind and inside modules consisting of variable iron and water slabs that were installed in radial beams of the zero-power training and research reactors AKR of the Technical University Dresden and ZLFR of the University of Applied Sciences Zittau/Goerlitz. The NE-213 scintillation spectrometer used allowed the measurement of gamma and neutron fluence spectra in the energy regions 0.3-10 MeV for photons and 1.0-20 MeV for neutrons. The paper describes the experiments and presents important results of the measurements. They are compared with the results of Monte Carlo transport calculations made by means of the codes MCNP and TRAMO on an absolute scale of fluences.
Energy Technology Data Exchange (ETDEWEB)
Mariotti, F., E-mail: francesca.mariotti@bologna.enea.i [ENEA-BAS-ION IRP Radiation Protection Institute, Via dei Colli 16, 40136, Bologna (Italy); Gualdrini, G. [ENEA-BAS-ION IRP Radiation Protection Institute, Via dei Colli 16, 40136, Bologna (Italy)
2011-04-15
Working Task 4 (WP4) of ORAMED (Optimization of RAdiation protection for MEDical staff) is aimed at evaluating extremity doses (and dose distributions across the hands) of medical staff working in nuclear medicine departments, studying the influence of protective devices such as syringe and vial shields, improving such devices where possible, and proposing 'reference dose levels' for each standard nuclear medicine procedure. In particular, Task 4 is concerned with the study of extremity dosimetry for the hands of operators during the preparation and administration stages of the use of, for example, {sup 99m}Tc, {sup 18}F and {sup 90}Y (Zevalin) radionuclides. The aim of this report is to study photon-electron equilibrium conditions at 0.07 mm depth in the skin, to justify a simplified 'kerma approximation' approach in the planned complex Monte Carlo voxel hand modeling. Furthermore, a detailed investigation of primary electron and secondary bremsstrahlung photon transport from {sup 90}Y was performed to speed up the calculations. The results obtained in the simplified conditions investigated could be of help for the production calculations, introducing, if necessary, suitable correction factors applicable to the complex-condition results.
GPU-based high performance Monte Carlo simulation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br
2009-07-01
Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
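The "simple but time-consuming" class of problem used in such comparisons is typified by analog Monte Carlo slab transmission: each history is independent, so the kernel maps naturally onto one-thread-per-history GPU or multicore execution. A serial stdlib sketch (not the authors' code; the geometry and cross-sections are invented) shows the structure of one history:

```python
import math
import random

def slab_transmission(sigma_t, scatter_prob, thickness, histories, seed=42):
    """Analog MC neutron transport through a 1D slab: sample free
    flights from the exponential distribution, then scatter
    isotropically or absorb at each collision, counting neutrons that
    leak through the far face.  Histories are independent, which is
    what makes this kernel embarrassingly parallel."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0                       # normally incident neutron
        while True:
            # free-flight distance from the exponential distribution
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= thickness:
                transmitted += 1               # leaked through the back
                break
            if x < 0.0:
                break                          # reflected out the front
            if rng.random() >= scatter_prob:
                break                          # absorbed
            mu = 2.0 * rng.random() - 1.0      # isotropic scatter (1D mu)
    return transmitted / histories
```

For a pure absorber the result reduces to the analytic uncollided transmission exp(-sigma_t * thickness), a standard verification case; adding scattering can only raise the transmitted fraction above the uncollided value.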
Overview of TRIPOLI-4 version 7, Continuous-energy Monte Carlo Transport Code
International Nuclear Information System (INIS)
The TRIPOLI-4 code is used essentially for four major classes of applications: shielding studies, criticality studies, core physics studies, and instrumentation studies. In this updated overview of the Monte Carlo transport code TRIPOLI-4, we list and describe its current main features, including recent developments and extended capabilities such as effective beta estimation, photo-nuclear reactions and extended mesh tallies. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. TRIPOLI-4 supports execution in parallel mode. Special features and applications are also presented concerning: 'particle storage', resuming a stopped TRIPOLI-4 run, collision bands, Green's functions, source convergence in criticality mode, and mesh tallies.
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; James, Scott C
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
Monte Carlo simulation of phonon transport in variable cross-section nanowires
Institute of Scientific and Technical Information of China (English)
(no author listed)
2010-01-01
A dedicated Monte Carlo (MC) model is proposed to investigate the mechanism of phonon transport in variable cross-section silicon nanowires (NWs). Emphasis is placed on understanding the thermal rectification effect and thermal conduction in tapered cross-section and incremental cross-section NWs. In the simulations, both equal and unequal heat input conditions are discussed. Under the latter condition, the tapered cross-section NW shows a more prominent thermal rectification effect. Additionally, the heat conduction capacity of the tapered cross-section NW is always higher than that of the incremental one. These behaviors may be attributed to two factors: the different boundary conditions and the different volume distributions of the two geometries. Although boundary scattering plays an important role in nanoscale structures, the results suggest that its influence on heat conduction is less pronounced than that of volume distribution in NWs with variable cross-sections.
Monte Carlo Simulations of Spin Transport in Nanoscale InGaAs Field Effect Transistors
Thorpe, B; Langbein, F; Schirmer, S
2016-01-01
By augmenting an in-house developed, experimentally verified Monte Carlo device simulator with a Bloch equation model with a spin-orbit interaction Hamiltonian accounting for Dresselhaus and Rashba couplings, we simulate electron spin transport in a 25 nm gate length InGaAs MOSFET. We observe non-uniform decay of the net magnetization between the source and gate electrodes and an interesting magnetization recovery effect due to spin refocusing induced by the high electric field between the gate and drain electrodes. We demonstrate coherent control of the polarization vector of the drain current via the source-drain and gate voltages, and show that the magnetization of the drain current is sensitive to strain in the channel, suggesting that the device could act as a room-temperature nanoscale strain sensor.
Monte Carlo Simulations of Charge Transport in 2D Organic Photovoltaics.
Gagorik, Adam G; Mohin, Jacob W; Kowalewski, Tomasz; Hutchison, Geoffrey R
2013-01-01
The effect of morphology on charge transport in organic photovoltaics is assessed using Monte Carlo simulations. In isotropic two-phase morphologies, increasing the domain size from 6.3 to 18.3 nm improves the fill factor by 11.6%, a result of decreased tortuosity and relaxation of Coulombic barriers. Additionally, when small aggregates of electron acceptors are interdispersed into the electron donor phase, charged defects form in the system, reducing fill factors by 23.3% on average compared with systems without aggregates. In contrast, systems with idealized connectivity show a 3.31% decrease in fill factor when the domain size is increased from 4 to 64 nm. We attribute this to a decreased rate of exciton separation at donor-acceptor interfaces. Finally, we notice that the presence of Coulomb interactions increases device performance as devices become smaller. The results suggest that for commonly found isotropic morphologies the Coulomb interactions between charge carriers dominate exciton separation effects.
Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation.
Directory of Open Access Journals (Sweden)
Chizhu Ding
The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be considered separately, with different parameters, to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity are discussed. Accurate MC simulations may give better insight into light propagation in stone fruit and aid in achieving optimal fruit quality inspection without extensive experimental measurements.
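For intuition, a heavily simplified 1-D photon random walk shows how an optically distinct layer (here, an absorbing "core" boundary) changes the fraction of photons that re-emerge at the surface. All names and coefficients below are illustrative assumptions, not the paper's peach model, which is 3-D and parameterizes skin, flesh and core separately:

```python
import math
import random

def photon_walk(mu_a, mu_s, depth_core, n_photons=50_000, seed=1):
    """Minimal 1-D photon random walk in tissue with absorption
    coefficient mu_a and scattering coefficient mu_s (per unit length),
    bounded below by a totally absorbing 'core' at depth_core."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t          # survival probability per collision
    escaped = absorbed_core = 0
    for _ in range(n_photons):
        z = 0.0
        direction = 1.0           # launched into the tissue
        while True:
            z += direction * (-math.log(1.0 - rng.random()) / mu_t)
            if z <= 0.0:
                escaped += 1      # diffusely re-emitted at the surface
                break
            if z >= depth_core:
                absorbed_core += 1  # swallowed by the dense core
                break
            if rng.random() > albedo:
                break             # absorbed in the flesh
            direction = 1.0 if rng.random() < 0.5 else -1.0  # isotropic
    return escaped / n_photons, absorbed_core / n_photons

refl, core = photon_walk(mu_a=0.05, mu_s=1.0, depth_core=3.0)
```

Moving the core boundary closer to the surface, or adding a distinct skin layer, shifts these fractions, which is the qualitative effect the paper quantifies.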
A Monte Carlo transport code study of the space radiation environment using FLUKA and ROOT
Wilson, T; Carminati, F; Brun, R; Ferrari, A; Sala, P; Empl, A; MacGibbon, J
2001-01-01
We report on the progress of a current study aimed at developing a state-of-the-art Monte-Carlo computer simulation of the space radiation environment using advanced computer software techniques recently available at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. By taking the next-generation computer software appearing at CERN and adapting it to known problems in the implementation of space exploration strategies, this research is identifying changes necessary to bring these two advanced technologies together. The radiation transport tool being developed is tailored to the problem of taking measured space radiation fluxes impinging on the geometry of any particular spacecraft or planetary habitat and simulating the evolution of that flux through an accurate model of the spacecraft material. The simulation uses the latest known results in low-energy and high-energy physics. The output is a prediction of the detailed nature of the radiation environment experienced in space as well a...
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear, time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied, combined with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking, provides experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation, reducing the need for users to modify and recompile the code.
An accurate δf method for neoclassical transport calculation
International Nuclear Information System (INIS)
A δf method for neoclassical transport calculation, solving the drift kinetic equation, is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed that, unlike previous schemes, does not use an assumed g in the weight equation for advancing particle weights. This scheme employs an additional weight function to solve g directly from its kinetic equation using the idea of the δf method. Therefore the severe constraint that the real marker distribution must remain consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is also presented. By compensating for the momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities during collisions is greatly improved. Ion neoclassical transport due to self-collisions is examined in the finite banana case as well as in the zero banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvements in both the like-particle collision scheme and the weighting scheme, the δf simulation shows significantly upgraded performance for neoclassical transport studies. (author)
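Schematically, δf schemes of this kind split the distribution function into a known analytic part and a simulated perturbation, with each marker carrying a weight evolved along its trajectory (a generic textbook sketch, not the authors' exact equations; g denotes the marker density as in the abstract):

```latex
f = f_0 + \delta f, \qquad
w_i = \left.\frac{\delta f}{g}\right|_{z = z_i(t)}, \qquad
\frac{\mathrm{d} w_i}{\mathrm{d} t}
  = -\left.\frac{1}{g}\,\frac{\mathrm{d} f_0}{\mathrm{d} t}\right|_{z = z_i(t)}
```

The abstract's central point is that g should itself be solved from its kinetic equation, via an additional weight function, rather than assumed a priori; this is what relaxes the consistency constraint on the marker loading.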
An accurate δf method for neoclassical transport calculation
Energy Technology Data Exchange (ETDEWEB)
Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)
1999-03-01
A δf method for neoclassical transport calculation, solving the drift kinetic equation, is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed that, unlike previous schemes, does not use an assumed g in the weight equation for advancing particle weights. This scheme employs an additional weight function to solve g directly from its kinetic equation using the idea of the δf method. Therefore the severe constraint that the real marker distribution must remain consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is also presented. By compensating for the momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities during collisions is greatly improved. Ion neoclassical transport due to self-collisions is examined in the finite banana case as well as in the zero banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvements in both the like-particle collision scheme and the weighting scheme, the δf simulation shows significantly upgraded performance for neoclassical transport studies. (author)
Shielding calculations for spent CANDU fuel transport cask
International Nuclear Information System (INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so special attention must be focused in two directions: tracking the fuel reactivity in order to prevent critical mass formation, and protecting personnel during spent fuel handling. Shielding analyses, an essential component of nuclear safety, take into account the difficulties encountered during the handling, transport and storage of spent fuel bundles, with regard both to personnel protection and to the impact on the environment. The main objective here consists in estimating radiation doses in order to reduce them below specified limit values. The shielding calculations for the spent fuel transport cask were performed with three different codes: the XSDOSE and MORSE-SGC codes, both incorporated in the SCALE4.4a system, and the PELSHIE-3 code. One spent standard CANDU fuel bundle was used as the radiation source. All geometrical and material data related to the transport cask were taken from the type B shipping cask model, whose prototype has been built and tested at the Institute for Nuclear Research Pitesti. The radial gamma dose rates estimated at the cask wall and in air, at different distances from the cask, are presented together with a comparison of the dose rate values obtained by the three shielding calculation methods. (authors)
International Nuclear Information System (INIS)
This report considers the contribution from scattered radiation to the dose to organs and tissues which lie outside the useful therapy beams. The results presented are the product of Monte Carlo studies used to determine the tissue doses due to internal scattering of the useful beams only. General cases are calculated in which central target volumes in the trunk are treated with 10 x 14 cm2 and 14 x 14 cm2 fields from 200 kV, Co-60, 8 MV and 25 MV therapy equipment. Target volumes in the neck are considered to be treated with 5 x 5 cm2 fields. Different treatment plans are calculated including rotational therapy. Also two specific cases are more fully analysed, namely for Ankylosing Spondylitis and central abdomen malignant disease in the region of the head of the pancreas. The calculated organ doses are presented in tables as a percentage of the target volume dose. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, CHUQ Pavillon L'Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Department of Oncology, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)
2011-03-15
Purpose: The goal of this work is to compare D_m,m (radiation transported in medium; dose scored in medium) and D_w,m (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: ¹²⁵I, ¹⁰³Pd, and ¹³¹Cs seeds, as well as an EBS operating at 50 kV. Ratios of D_w,m over D_m,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using ¹⁰³Pd) and prostate (using ¹²⁵I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D_90 values are compared for D_w,m and D_m,m. Results: (1) Differences (D_w,m/D_m,m - 1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D_w,m/D_m,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors by up to 38%. (2) The ratio of D
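In its standard textbook form, the large cavity theory conversion referred to here reduces, for photons, to a ratio of mass energy-absorption coefficients averaged over the local photon energy-fluence spectrum Ψ(E):

```latex
\frac{D_{w,m}}{D_{m,m}}
  \;\approx\;
  \frac{\displaystyle\int \Psi(E)\,\bigl(\mu_{\mathrm{en}}(E)/\rho\bigr)_{w}\,\mathrm{d}E}
       {\displaystyle\int \Psi(E)\,\bigl(\mu_{\mathrm{en}}(E)/\rho\bigr)_{m}\,\mathrm{d}E}
```

The small (under 2%) distance dependence reported above reflects how little this spectrum-averaged ratio drifts as Ψ(E) hardens or softens with depth in tissue.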
International Nuclear Information System (INIS)
The authors report calculations performed using the MCNP and PENELOPE codes to determine the Hp(3)/K_air conversion coefficient, which allows the Hp(3) dose equivalent to be determined from the measured value of the air kerma. They report the definition of the phantom, a cylinder 20 cm in diameter and 20 cm high, which is considered representative of a head. Calculations are performed for an energy range corresponding to interventional radiology or cardiology (20 keV-110 keV). Results obtained with both codes are compared
Exact-to-precision generalized perturbation for neutron transport calculation
International Nuclear Information System (INIS)
This manuscript extends the exact-to-precision generalized perturbation theory (EPGPT), introduced previously, to neutron transport calculations; previous developments focused on neutron diffusion calculations only. EPGPT collectively denotes new developments in generalized perturbation theory (GPT) that place a premium on computational efficiency and defendable accuracy in order to render GPT a standard analysis tool in routine design and safety reactor calculations. EPGPT constructs a surrogate model with quantifiable accuracy which can replace the original neutron transport model for subsequent engineering analysis, e.g. functionalization of the homogenized few-group cross sections in terms of various core conditions, sensitivity analysis and uncertainty quantification. This is achieved by reducing the effective dimensionality of the state variable (i.e. the neutron angular flux) by projection onto an active subspace. Confining the state variations to the active subspace allows one to construct a small number of what are referred to as 'active' responses, which depend solely on the physics model rather than on the responses of interest, the number of input parameters, or the number of points in the state phase space. (authors)
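The projection onto an active subspace can be illustrated with a pure-Python sketch: build an orthonormal basis from a few state snapshots, then represent new states in that basis. This shows only the general reduced-order idea; EPGPT's actual subspace construction and error bounds are more involved, and all names below are assumptions:

```python
import math

def gram_schmidt(snapshots, tol=1e-10):
    """Build an orthonormal basis for the subspace spanned by a handful
    of full-order state snapshots (modified Gram-Schmidt)."""
    basis = []
    for v in snapshots:
        v = list(v)
        for q in basis:
            c = sum(x * y for x, y in zip(q, v))
            v = [x - c * y for x, y in zip(v, q)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > tol:                     # drop linearly dependent snapshots
            basis.append([x / norm for x in v])
    return basis

def project(v, basis):
    """Project a full-order state onto the reduced (active) subspace."""
    out = [0.0] * len(v)
    for q in basis:
        c = sum(x * y for x, y in zip(q, v))
        out = [o + c * y for o, y in zip(out, q)]
    return out

# Two snapshots span a plane; any state lying in that plane is
# reproduced exactly by the reduced representation.
snaps = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
basis = gram_schmidt(snaps)
state = [2.0, 3.0, 5.0]   # = 2*snaps[0] + 3*snaps[1], inside the subspace
approx = project(state, basis)
```

States confined to the subspace incur no projection error; the surrogate's accuracy hinges on how well the true flux variations are captured by the chosen snapshots.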
International Nuclear Information System (INIS)
Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-section data. What are needed are cross sections differential in particle type, energy and angle. A computer code is described which produces such spectra for neutrons above ∼14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to fully utilize the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results into this format, interfacing the neutron transport code with these data, and charged-particle transport. The design and development of such a program is described. 13 refs., 3 figs
Lynn, J E
2015-01-01
I discuss our recent work on Green's function Monte Carlo (GFMC) calculations of light nuclei using local nucleon-nucleon interactions derived from chiral effective field theory (EFT) up to next-to-next-to-leading order (N$^2$LO). I present the natural extension of this work to include the consistent three-nucleon (3N) forces at the same order in the chiral expansion. I discuss our choice of observables to fit the two low-energy constants which enter in the 3N sector at N$^2$LO and present some results for light nuclei.
Lynn, J. E.
2016-03-01
I discuss our recent work on Green's function Monte Carlo (GFMC) calculations of light nuclei using local nucleon-nucleon interactions derived from chiral effective field theory (EFT) up to next-to-next-to-leading order (N2LO). I present the natural extension of this work to include the consistent three-nucleon (3N) forces at the same order in the chiral expansion. I discuss our choice of observables to fit the two low-energy constants which enter in the 3N sector at N2LO and present some results for light nuclei.
Directory of Open Access Journals (Sweden)
Lynn J. E.
2016-01-01
I discuss our recent work on Green's function Monte Carlo (GFMC) calculations of light nuclei using local nucleon-nucleon interactions derived from chiral effective field theory (EFT) up to next-to-next-to-leading order (N2LO). I present the natural extension of this work to include the consistent three-nucleon (3N) forces at the same order in the chiral expansion. I discuss our choice of observables to fit the two low-energy constants which enter in the 3N sector at N2LO and present some results for light nuclei.
Wu, D; X. T. He; Yu, W.; Fritzsche, S.
2016-01-01
A Monte Carlo approach to proton stopping in warm dense matter is implemented into an existing particle-in-cell code. The model is based on multiple binary collisions among electron-electron, electron-ion and ion-ion pairs, takes into account contributions from both free and bound electrons, and allows particle stopping to be calculated in a much more natural manner. In the low-temperature limit, when "all" electrons are bound to the nucleus, the stopping power converges to the predictions of Bethe-Bloch...
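For reference, the cold-matter limit mentioned at the end is the Bethe formula, sketched below for protons in aluminium. This is a textbook expression with PDG constants, not the paper's binary-collision Monte Carlo model; the default target parameters are assumptions for illustration:

```python
import math

# Physical constants (PDG values, rounded)
K = 0.307075      # MeV * cm^2 / mol  (4*pi*N_A*r_e^2*m_e*c^2)
ME_C2 = 0.510999  # electron rest energy, MeV

def bethe_stopping(beta, z=1, Z=13, A=26.98, I=166e-6):
    """Mass stopping power -dE/d(rho*x) in MeV*cm^2/g from the Bethe
    formula, without shell or density corrections.  Defaults: protons
    (z=1) in aluminium (Z=13, A=26.98, mean excitation energy I=166 eV,
    expressed in MeV)."""
    gamma2 = 1.0 / (1.0 - beta * beta)
    arg = 2.0 * ME_C2 * beta * beta * gamma2 / I
    return K * z * z * (Z / A) / (beta * beta) * (math.log(arg) - beta * beta)

# A ~10 MeV proton has beta ~ 0.145; the uncorrected formula lands in
# the few-tens of MeV*cm^2/g range typical of light-metal targets.
dedx = bethe_stopping(beta=0.145)
```

The paper's point is that at finite temperature the bound/free electron partition changes, so the Monte Carlo binary-collision treatment departs from this cold-limit curve.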
Transport coefficients in diamond from ab-initio calculations
Löfâs, Henrik; Grigoriev, Anton; Isberg, Jan; Ahuja, Rajeev
2013-03-01
By combining the Boltzmann transport equation with ab-initio electronic structure calculations, we obtain transport coefficients for boron-doped diamond. We find the temperature dependence of the resistivity and the Hall coefficient in good agreement with experimental measurements. Doping in the samples is treated via the rigid-band approximation and scattering is treated in the relaxation-time approximation. In contrast to previous results, acoustic phonon scattering is the dominant scattering mechanism over the considered doping range. At room temperature, we find the thermopower, S, in the range 1-1.6 mV/K and the power factor, S²σ, in the range 0.004-0.16 μW/(cm·K²).
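The quoted power-factor range follows directly from S and the electrical conductivity; a one-line unit conversion makes the arithmetic explicit (the conductivity value below is back-solved for illustration, not a value reported in the paper):

```python
def power_factor_uW(S_mV_per_K, sigma_S_per_cm):
    """Thermoelectric power factor S^2 * sigma, returned in
    microwatts per (cm * K^2).  Inputs: Seebeck coefficient in mV/K
    and electrical conductivity in S/cm."""
    S = S_mV_per_K * 1e-3                # mV/K -> V/K
    return S * S * sigma_S_per_cm * 1e6  # W -> microW

# At S = 1.6 mV/K, a conductivity near 0.06 S/cm reproduces the upper
# end of the quoted power-factor range.
pf = power_factor_uW(1.6, 0.0625)  # ~0.16 microW/(cm K^2)
```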
Hydraulic calculation of gravity transportation pipeline system for backfill slurry
Institute of Scientific and Technical Information of China (English)
ZHANG Qin-li; HU Guan-yu; WANG Xin-min
2008-01-01
Taking the cemented coal gangue pipeline transportation system in Suncun Coal Mine, Xinwen Mining Group, Shandong Province, China, as an example, the hydraulic calculation approach and process for gravity pipeline transportation of backfill slurry were investigated. The results show that the backfill capability of the backfill system should be higher than 74.4 m³/h according to the mining production and backfill times of the mine; the minimum (critical) velocity and practical working velocity of the backfill slurry are 1.44 and 3.82 m/s, respectively. Various formulae give a maximum ratio of total pipeline length to vertical height (L/H ratio) of 5.4 for the backfill system, from which the reliability and capability of the system can be evaluated.
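An L/H bound of this kind can be rationalized with a simple static-head balance: gravity feed is possible while the pressure gained per metre of vertical drop covers the friction loss per metre of pipe. The friction-loss figure below is a hypothetical input; the paper derives its value from several empirical slurry-flow formulae:

```python
def max_length_to_height_ratio(slurry_density, friction_loss_kPa_per_m):
    """Upper bound on the pipeline length-to-vertical-drop ratio L/H for
    pure gravity feed.  slurry_density in kg/m^3; friction loss of the
    flowing slurry in kPa per metre of pipe."""
    g = 9.81  # m/s^2
    static_head_kPa_per_m = slurry_density * g / 1000.0  # rho*g, in kPa/m
    return static_head_kPa_per_m / friction_loss_kPa_per_m

# e.g. a 1900 kg/m^3 gangue slurry against ~3.45 kPa/m of friction loss
ratio = max_length_to_height_ratio(1900.0, 3.45)  # ~5.4
```

A longer or flatter route (larger L/H) than this bound would need pumping rather than gravity alone.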