Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented message-passing parallelism in the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and is thus a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it has not been possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library in MCNP. Because the Message Passing Interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
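The speedup mechanism the abstract describes is simple to sketch: each worker simulates an independent batch of histories with its own random stream, and only the accumulated tallies travel over the network. A minimal pure-Python sketch of that reduce step (the toy scoring kernel and the seed-per-rank scheme are illustrative stand-ins, not MCNP's):

```python
import random

def simulate_batch(n_histories, seed):
    # Toy transport kernel: score 1 when a sampled free path falls
    # under one mean free path. Stand-in for an independent MCNP task.
    rng = random.Random(seed)
    hits = sum(rng.expovariate(1.0) < 1.0 for _ in range(n_histories))
    return hits, n_histories

# Each call plays the role of one PVM/MPI worker; the master then
# reduces the partial tallies into a single estimate.
partials = [simulate_batch(50_000, seed) for seed in range(4)]
hits = sum(h for h, _ in partials)
total = sum(n for _, n in partials)
estimate = hits / total   # expected value: 1 - exp(-1) ≈ 0.632
```

Because histories are independent, the combined estimate is statistically equivalent to one serial run of 200,000 histories, which is why near-linear speedups are attainable.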
Error reduction techniques for Monte Carlo neutron transport calculations
International Nuclear Information System (INIS)
Ju, J.H.W.
1981-01-01
Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas
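The kind of gain such estimation techniques buy can be seen in the simplest random-sampling setting: drawing samples from a density shaped like the integrand reduces variance at a fixed number of histories. A hedged sketch (the integrand and the tilted density are illustrative; this is not the GWAN or path-length estimator of the thesis):

```python
import math
import random

rng = random.Random(1)
N = 100_000
# target integral: ∫₀¹ eˣ dx = e - 1 ≈ 1.71828

# Analog estimator: uniform sampling of x on [0, 1]
u_scores = [math.exp(rng.random()) for _ in range(N)]

# Importance sampling from p(x) = (1 + x)/1.5, drawn by CDF inversion
i_scores = []
for _ in range(N):
    x = -1.0 + math.sqrt(1.0 + 3.0 * rng.random())
    i_scores.append(math.exp(x) * 1.5 / (1.0 + x))   # score = f(x)/p(x)

mean_u = sum(u_scores) / N
mean_i = sum(i_scores) / N
var_u = sum((s - mean_u) ** 2 for s in u_scores) / (N - 1)
var_i = sum((s - mean_i) ** 2 for s in i_scores) / (N - 1)
# both means estimate the same integral; var_i is several times smaller
```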
Monte Carlo calculations of electron transport on microcomputers
International Nuclear Information System (INIS)
Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.
1990-01-01
In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC to reduce the cost of Monte Carlo calculations using microcomputers. Then the Eltran2 program was transferred to an IBM-compatible computer. Turbo BASIC and Microsoft Quick BASIC have been used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speed of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh-SE or Macintosh-Plus computer. Higher values refer to faster running times proportionally. Since Eltran2 is a one-dimensional program, it calculates energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified to a two-dimensional program called Eltran3 to computer more accurately the case with a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice as long as that of Eltran2 for a similar case
Energy Technology Data Exchange (ETDEWEB)
Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
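The decomposition idea can be sketched for a single primitive: classify an axis-aligned box as fully inside, fully outside, or mixed; count the first analytically, discard the second, and recurse on the third down to a depth limit. A sketch assuming the component is a unit sphere (a stand-in for the general CSG predicate, whose in/out/mixed test a real engine would evaluate over the whole boolean tree):

```python
import math

def box_vs_sphere(lo, hi, r=1.0):
    # Exact in/out/mixed classification of an axis-aligned box against
    # a sphere at the origin, via nearest and farthest box points.
    near = sum(0.0 if l * h <= 0 else min(abs(l), abs(h)) ** 2
               for l, h in zip(lo, hi))
    far = sum(max(l * l, h * h) for l, h in zip(lo, hi))
    if far <= r * r:
        return "in"
    if near > r * r:
        return "out"
    return "mixed"

def volume(lo, hi, depth=7):
    cls = box_vs_sphere(lo, hi)
    box_vol = math.prod(h - l for l, h in zip(lo, hi))
    if cls == "in":
        return box_vol
    if cls == "out":
        return 0.0
    if depth == 0:
        # tolerance-level remainder; the paper's algorithm would finish
        # these boundary cells stochastically instead of scoring a half
        return 0.5 * box_vol
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    total = 0.0
    for c in range(8):                      # recurse into the 8 octants
        nlo = tuple((lo, mid)[c >> i & 1][i] for i in range(3))
        nhi = tuple((mid, hi)[c >> i & 1][i] for i in range(3))
        total += volume(nlo, nhi, depth - 1)
    return total

v = volume((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))   # sphere volume ≈ 4π/3
```

Here the error is bounded by half the total volume of the unresolved boundary cells, which shrinks as the depth limit grows; the paper's stochastic finishing step is what turns that into a user-defined tolerance.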
International Nuclear Information System (INIS)
Zazula, J.M.
1983-01-01
The general-purpose code BALTORO was written to couple three-dimensional Monte Carlo (MC) and one-dimensional discrete ordinates (DO) radiation transport calculations. The quantity of a radiation-induced (neutron or gamma-ray) nuclear effect, or the score from a radiation-yielding nuclear effect, can be analysed in this way. (author)
International Nuclear Information System (INIS)
Allam, Kh. A.
2017-01-01
In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The model treats a tunnel as a cylindrical shape of finite thickness with an entrance, and with or without an exit. A photon transport model was applied for exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3% to 19.9%. The variation of the specific external dose rate with position in the tunnel, and with building material density and composition, was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. (authors)
International Nuclear Information System (INIS)
Weinhorst, Bastian; Fischer, Ulrich; Lu, Lei; Qiu, Yuefeng; Wilson, Paul
2015-01-01
Highlights: • Comparison of different approaches for the use of CAD geometry for Monte Carlo transport calculations. • Comparison with regard to user-friendliness and computation performance. • Three approaches, namely conversion with McCad, the unstructured mesh feature of MCNP6, and DAGMC. • Installation most complex for DAGMC, model preparation worst for McCad, computation performance worst for MCNP6. • Installation easiest for McCad, model preparation best for MCNP6, computation speed fastest for McCad. - Abstract: Computer aided design (CAD) is an important industrial way to produce high-quality designs. Therefore, CAD geometries are generally used for engineering and the design of complex facilities like the ITER tokamak. Although Monte Carlo codes like MCNP are well suited to handle the complex 3D geometry of ITER for transport calculations, they rely on their own geometry description and are in general not able to use the CAD geometry directly. In this paper, three different approaches for the use of CAD geometries with MCNP calculations are investigated and assessed with regard to calculation performance and user-friendliness. The first method is the conversion of the CAD geometry into MCNP geometry employing the conversion software McCad developed by KIT. The second approach utilizes the MCNP6 mesh geometry feature for the particle tracking and relies on the conversion of the CAD geometry into a mesh model. The third method employs DAGMC, developed by the University of Wisconsin-Madison, for direct particle tracking on the CAD geometry using a patched version of MCNP. The obtained results show that each method has its advantages depending on the complexity and size of the model, the calculation problem considered, and the expertise of the user.
International Nuclear Information System (INIS)
Zazula, J.M.
1984-01-01
This work concerns the calculation of a neutron response caused by a neutron field perturbed by materials surrounding the source or the detector. The solution is obtained by coupling a Monte Carlo radiation transport computation for the perturbed region with a discrete ordinates transport computation for the unperturbed system. (author). 62 refs
International Nuclear Information System (INIS)
Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.
1995-01-01
This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
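The single-history-set idea is essentially correlated sampling: histories are drawn once with the reference cross sections, and every perturbed configuration is scored by reweighting the same histories with the likelihood ratio of the two sampling densities. A one-flight toy version (slab transmission only; the density-tool geometry and energy dependence are far richer):

```python
import math
import random

rng = random.Random(7)
sigma, sigma_p = 1.0, 1.3   # reference and perturbed cross sections (1/cm)
d = 1.0                     # slab width (cm); all values illustrative
N = 200_000

ref_hits = 0
pert_score = 0.0
for _ in range(N):
    s = rng.expovariate(sigma)           # free path sampled once, at sigma
    if s > d:                            # this history crosses the slab
        ref_hits += 1
        # likelihood ratio re-scores the SAME history for sigma_p
        pert_score += (sigma_p / sigma) * math.exp(-(sigma_p - sigma) * s)

ref = ref_hits / N         # estimates exp(-sigma*d)
pert = pert_score / N      # estimates exp(-sigma_p*d), no new histories
```

Both tallies come from one set of flight paths, which is where the order-of-magnitude saving over independent runs comes from; correlated histories also make ratios of the two results less noisy than either absolute value, matching the abstract's observation about relative count rates.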
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict nuclear reactor physics parameters responsible for reactivity changes which could lead to accidents and losses. In this work, preliminary results for criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core, with fuel elements, radial reflectors, and control rods, inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly, and the core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were: Doppler Temperature Coefficient, Coolant Temperature Coefficient, Coolant Void Coefficient, Power Coefficient, and Control Rod Worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used to perform more complex studies. The criticality coefficient values for the three simulations showed little discrepancy for almost all coefficients investigated, the only exception being the Power Coefficient. The preliminary results presented show that a simple model such as a fuel assembly can describe changes in almost all the criticality coefficients, avoiding the need for a complex core simulation. (author)
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines major features of the Monte Carlo method by citing various applications of the method and techniques used in Monte Carlo codes. Major areas of application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation (combinatorial geometry, first-, second- and fourth-degree surfaces, and lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as neutron flux in a region, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
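Two of the variance reduction techniques listed, Russian roulette and splitting, share one invariant: the expected total particle weight leaving the game equals the weight entering it. A minimal weight-window sketch (the window bounds and survival weight are illustrative values):

```python
import random

def weight_window(w, rng, w_low=0.25, w_high=2.0, w_survive=1.0):
    # Replace one particle of weight w by an unbiased set of particles.
    if w < w_low:                       # Russian roulette: kill or boost
        return [w_survive] if rng.random() < w / w_survive else []
    if w > w_high:                      # splitting: several lighter copies
        n = round(w / w_survive)
        return [w / n] * n
    return [w]                          # inside the window: unchanged

rng = random.Random(3)
trials = 100_000
total = sum(sum(weight_window(0.1, rng)) for _ in range(trials))
mean_out = total / trials     # unbiased: approaches the input weight 0.1
```

Roulette trades many low-weight histories for a few full-weight ones (less time per unit of score), while splitting spreads a heavy history into several lighter ones (less variance per history); neither biases the tally.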
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Monte Carlo photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the material routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at the boundary is considered only when the material changes along the photon's flight path. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
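The Woodcock alternative can be sketched in one dimension: flights are sampled with a majorant cross section and tentative collisions are accepted with probability Σ(x)/Σ_maj, so the photon never has to stop at a voxel boundary. A toy two-layer version (not the CUBMC implementation; layer data are made up):

```python
import math
import random

def woodcock_transmission(sigmas, widths, n=200_000, seed=5):
    """Uncollided transmission through a 1-D multilayer slab via
    delta (Woodcock) tracking."""
    sig_maj = max(sigmas)                    # majorant cross section
    edges = [0.0]
    for w in widths:
        edges.append(edges[-1] + w)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += rng.expovariate(sig_maj)    # tentative flight
            if x >= edges[-1]:
                hits += 1                    # escaped without colliding
                break
            layer = next(i for i, e in enumerate(edges[1:]) if x < e)
            if rng.random() < sigmas[layer] / sig_maj:
                break                        # accepted as a real collision
    return hits / n

p = woodcock_transmission([0.5, 2.0], [1.0, 0.5])
# analytic value: exp(-(0.5*1.0 + 2.0*0.5)) = exp(-1.5) ≈ 0.223
```

Rejected tentative collisions play the role of scattering off a fictitious "delta" material, which is why the estimate remains exact despite never tracking boundary crossings explicitly.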
Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)
International Nuclear Information System (INIS)
Pellegrino, Esteban
2011-01-01
Monte Carlo simulation is well suited to solving the Boltzmann neutron transport equation in inhomogeneous media with complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing, owing to the massive parallelism provided by the latest GPU technologies, which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. The results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author)
International Nuclear Information System (INIS)
Macdonald, J.L.; Cashwell, E.D.
1978-09-01
The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameters. The modified MCN code is used to reduce computer cost for several nontrivial example problems
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
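The direct method the abstract calls noisy amounts to fitting the slope of ln N(t). A synthetic sketch of that fit (the growth rate, tally noise level, and time grid are made-up numbers, not the paper's):

```python
import math
import random

rng = random.Random(11)
alpha_true = 0.8                       # assumed growth rate (illustrative)
times = [0.1 * i for i in range(20)]
# noisy population snapshots, as a direct time-dependent tally would give
pops = [1000.0 * math.exp(alpha_true * t) * (1.0 + 0.02 * rng.gauss(0, 1))
        for t in times]

# least-squares slope of ln N(t) is the static-alpha estimate
ys = [math.log(p) for p in pops]
n = len(times)
tbar = sum(times) / n
ybar = sum(ys) / n
alpha_est = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
```

Near critical, alpha_true approaches zero and the exponential signal drowns in the tally noise, which is the weakness that motivates the paper's regression on k-eigenvalue solutions instead.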
Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
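The banding idea reduces to a reordering of work: instead of looking up each particle's cross section in bank order (random jumps across a huge table), the bank is bucketed by energy band and processed band by band, so each pass touches one contiguous table slice. A toy sketch (table size, band count, and the fake Σ(E) grid are illustrative):

```python
import random

random.seed(2)
TABLE = 1_000
xs_table = [0.1 + 0.9 * i / TABLE for i in range(TABLE)]   # fake Σ(E) grid
energies = [random.random() for _ in range(10_000)]        # particle bank

def lookup(e):
    # index into the (read-only) cross section table by energy
    return xs_table[min(int(e * TABLE), TABLE - 1)]

# Unbanded: lookups jump around the whole table in particle order
unbanded = sum(lookup(e) for e in energies)

# Banded: bucket the bank into 4 energy bands, then process band by
# band so each pass stays within one contiguous quarter of the table
bands = [[] for _ in range(4)]
for e in energies:
    bands[min(int(e * 4), 3)].append(e)
banded = sum(lookup(e) for band in bands for e in band)
# same physics, same tallies -- only the memory access pattern changes
```

The tallies are unchanged up to floating-point summation order; the benefit on real hardware is that a band-sized table slice can stay resident in cache (or on-node) across the whole pass.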
International Nuclear Information System (INIS)
Palau, J.M.
2005-01-01
This paper presents how Monte Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of three-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, we have pointed out the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing three-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, radiation transport codes being considered, space radiation cases being considered, results for slab geometry, results from spherical geometry, and summary.
Vectorization of continuous energy Monte Carlo method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki; Sasaki, Makoto
1992-01-01
The vectorization method was studied to achieve high efficiency for the precise physics model used in the continuous energy Monte Carlo method. The collision analysis task was reconstructed on the basis of the event-based algorithm, and the stack-driven zone-selection method was applied to the vectorization of the random walk simulation. These methods were installed in the vectorized continuous energy MVP code for general-purpose use. Performance of the present method was evaluated by comparison with the conventional scalar codes VIM and MCNP for two typical problems. The MVP code achieved a vectorization ratio of more than 95% and a computation speed faster by a factor of 8 to 22 on the FACOM VP-2600 vector supercomputer compared with the conventional scalar codes. (author)
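The event-based reorganization can be illustrated with a batch collision-distance kernel: the same lookup-and-sample step is applied to the whole particle bank at once rather than one history at a time. A NumPy sketch (the zone layout and cross sections are illustrative; MVP's stack-driven event handling is far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma = np.array([0.5, 1.5])          # per-zone total cross sections
zone = rng.integers(0, 2, size=n)     # each particle's current zone
u = rng.random(n)                     # one random number per particle

# History-based: one particle at a time (scalar inner loop)
scalar = np.empty(n)
for i in range(n):
    scalar[i] = -np.log(u[i]) / sigma[zone[i]]

# Event-based: the same collision-distance kernel over the whole bank,
# which is what a vector (or GPU) pipeline can chew through efficiently
vector = -np.log(u) / sigma[zone]
```

Both forms compute identical distances from identical random numbers; the restructuring only exposes the data parallelism that the vector hardware needs.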
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Monte Carlo calculations of nuclei
Energy Technology Data Exchange (ETDEWEB)
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
International Nuclear Information System (INIS)
White, Morgan C.
2000-01-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing the computation of ionization rate altitude profiles in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is provided as an end-user interface to the model.
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use. The trade-off between these two opposing factors has given rise to several dose calculation algorithms, ranging from the most approximate and fastest to the most accurate and slowest. The most accurate of these algorithms is the Monte Carlo method, since it is based on fundamental physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware existed before 2007, and certain scientific computations were already being carried out on the GPU. The year 2007, however, marks the arrival of the CUDA programming language, which makes it possible to program the GPU without dealing with graphics contexts. The GPU is a massively parallel computing platform, well suited to data-parallel algorithms. This thesis aims to determine how to maximize the use of a graphics processing unit (GPU) to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out entirely on the GPU. The first objective of this thesis is to evaluate this method for calculations in external radiotherapy. Simple monoenergetic sources and layered phantoms are used, and a comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a gamma criterion of 2%-2 mm while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective is the evaluation of the platform for brachytherapy calculations. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43
Energy Technology Data Exchange (ETDEWEB)
Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)
2007-07-01
Absorbed fraction (AF) calculations for the human skeletal tissues due to alpha particles are of interest for the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and marrow cavities via alternate and uniform sampling of their cumulative distribution functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
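The chord-based idea can be sketched in a few lines. This is an illustrative toy, not the authors' code: the exponential chord-length law and the 300 micrometre mean are assumptions made here for the sketch, whereas the paper samples cumulative distribution functions measured from micro-CT images. Chord lengths are drawn by inverse-transform sampling of a CDF:

```python
import math
import random

rng = random.Random(11)
MEAN_CHORD = 300.0  # micrometres; illustrative value, not from the paper

def sample_chord():
    # Invert F(x) = 1 - exp(-x / MEAN_CHORD): x = -mean * ln(1 - u)
    u = rng.random()
    return -MEAN_CHORD * math.log(1.0 - u)

chords = [sample_chord() for _ in range(200000)]
avg = sum(chords) / len(chords)  # should recover MEAN_CHORD closely
```

The same inverse-transform machinery applies unchanged when the CDF is tabulated from measured data rather than given in closed form.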
International Nuclear Information System (INIS)
Nagaya, Yasunobu; Okumura, Keisuke; Sakurai, Takeshi; Mori, Takamasa
2017-03-01
In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two Monte Carlo codes MVP (continuous-energy method) and GMVP (multigroup method) have been developed at Japan Atomic Energy Agency. The codes have adopted a vectorized algorithm and have been developed for vector-type supercomputers. They also support parallel processing with a standard parallelization library MPI and thus a speed-up of Monte Carlo calculations can be achieved on general computing platforms. The first and second versions of the codes were released in 1994 and 2005, respectively. They have been extensively improved and new capabilities have been implemented. The major improvements and new capabilities are as follows: (1) perturbation calculation for effective multiplication factor, (2) exact resonant elastic scattering model, (3) calculation of reactor kinetics parameters, (4) photo-nuclear model, (5) simulation of delayed neutrons, (6) generation of group constants. This report describes the physical model, geometry description method used in the codes, new capabilities and input instructions. (author)
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
Monte Carlo Particle Transport: Algorithm and Performance Overview
International Nuclear Information System (INIS)
Gentile, N.; Procassini, R.; Scott, H.
2005-01-01
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations
International Nuclear Information System (INIS)
Nagaya, Yasunobu; Okumura, Keisuke; Mori, Takamasa; Nakagawa, Masayuki
2005-06-01
In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes MVP and GMVP have been developed at JAERI. MVP is based on the continuous energy model and GMVP is on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector super-computers. Both codes have sufficient functions for production use by adopting accurate physics model, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, (5) continuous-energy calculation at arbitrary temperatures, (6) estimation of real variances in eigenvalue problems, (7) point detector and surface crossing estimators, (8) statistical geometry model, (9) function of reactor noise analysis (simulation of the Feynman-α experiment), (10) arbitrary shaped lattice boundary, (11) periodic boundary condition, (12) parallelization with standard libraries (MPI, PVM), (13) supporting many platforms, etc. This report describes the physical model, geometry description method used in the codes, new functions and how to use them. (author)
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
Dejonghe, G.; Nimal, J.C.; Vergnaud, T.
1986-11-01
In neutral-particle transport problems (neutrons, photons), two quantities are important: the flux in phase space and the particle density. Solving the problem with the Monte Carlo method involves, among other things, constructing a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the play is demonstrated; a biased simulation is carried out. Finally, current developments (for instance, the rewriting of programs) are presented, motivated by several factors, two of which are the advent of vector computation and the transport of photons and neutrons in media containing voids [fr]
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (keff) is defined as the ratio of the numbers of neutrons in successive generations; this definition is adopted by most Monte Carlo codes (e.g. MCNP). It can also be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the absorption rate should exclude the effect of multiplying reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for keff calculation based on the second definition. A new code has been developed and the results are presented. (author)
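A minimal sketch (mine, not the authors' code) shows that the two definitions coincide in a toy one-group analog game without (n, 2n)-type reactions, where every history ends in leakage or absorption and production comes only from fission; all probabilities and the nu value are illustrative:

```python
import random

# Toy one-group analog game: each source neutron leaks, is captured,
# or causes fission; probabilities and nu are illustrative only.
P_LEAK, P_CAPTURE, P_FISSION = 0.3, 0.4, 0.3
NU = 2.5  # mean neutrons released per fission

def run_generation(n_source, rng):
    leaks = absorptions = production = 0
    for _ in range(n_source):
        r = rng.random()
        if r < P_LEAK:
            leaks += 1
        elif r < P_LEAK + P_CAPTURE:
            absorptions += 1              # capture absorbs the neutron
        else:
            absorptions += 1              # fission also absorbs the neutron
            production += 2 + (rng.random() < 0.5)  # integer nu, mean 2.5
    k_generations = production / n_source            # definition 1
    k_rates = production / (leaks + absorptions)     # definition 2
    return k_generations, k_rates

rng = random.Random(0)
k1, k2 = run_generation(100000, rng)
# Both estimate NU * P_FISSION = 0.75, and they agree exactly here
# because leaks + absorptions equals the number of source neutrons.
```

The two estimators diverge once (n, 2n)-type reactions are added, which is precisely the distinction the abstract draws.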
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
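A hedged numerical sketch (not from the paper) of the mechanism behind such bias: the eigenvalue estimate is a ratio of noisy tallies, and a ratio estimator is biased because E[X/Y] differs from E[X]/E[Y] when the denominator fluctuates. The numbers below are illustrative only:

```python
import random

# X and Y stand in for per-generation tallies whose true ratio is 1.5.
rng = random.Random(42)
TRUE_X, TRUE_Y = 3.0, 2.0
xs, ys, ratios = [], [], []
for _ in range(200000):
    x = TRUE_X + rng.gauss(0.0, 0.5)
    y = TRUE_Y + rng.gauss(0.0, 0.2)
    xs.append(x)
    ys.append(y)
    ratios.append(x / y)

naive = sum(ratios) / len(ratios)                   # mean of per-replica ratios
pooled = (sum(xs) / len(xs)) / (sum(ys) / len(ys))  # ratio of means
bias = naive - pooled
# `naive` systematically overshoots 1.5 by roughly 1.5 * Var(Y) / TRUE_Y**2;
# `pooled` is nearly unbiased because the denominator noise averages out.
```

This is why averaging per-generation keff estimates over short generations carries a systematic error that shrinks as the population per generation grows.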
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13 000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given for the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
International Nuclear Information System (INIS)
Johnson, J.O.
2000-01-01
The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an ∼0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or 'tool of last resort' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Propagation of Statistical and Nuclear Data Uncertainties in Monte-Carlo Burn-up Calculations
García Herranz, Nuria; Cabellos de Francisco, Oscar Luis; Sanz Gonzalo, Javier; Juan Ruiz, Jesús; Kuijper, Jim C.
2008-01-01
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP–ACAB system, which comb...
Pseudopotentials for quantum-Monte-Carlo-calculations
International Nuclear Information System (INIS)
Burkatzki, Mark Thomas
2008-01-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy to other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Madsen, J. R.; Akabani, G.
2014-05-01
The present state of modeling radiation-induced effects at the cellular level does not account for the microscopic inhomogeneity of the nucleus arising from its non-aqueous contents (i.e. proteins, DNA), instead approximating the entire cellular nucleus as a homogeneous medium of water. Charged-particle track-structure calculations using this approximation therefore neglect approximately 30% of the molecular variation within the nucleus. To truly understand what happens when biological matter is irradiated, charged-particle track-structure calculations need detailed knowledge of the secondary electron cascade, resulting from interactions not only with the primary biological component (water) but also with the non-aqueous contents, down to very low energies. This paper presents our work on a generic approach for calculating low-energy interaction cross-sections between incident charged particles and individual molecules. The purpose of our work is to develop a self-consistent computational method for predicting molecule-specific interaction cross-sections, such as those of the component molecules of DNA and proteins (i.e. nucleotides and amino acids), in the very low-energy regime. These results would then be applied in a track-structure code, thereby reducing the homogeneous-water approximation. The present methodology, inspired by seeking a combination of the accuracy of quantum mechanics and the scalability, robustness, and flexibility of Monte Carlo methods, begins with the calculation of a solution to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field to determine the interaction processes, such as ionization and excitation. As a test of our model, the approach is applied to a water molecule in the same way as it would be applied to a nucleotide or amino acid and compared with the low-energy cross-sections from the GEANT4-DNA physics package of the Geant4 simulation toolkit
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Multilevel transport calculations
International Nuclear Information System (INIS)
Sanchez, R.; Mondot, J.
1986-10-01
A new model for multigroup transport calculations based on a group-dependent spatial representation has been developed. The multilevel method takes advantage of the orthogonality of the energy and space operators, inherent to the structure of the linear transport equation, to decompose the energy domain into subdomains or levels, i.e., fast, epithermal and thermal, where suitable spatial approximations are used. The aim of the method is to allow for the use of larger mesh spacings at high neutron energies and, therefore, to cut down the computational cost while preserving the overall accuracy. The method can be easily implemented in today's standard transport codes by introducing small modifications in the computation of the multigroup external source. The multilevel model is of special interest for the calculation of media containing high thermal absorbers. A variant of this method, based on a nested, multilevel approximation, has been implemented in the APOLLO-II assembly transport code. Comparisons between the multilevel model and the usual multigroup approximation have been made for a PWR poisoned cell and for a thermal neutron barrier used to feed a molten FBR fuel sample. The results show that significant savings in computational times are obtained with the multilevel approximation. 10 refs
Radiation Transport Calculations and Simulations
Energy Technology Data Exchange (ETDEWEB)
Fasso, Alberto; /SLAC; Ferrari, A.; /CERN
2011-06-30
This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
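The Monte Carlo integration idea the article introduces can be sketched in a few lines (an illustrative toy, not from the article): estimate I = ∫₀¹ e^(-x) dx by averaging the integrand at uniform random points; the statistical error falls off as 1/√N:

```python
import math
import random

rng = random.Random(7)

def mc_integral(n):
    # Average of f(U) over n uniform samples U ~ [0, 1)
    return sum(math.exp(-rng.random()) for _ in range(n)) / n

exact = 1.0 - math.exp(-1.0)   # analytic value, about 0.6321
estimate = mc_integral(100000)
error = abs(estimate - exact)  # typically a few parts in 10^4 at N = 10^5
```

The biasing techniques the article describes amount to replacing the uniform sampling density with one concentrated where the integrand (or the detector response) is large, with compensating weights, to reduce the constant in front of the 1/√N error.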
Monte Carlo dose calculations in advanced radiotherapy
Bush, Karl Kenneth
The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Parallel implementation of the Monte Carlo transport code EGS4 on the hypercube
International Nuclear Information System (INIS)
Kirk, B.L.; Azmy, Y.Y.; Gabriel, T.A.; Fu, C.Y.
1991-01-01
Monte Carlo transport codes are commonly used in the study of particle interactions. The CALOR89 code system is a combination of several Monte Carlo transport and analysis programs. In order to produce good results, a typical Monte Carlo run will have to produce many particle histories. On a single processor computer, the transport calculation can take a huge amount of time. However, if the transport of particles were divided among several processors in a multiprocessor machine, the time can be drastically reduced
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
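The weight-carrying-particle idea can be illustrated with a sketch (assumptions mine, not from the report): two different games for the same transport problem, analog capture versus survival biasing, give the same expected answer. The model is a 1-D rod with forward scattering only, so the exact transmission is exp(-sigma_a * L):

```python
import math
import random

SIG_T, SIG_A, L = 1.0, 0.3, 2.0   # illustrative cross sections and slab width
P_SCAT = 1.0 - SIG_A / SIG_T      # scattering probability per collision

def analog(n, rng):
    hits = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / SIG_T  # flight distance
            if x >= L:                 # escaped through the far face
                hits += 1
                break
            if rng.random() >= P_SCAT:  # analog capture: history ends
                break
    return hits / n

def survival_biased(n, rng):
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - rng.random()) / SIG_T
            if x >= L:
                total += w             # score the carried weight
                break
            w *= P_SCAT                # capture removed; weight reduced instead
    return total / n

exact = math.exp(-SIG_A * L)           # analytic transmission, about 0.549
rng = random.Random(3)
t_analog = analog(50000, rng)
t_weighted = survival_biased(50000, rng)
```

Both estimators converge to the same mean; the weighted game merely trades the random killing of histories for a deterministic weight reduction, which is exactly the flexibility the report exploits to build a game for the adjoint equation.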
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
Advances in Monte Carlo electron transport
International Nuclear Information System (INIS)
Bielajew, Alex F.
1995-01-01
Notwithstanding the success of Monte Carlo (MC) calculations for determining ion chamber correction factors for air-kerma standards and radiotherapy applications, a great challenge remains. MC is unable to calculate ion chamber response to better than 1% for low-Z and 3% for high-Z wall materials. Moreover, the two major MC code systems employed in radiation dosimetry, the EGS and ITS codes, differ in opposite directions from ion chamber experiments. The discrepancy with experiment is due to inadequacies in the underlying e⁻ condensed-history algorithms. As modeled by MC calculations, the e⁻ step-lengths in the chamber walls and the ionisation cavity differ in terms of material traversed by about three orders of magnitude. This demands that the underlying e⁻ transport algorithms be very stable over a great dynamic range. Otherwise a spurious e⁻ disequilibrium may be generated. The multiple-scattering (MS) algorithms, Moliere in the case of EGS and Goudsmit-Saunderson (GS) in the case of ITS, are either mathematically or numerically unstable in the plural-scattering environment of the ionisation cavity. Recently, a new MS theory has been developed that is an exact solution of the Wentzel small-angle formalism using a screened Rutherford cross section. This new MS theory is mathematically, physically and numerically stable from the no-scattering to the MS regimes. This theory is the small-angle equivalent of the GS equation for a Rutherford cross section. Large-angle corrections connecting this theory to GS theory have been derived by Bethe. The Moliere theory is the large-pathlength limit of this theory. The strategy for employing this new theory for ion chamber and radiotherapy calculations is described
Neutron transport model for standard calculation experiment
International Nuclear Information System (INIS)
Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.
1989-01-01
Neutron transport calculation algorithms for media of complex composition with a predetermined geometry are realized within a multigroup representation by Monte Carlo methods in the MAMONT code. The accuracy of the code was evaluated by comparison with benchmark experiments. Neutron leakage spectra were calculated in spherically symmetric geometry for iron and polyethylene. The use of the MAMONT code for metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and secondary nuclide accumulation in blankets and geophysical media. 7 refs.; 2 figs.
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
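The convergence benefit of the Wielandt shift described in this abstract can be sketched on a deterministic matrix analogue. This is a toy illustration, not the MCNP5 implementation: the 2-region "fission matrix", its eigenvalues, and the shift parameter k_e are all invented; the operator M = (I - A/k_e)^(-1) A and the mapping of the shifted eigenvalue back to k_eff follow the standard textbook formulation.

```python
import numpy as np

def power_iteration(op, x0, tol=1e-10, max_it=10000):
    """Plain power iteration; returns (eigenvalue, eigenvector, iterations)."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for n in range(1, max_it + 1):
        y = op(x)
        lam = np.linalg.norm(y)
        y = y / lam
        if np.linalg.norm(y - x) < tol:
            return lam, y, n
        x = y
    return lam, x, max_it

# Toy 2-region "fission matrix": eigenvalues 1.0 and 0.95 (dominance ratio 0.95).
A = np.array([[0.975, 0.025],
              [0.025, 0.975]])
x0 = np.array([1.0, 0.5])

# Unaccelerated power iteration converges slowly at the dominance ratio.
k_plain, _, n_plain = power_iteration(lambda v: A @ v, x0)

# Wielandt-shifted operator M = (I - A/k_e)^(-1) A, with k_e chosen above k_eff.
k_e = 1.05
M = np.linalg.solve(np.eye(2) - A / k_e, A)
mu, _, n_wielandt = power_iteration(lambda v: M @ v, x0)
k_wielandt = mu * k_e / (k_e + mu)  # map the shifted eigenvalue back to k_eff
```

The shifted operator's dominance ratio is much smaller than the original 0.95, so far fewer iterations are needed for the same tolerance, which mirrors the convergence-rate improvement reported for MCNP5.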
Hybrid Monte-Carlo method for ICF calculations
Energy Technology Data Exchange (ETDEWEB)
Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)
2003-07-01
) conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation and the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because of the time step limitation imposed by hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction: finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we will present the principles of the method and a numerical simulation of an ICF hohlraum. We shall illustrate the benefits of the method by comparing the results with fully implicit Monte-Carlo calculations. In particular we shall show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (a robust algorithm for spectral cut-off calculation, coupling with ALE capabilities): we shall briefly discuss these problems. (authors)
Hybrid Monte-Carlo method for ICF calculations
International Nuclear Information System (INIS)
Clouet, J.F.; Samba, G.
2003-01-01
) conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation and the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because of the time step limitation imposed by hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction: finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we will present the principles of the method and a numerical simulation of an ICF hohlraum. We shall illustrate the benefits of the method by comparing the results with fully implicit Monte-Carlo calculations. In particular we shall show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (a robust algorithm for spectral cut-off calculation, coupling with ALE capabilities): we shall briefly discuss these problems. (authors)
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights:
• Time step length largely affects the efficiency of MC burnup calculations.
• Efficiency of MC burnup calculations improves with decreasing time step length.
• Results were obtained from SIE-based Monte Carlo burnup calculations.
Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated for by the decrease in the computing cost per time step needed for achieving a certain accuracy
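The stochastic-implicit-Euler coupling idea can be sketched on a scalar toy model. Everything here is invented for illustration: the feedback function phi(n) stands in for the Monte Carlo transport solve, and the constants are arbitrary; the real scheme relaxes noisy flux tallies over the inner iterations in the same running-average fashion.

```python
def phi(n):
    """Hypothetical flux-vs-density feedback, standing in for a Monte Carlo
    transport solve (invented for illustration)."""
    return 1.0 / (1.0 + n)

def sie_step(n_start, dt, sigma, inner=8):
    """One SIE-style time step: the flux is relaxed by running-averaging over
    the inner iterations, and the depletion update is implicit in time."""
    n = n_start
    phi_avg = 0.0
    for i in range(1, inner + 1):
        phi_avg += (phi(n) - phi_avg) / i           # relax the "tallied" flux
        n = n_start / (1.0 + dt * sigma * phi_avg)  # implicit Euler update
    return n

def burnup(dt, t_end=1.0, sigma=1.0, n0=1.0):
    """Integrate the toy depletion equation dn/dt = -sigma*phi(n)*n."""
    n = n0
    for _ in range(round(t_end / dt)):
        n = sie_step(n, dt, sigma)
    return n

# With sigma = 1, n0 = 1, t = 1 the exact solution satisfies n = exp(-n).
exact = 0.5671432904097838
err_coarse = abs(burnup(0.5) - exact)   # 2 long time steps
err_fine = abs(burnup(0.01) - exact)    # 100 short time steps
```

Shortening the time step reduces the discretization error per step faster than it multiplies the number of steps, which is the efficiency trend the abstract reports (the toy omits the statistical noise that makes the real trade-off subtler).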
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Nuclear data treatment for SAM-CE Monte Carlo calculations
International Nuclear Information System (INIS)
Lichtenstein, H.; Troubetzkoy, E.S.; Beer, M.
1980-01-01
The treatment of nuclear data by the SAM-CE Monte Carlo code system is presented. The retrieval of neutron, gamma production, and photon data from the ENDF/B files is described. Integral cross sections as well as differential data are utilized in the Monte Carlo calculations, and the processing procedures for the requisite data are summarized
Approximating Sievert Integrals to Monte Carlo Methods to calculate ...
African Journals Online (AJOL)
Radiation dose rates along the transverse axis of a miniature ¹⁹²Ir source were calculated using the Sievert Integral (considered simple and inaccurate), and by the sophisticated and accurate Monte Carlo method. Using data obtained by the Monte Carlo method as benchmark and applying least squares regression curve ...
International Nuclear Information System (INIS)
Hoogenboom, J.E.
2000-01-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows a cutoff of particle histories when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
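Splitting and Russian roulette, mentioned in the abstract as standard variance-reduction tools, can be sketched in a few lines. The weight-window bounds below are illustrative defaults, not values from the lecture; the key property is that the expected total weight is preserved.

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, w_survive=1.0,
                       rng=random.random):
    """Replace one particle of the given weight by a fair list of particles:
    low-weight particles play Russian roulette, heavy ones are split."""
    if weight < w_low:
        # Survive with probability weight / w_survive, at weight w_survive,
        # so the expected total weight is unchanged.
        return [w_survive] if rng() < weight / w_survive else []
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n   # n lighter particles, total weight conserved
    return [weight]
```

Splitting conserves weight exactly, while roulette conserves it only in expectation; this is why roulette trades a small variance increase for the large time saving of not tracking negligible-weight histories.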
Intergenerational Correlation in Monte Carlo k-Eigenvalue Calculation
International Nuclear Information System (INIS)
Ueki, Taro
2002-01-01
This paper investigates intergenerational correlation in the Monte Carlo k-eigenvalue calculation of the neutron effective multiplication factor. To this end, the exponential transform for path stretching has been applied to large fissionable media with localized highly multiplying regions because in such media an exponentially decaying shape is a rough representation of the importance of source particles. The numerical results show that the difference between real and apparent variances virtually vanishes for an appropriate value of the exponential transform parameter. This indicates that the intergenerational correlation of k-eigenvalue samples could be eliminated by the adjoint biasing of particle transport. The relation between the biasing of particle transport and the intergenerational correlation is therefore investigated in the framework of collision estimators, and the following conclusion has been obtained: Within the leading order approximation with respect to the number of histories per generation, the intergenerational correlation vanishes when immediate importance is constant, and the immediate importance under simulation can be made constant by the biasing of particle transport with a function adjoint to the source neutron's distribution, i.e., the importance over all future generations
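The exponential transform (path stretching) itself can be illustrated on a toy deep-penetration problem, separate from the eigenvalue study above: estimating the uncollided transmission through a slab. All numbers are invented for illustration; the weight is the ratio of the analog to the biased path-length density, which keeps the estimator unbiased.

```python
import math
import random

def transmission(sigma, thickness, n, p=0.0, seed=1):
    """Estimate the uncollided transmission exp(-sigma*thickness) through a
    slab.  p in [0, 1) is the exponential-transform stretching parameter;
    p = 0 reproduces the analog game."""
    rng = random.Random(seed)
    sigma_star = sigma * (1.0 - p)      # stretched (reduced) cross section
    total = total_sq = 0.0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / sigma_star
        if s > thickness:
            # Weight = analog pdf / sampling pdf at the sampled path length.
            w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * s)
            total += w
            total_sq += w * w
    mean = total / n
    var = max(total_sq / n - mean * mean, 0.0)
    return mean, math.sqrt(var / n)
```

With stretching, many more sampled paths reach the far side, each carrying a small weight, so the standard error drops well below the analog value for the same number of histories.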
Monte Carlo calculations of elementary particle properties
Guralnik, G. S.; Warnock, T.; Zemach, C.
1984-01-01
The object of this project is to calculate the masses of the elementary particles. This ambitious goal apparently is not possible using analytic methods or known approximation methods. However, it is probable that the power of a modern supercomputer will make at least part of the low-lying mass spectrum accessible through direct numerical computation. Initial attempts by several groups at calculating this spectrum on small lattices of space-time points have been very promising. Using new methods and supercomputers, considerable progress has been made towards evaluating the mass spectrum on comparatively large lattices. Only more time and faster machines with increased storage will allow calculations of systems with guaranteed minimal boundary effects. The ideas that currently go into this calculation are outlined.
Development of the Monte Carlo method for the calculation of ...
African Journals Online (AJOL)
In this paper, we first show the interest of heterostructures, then the need for a numerical method, in particular the Monte Carlo method, to calculate electronic transport in semiconductors. We also justify the composition of our ternary semiconductor AlxGa1-xAs. Afterwards, we give the principle and the ...
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.
A multi-microcomputer system for Monte Carlo calculations
International Nuclear Information System (INIS)
Hertzberger, L.O.; Berg, B.; Krasemann, H.
1981-01-01
We propose a microcomputer system which allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and presumably many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 M Byte random access memory. (orig.)
Multi-microcomputer system for Monte-Carlo calculations
Berg, B; Krasemann, H
1981-01-01
The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 M Byte random access memory.
Comparison of ONETRAN calculations of electron beam dose profiles with Monte Carlo and experiment
International Nuclear Information System (INIS)
Garth, J.C.; Woolf, S.
1987-01-01
Electron beam dose profiles have been calculated using a multigroup, discrete ordinates solution of the Spencer-Lewis electron transport equation. This was accomplished by introducing electron transport cross-sections into the ONETRAN code in a simple manner. The authors' purpose is to "benchmark" this electron transport model and to demonstrate its accuracy and capabilities over the energy range from 30 keV to 20 MeV. Many of their results are compared with the extensive measurements and TIGER Monte Carlo data. In general the ONETRAN results are smoother, agree with TIGER within the statistical error of the Monte Carlo histograms and require about one tenth the running time of Monte Carlo
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Monte Carlo dose calculation in dental amalgam phantom
Mohd Zahri Abdul Aziz; A L Yusoff; N D Osman; R Abdullah; N A Rabaie; M S Salikin
2015-01-01
It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in the treatment planning system need to be trustworthy as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in treatm...
Exponential convergence on a continuous Monte Carlo transport problem
International Nuclear Information System (INIS)
Booth, T.E.
1997-01-01
For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
International Nuclear Information System (INIS)
Coulot, J
2003-01-01
Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen a great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, when improving computer performance allows one to obtain improved results in a dramatically reduced time. The goal of this book is to provide, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. It is generally well written, clearly presented, and very well documented. Each chapter gives an overview of each subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitation of each technique involved in dosimetry (for instance activity quantitation). Nevertheless, there are some minor remarks to
An efficient parallel computing scheme for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Dufek, Jan; Gudowski, Waclaw
2009-01-01
The existing parallel computing schemes for Monte Carlo criticality calculations suffer from a low efficiency when applied on many processors. We suggest a new fission matrix based scheme for efficient parallel computing. The results are derived from the fission matrix that is combined from all parallel simulations. The scheme allows for a practically ideal parallel scaling as no communication among the parallel simulations is required, and inactive cycles are not needed.
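The matrix-combination step of the scheme can be sketched with synthetic tallies. The 3-region fission matrix and the noise model below are invented for illustration; the point is that each independent run tallies its own matrix with no communication, and only the combined matrix is solved for the eigenpair.

```python
import numpy as np

rng = np.random.default_rng(42)

# True region-to-region fission matrix of a toy 3-region system.
H_true = np.array([[0.5, 0.3, 0.1],
                   [0.3, 0.4, 0.3],
                   [0.1, 0.3, 0.5]])
k_true = np.linalg.eigvalsh(H_true).max()

def tally_fission_matrix(noise=0.02):
    """Stand-in for one independent parallel simulation's noisy tally."""
    return H_true + rng.normal(0.0, noise, H_true.shape)

# Each parallel run tallies its own matrix; they are only combined afterwards.
tallies = [tally_fission_matrix() for _ in range(16)]
H_combined = np.mean(tallies, axis=0)

# Dominant eigenpair of the combined matrix gives k and the source shape.
vals, vecs = np.linalg.eig(H_combined)
i = int(np.argmax(vals.real))
k_est = vals.real[i]
source = np.abs(vecs[:, i].real)     # fundamental-mode source shape
source /= source.sum()
```

Averaging the independently tallied matrices reduces the tally noise before the eigenproblem is solved, which is why the scheme scales without inter-simulation communication during the random walks.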
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one is apt to waste computing time because inactive cycle MC runs are used to elicit the fundamental-mode FSD only. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles one should conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
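The criteria proposed in the paper are its own; a widely used related diagnostic is the Shannon entropy of the binned fission-source distribution, which stabilizes once the FSD becomes stationary. A minimal sketch of that diagnostic follows (the window length and tolerance are arbitrary choices, not values from the paper):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission-source distribution."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def entropy_converged(history, window=10, tol=0.02):
    """Crude stationarity check: the entropies of the last `window` cycles
    all stay within `tol` of the window mean (an assumed rule of thumb)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    mean = sum(recent) / window
    return all(abs(h - mean) <= tol for h in recent)
```

In practice one records one entropy value per cycle from the binned source sites and begins the active cycles only after the entropy trace has flattened; automated criteria such as those in the paper formalize this decision.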
Monte Carlo calculations of few-body and light nuclei
International Nuclear Information System (INIS)
Wiringa, R.B.
1992-01-01
A major goal in nuclear physics is to understand how nuclear structure comes about from the underlying interactions between nucleons. This requires modelling nuclei as collections of strongly interacting particles. Using realistic nucleon-nucleon potentials, supplemented with consistent three-nucleon potentials and two-body electroweak current operators, variational Monte Carlo methods are used to calculate nuclear ground-state properties, such as the binding energy, electromagnetic form factors, and momentum distributions. Other properties such as excited states and low-energy reactions are also calculable with these methods
Adjoint sensitivity and uncertainty analyses in Monte Carlo forward calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2011-01-01
The adjoint-weighted perturbation (AWP) method, in which the required adjoint flux is estimated in the course of Monte Carlo (MC) forward calculations, has recently been proposed as an alternative to the conventional MC perturbation techniques, such as the correlated sampling and differential operator sampling (DOS) methods. The equivalence of the first-order AWP method and first-order DOS method with the fission source perturbation taken into account is proven. An algorithm for the AWP calculations is implemented in the Seoul National University MC code McCARD and applied to the sensitivity and uncertainty analyses of the Godiva and Bigten criticalities. (author)
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated, and total memory requirements are quantified and analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
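The particle-exchange idea behind domain decomposition can be emulated serially in a toy 1-D setting. Everything below is invented for illustration (the geometry, the step law, and the absorption probability); the real algorithm exchanges particle buffers asynchronously between processors, whereas here each "processor" is just a queue drained in turn.

```python
import random
from collections import deque

# Toy 1-D decomposition: the slab [0, 4) is split into 4 domains, and
# "processor" d only tracks particles inside [d, d+1).
NDOM = 4

def domain_of(x):
    return int(x)  # domain d owns [d, d+1)

def track(x, rng):
    """Move a particle one random step (invented step law)."""
    return x + rng.uniform(-0.6, 0.6)

def run(n_particles=200, seed=7):
    rng = random.Random(seed)
    queues = [deque() for _ in range(NDOM)]
    for _ in range(n_particles):
        x = rng.uniform(0.0, NDOM)
        queues[domain_of(x)].append(x)
    finished = 0
    while any(queues):                    # until every history is complete
        for d in range(NDOM):
            while queues[d]:
                x = track(queues[d].popleft(), rng)
                if x < 0.0 or x >= NDOM:
                    finished += 1         # leaked out of the slab: history ends
                elif domain_of(x) != d:
                    queues[domain_of(x)].append(x)  # "send" to the neighbour
                elif rng.random() < 0.5:
                    finished += 1         # absorbed inside the domain
                else:
                    queues[d].append(x)   # keep tracking locally
    return finished
```

The invariant worth noting is particle conservation: every started history ends exactly once, no matter how many domain crossings it makes, which is the correctness condition the asynchronous communication algorithm must preserve.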
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics
International Nuclear Information System (INIS)
Seker, V.; Thomas, J.W.; Downar, T.J.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k_eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport
OGRE, Monte-Carlo System for Gamma Transport Problems
International Nuclear Information System (INIS)
1984-01-01
1 - Nature of physical problem solved: The OGRE programme system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two examples - OGRE-P1 and OGRE-G. The OGRE-P1 programme is a simple prototype which calculates dose rate on one side of a slab due to a plane source on the other side. The OGRE-G programme, a prototype of a programme utilizing a general-geometry routine, calculates dose rate at arbitrary points. A very general source description in OGRE-G may be employed by reading a tape prepared by the user. 2 - Method of solution: Case histories of gamma rays in the prescribed geometry are generated and analyzed to produce averages of any desired quantity which, in the case of the prototypes, are gamma-ray dose rates. The system is designed to achieve generality by ease of modification. No importance sampling is built into the prototypes, a very general geometry subroutine permits the treatment of complicated geometries. This is essentially the same routine used in the O5R neutron transport system. Boundaries may be either planes or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. Cross section data is prepared by the auxiliary master cross section programme XSECT which may be used to originate, update, or edit the master cross section tape. The master cross section tape is utilized in the OGRE programmes to produce detailed tables of macroscopic cross sections which are used during the Monte Carlo calculations. 3 - Restrictions on the complexity of the problem: Maximum cross-section array information may be estimated by a given formula for a specific problem. The number of regions must be less than or equal to 50
Microwave transport in EBT distribution manifolds using Monte Carlo ray-tracing techniques
International Nuclear Information System (INIS)
Lillie, R.A.; White, T.L.; Gabriel, T.A.; Alsmiller, R.G. Jr.
1983-01-01
Ray-tracing Monte Carlo calculations have been carried out using an existing Monte Carlo radiation transport code to obtain estimates of the microwave power exiting the torus coupling links in EBT microwave manifolds. The microwave power loss and polarization at surface reflections were accounted for by treating the microwaves as plane waves reflecting off plane surfaces. Agreement on the order of 10% was obtained between the measured and calculated output power distribution for an existing EBT-S toroidal manifold. A cost-effective iterative procedure utilizing the Monte Carlo history data was implemented to predict design changes which could produce increased manifold efficiency and improved output power uniformity
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.
2008-01-01
Development of general-purpose particle and heavy ion transport monte carlo code
International Nuclear Information System (INIS)
Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji
2002-01-01
The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for high-energy heavy ion transport calculations by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM, named PHITS (Particle and Heavy-Ion Transport code System), is the first general-purpose heavy ion transport Monte Carlo code covering incident energies from several MeV/nucleon to several GeV/nucleon. (author)
Monte Carlo impurity transport modeling in the DIII-D transport
International Nuclear Information System (INIS)
Evans, T.E.; Finkenthal, D.F.
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed
KAMCCO, a reactor physics Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.
1976-06-01
KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme), and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined; ratios of these quantities can also be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first order anisotropy in the center of mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. They are computed by a) linear interpolation, b) from optionally Doppler broadened single level Breit-Wigner resonances or c) from probability tables (in the region of statistically distributed resonances). (orig.) [de]
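The collision and track-length estimators mentioned above can be contrasted in a minimal analog random walk. The infinite homogeneous medium and the cross-section values are illustrative assumptions, not KAMCCO's implementation; both estimators are unbiased for the scalar flux per source neutron, which here equals 1/Σa:

```python
import random

def flux_estimators(sigma_t, p_scatter, histories=200_000, seed=2):
    """Analog random walk in an infinite homogeneous medium.
    Track-length estimator: sum of flight path lengths per history.
    Collision estimator: (number of collisions) / sigma_t per history.
    Both estimate the flux per source neutron, analytically
    1 / sigma_a with sigma_a = (1 - p_scatter) * sigma_t."""
    rng = random.Random(seed)
    track, collide = 0.0, 0.0
    for _ in range(histories):
        alive = True
        while alive:
            track += rng.expovariate(sigma_t)   # flight to next collision
            collide += 1.0 / sigma_t            # score the collision estimator
            alive = rng.random() < p_scatter    # scatter, or be absorbed
    return track / histories, collide / histories

# sigma_t = 2, p_scatter = 0.5 -> sigma_a = 1, so both should estimate 1.0
tl, col = flux_estimators(sigma_t=2.0, p_scatter=0.5)
```

Both estimators converge to the same answer; in practice their variances differ, which is why production codes score both.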
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
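The failure mode described above, a finite mean with a diverging variance, can be reproduced with a toy heavy-tailed distribution rather than a QMC calculation (this sketch is an illustration of the symptom, not the paper's bridge-link remedy). A Pareto variable with 1 < alpha < 2 has a finite mean but infinite variance, so the usual Monte Carlo error bar is meaningless:

```python
import random

def pareto_mean_and_error(alpha, n, seed):
    """Sample mean and the usual sqrt(sample variance / n) error bar for a
    Pareto(alpha) variable with x_min = 1.  For 1 < alpha < 2 the mean is
    finite, alpha / (alpha - 1), but the variance is infinite, so the
    reported error bar never stabilizes as n grows -- the failure mode of
    an infinite-variance Monte Carlo estimator."""
    rng = random.Random(seed)
    xs = [rng.paretovariate(alpha) for _ in range(n)]
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, (var / n) ** 0.5

m_small, err_small = pareto_mean_and_error(1.5, 1_000, seed=3)
m_large, err_large = pareto_mean_and_error(1.5, 100_000, seed=3)
```

The sample mean drifts toward the true value (3 for alpha = 1.5), but the estimated error bar is dominated by the largest sample and does not shrink like 1/sqrt(n), which is exactly why a diverging variance renders the reported statistical error unreliable.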
Development of the Monte Carlo method for the calculation of ...
African Journals Online (AJOL)
AKA Boko
In this paper, we first show the interest of heterostructures, then the need to use a numerical method, in particular the Monte Carlo method, to calculate electron transport in semiconductors. We also justify the composition of our ternary semiconductor AlxGa1-xAs. Then ...
Monte Carlo reactor calculation with substantially reduced number of cycles
International Nuclear Information System (INIS)
Lee, M. J.; Joo, H. G.; Lee, D.; Smith, K.
2012-01-01
A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles resulting in short active cycles is beneficial even in the conventional MC scheme, although wasted operations in inactive cycles cannot be reduced with more particles. A CMFD-assisted MC scheme is introduced as an effort to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly based few-group condensation and homogenization scheme is introduced and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)
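The inactive/active cycle structure of an MC eigenvalue calculation can be sketched with a deterministic 2x2 fission matrix standing in for the random walk (the matrix values are made up; this is the power-iteration skeleton, not the CMFD acceleration itself):

```python
def power_iteration(F, source, inactive, active):
    """Cycle structure of a Monte Carlo eigenvalue calculation, with a
    deterministic fission matrix F replacing the transport random walk.
    Inactive cycles only converge the fission source; active cycles
    also accumulate the k estimate."""
    k_sum = 0.0
    for cycle in range(inactive + active):
        new = [sum(F[i][j] * source[j] for j in range(len(source)))
               for i in range(len(F))]
        k = sum(new) / sum(source)            # cycle-wise k estimate
        total = sum(new)
        source = [x / total for x in new]     # renormalize the fission source
        if cycle >= inactive:
            k_sum += k
    return k_sum / active, source

F = [[0.9, 0.3],      # made-up fission matrix; dominant eigenvalue ~1.0646
     [0.2, 0.7]]
k_eff, converged_source = power_iteration(F, [1.0, 0.0], inactive=20, active=50)
```

The motivation for CMFD assistance is visible here: every inactive cycle spent converging `source` is wasted work, so a good initial source (e.g. from a low-order CMFD solve) lets the active cycles start almost immediately.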
Energy Technology Data Exchange (ETDEWEB)
Martin, E.; Gschwind, R.; Henriet, J.; Sauget, M.; Makovicka, L. [IRMA/Enisys/FEMTO-ST, Pole universitaire des Portes du Jura, place Tharradin, BP 71427, 2521 1 - Montbeliard cedex (France)
2010-07-01
In order to reduce the computing time needed by Monte Carlo codes in the field of irradiation physics, notably in dosimetry, the authors report the use of artificial neural networks in combination with preliminary Monte Carlo calculations. During the learning phase, Monte Carlo calculations are performed in homogeneous media to build up the neural network. Dosimetric calculations in heterogeneous media (unknown to the network) can then be performed by the trained network. Results of equivalent precision can be obtained in less than one minute on a simple PC, whereas several days are needed with a Monte Carlo calculation
Monte-Carlo calculations of positron implantation profiles in silver and gold
Aydin, A
2000-01-01
To investigate the implantation profiles of positrons in silver and gold, the Monte-Carlo program developed previously to simulate the transport of positrons in metals was used. The simulation technique is mainly based on the screened Rutherford differential cross section with a spin-relativistic correction factor for the elastic scattering at high energies, supplemented by total cross sections at low energies; Gryzinski's semi-empirical expression to simulate the energy loss due to inelastic scattering; and Liljequist's model to calculate the total inelastic scattering cross section. Backscattering probabilities and mean penetration depths were calculated from the implantation profiles of positrons at energies between 1 and 50 keV, normally incident on semi-infinite silver and gold targets. The calculated backscattering probabilities and mean penetration depths are compared with comparable Monte-Carlo data and experimental results for semi-infinite silver and gold targets. The agreement is quite satisfactory.
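The screened Rutherford elastic kernel used above has a closed-form inverse CDF, so the scattering cosine can be sampled directly. The screening parameter value below is a made-up illustration, not one of the paper's fitted values:

```python
import random

def sample_mu(eta, rng):
    """Sample the cosine of the elastic scattering angle from the screened
    Rutherford distribution p(mu) ~ 1 / (1 + 2*eta - mu)**2 on [-1, 1],
    by analytic inversion of its CDF (eta is the screening parameter).
    xi = 0 gives mu = -1 and xi -> 1 gives mu -> +1."""
    xi = rng.random()
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (eta + xi)

rng = random.Random(4)
mus = [sample_mu(0.05, rng) for _ in range(50_000)]
forward_fraction = sum(1 for m in mus if m > 0.0) / len(mus)
```

For small eta the distribution is strongly forward peaked, which is why high-energy elastic scattering requires many small-angle deflections (or condensed-history treatment) per trajectory.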
A Monte Carlo model of complex spectra of opacity calculations
International Nuclear Information System (INIS)
Klapisch, M.; Duffy, P.; Goldstein, W.H.
1991-01-01
We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones, but the strength and energy of each line are random. In preliminary tests we can get Rosseland mean opacities within 20% of actual values. (orig.)
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements include execution of the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic restart capability, a modified burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements.
MCOR - Monte Carlo depletion code for reference LWR calculations
International Nuclear Information System (INIS)
Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan
2011-01-01
Development and verification of Monte Carlo burnup calculation system
International Nuclear Information System (INIS)
Ando, Yoshihira; Yoshioka, Kenichi; Mitsuhashi, Ishi; Sakurada, Koichi; Sakurai, Shungo
2003-01-01
A Monte Carlo burnup calculation code system has been developed to accurately evaluate the various quantities required in the back-end field. For verification of the code system, analyses have been performed using nuclide compositions measured under the Actinide Research in a Nuclear Element (ARIANE) program for fuel rods from assemblies irradiated in a commercial Dutch BWR. The code system developed in this paper has been verified through analyses of MOX and UO2 fuel rods. This system makes it possible to reduce the large margins assumed in the present criticality analysis for LWR spent fuel. (J.P.N.)
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Energy Technology Data Exchange (ETDEWEB)
Zourari, K.; Peppa, V.; Papagiannis, P., E-mail: ppapagi@phys.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 11527 Athens (Greece); Ballester, Facundo [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel 24105 (Germany)
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
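The Archer model and the spectrum superposition step described above can be sketched directly; the parameter values and the two-bin spectrum below are made-up illustrations, not the paper's fitted 216 energy-material combinations:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter broad-beam transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma),
    so B(0) = 1 and B decreases monotonically with thickness x."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def spectrum_transmission(x, weights, params):
    """Superpose monoenergetic transmission curves, weighted by the photon
    spectrum, to obtain the polyenergetic broad-beam transmission."""
    total = sum(weights)
    return sum(w * archer_transmission(x, *p)
               for w, p in zip(weights, params)) / total

# two hypothetical energy bins with made-up (alpha, beta, gamma) per unit thickness
params = [(0.20, 0.40, 0.9), (0.05, 0.10, 1.1)]
t2 = spectrum_transmission(2.0, weights=[0.7, 0.3], params=params)
```

Inverting the superposed curve numerically (e.g. by bisection on x) then yields the material thickness required for a prescribed transmission, which is the quantity a shielding design actually needs.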
Electron transport in radiotherapy using local-to-global Monte Carlo
International Nuclear Information System (INIS)
Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.
1994-09-01
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given
New features of the Mercury Monte Carlo particle transport code
International Nuclear Information System (INIS)
Procassini, Richard; Brantley, Patrick; Dawson, Shawn
2010-01-01
Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)
Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations
International Nuclear Information System (INIS)
Soran, P.D.; McKeon, D.C.; Booth, T.E.
1989-07-01
Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown. 5 refs., 1 tab
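The variance-at-fixed-time criterion above (the figure of merit, FOM = 1/(R^2 T)) can be illustrated on a toy integral where the ideal importance function is known exactly. The integrand and sampling densities are illustrative assumptions, not a nuclear-logging model:

```python
import math
import random

def analog_and_importance(n=100_000, lam=10.0, seed=5):
    """Estimate I = integral_0^1 exp(-lam*x) dx two ways.
    Analog: x ~ Uniform(0, 1), score exp(-lam*x).
    Importance: x ~ truncated Exp(lam) on (0, 1); its weight
    exp(-lam*x)/p(x) is constant, i.e. the zero-variance importance
    function for this integral.  At fixed run time the figure of merit
    FOM = 1/(R^2 T) is then driven by the variance ratio."""
    rng = random.Random(seed)
    norm = 1.0 - math.exp(-lam)              # truncated-exponential normalization
    analog, imp = [], []
    for _ in range(n):
        analog.append(math.exp(-lam * rng.random()))
        x = -math.log(1.0 - rng.random() * norm) / lam   # inverse-CDF sample
        imp.append(math.exp(-lam * x) * norm / (lam * math.exp(-lam * x)))
    def stats(scores):
        m = sum(scores) / n
        return m, sum((s - m) ** 2 for s in scores) / (n - 1)
    return stats(analog), stats(imp)

(analog_mean, analog_var), (imp_mean, imp_var) = analog_and_importance()
```

Both estimators are unbiased, but the importance-sampled variance collapses to (numerically) zero, which is the behavior a well-chosen importance function buys in a real logging-tool calculation.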
A decorrelation technique for iterated source Monte Carlo calculations
International Nuclear Information System (INIS)
Nease, Brian R.; Dumonteil, Eric
2010-01-01
In Monte Carlo (MC) iterated source calculations, the distribution of starter neutrons in a given cycle is based on the distribution of fission sites from the previous cycle. The consequence is that the neutron distribution and corresponding tallies in one cycle are correlated to those in successive cycles. Most MC codes do not account for these correlations, resulting in underestimation of the real variance. In this work, we propose a technique to reduce the correlations between MC cycles by modifying the power iteration process. To achieve this objective, we have developed two new methods. The first method is an orthogonalization procedure that removes the eigenmode corresponding to the largest eigenvalue. Since this method relies on the availability of the k-eigenvalues and corresponding eigenmodes, we have developed the second method, which calculates an unbiased estimator of the fission matrix. This estimator is novel because it does not require saving the source distribution from previous cycles. In this paper, we first show how the correlations are related to the eigenmodes of the fission matrix, then develop the theory behind the unbiased fission matrix estimator, and, finally, develop the decorrelation technique. These methods were implemented into a small mono-energetic research code as well as the continuous-energy Tripoli4 Monte Carlo code. Many results are provided using both codes. (author)
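The fission matrix at the center of the method above can be illustrated with a stochastic tally on a two-cell toy system (the transfer means are made up, and the Poisson progeny model is a stand-in for tallying fission-to-fission transfers during the random walk, not the paper's unbiased estimator):

```python
import math
import random

def poisson(mean, rng):
    """Knuth's inversion sampler; adequate for the small means used here."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tally_fission_matrix(M, histories, seed):
    """Tally F[i][j]: mean number of next-generation fission neutrons born
    in cell i per source neutron started in cell j, with progeny drawn as
    Poisson counts whose means are the true transfer rates."""
    rng = random.Random(seed)
    n = len(M)
    F = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for _ in range(histories):
            for i in range(n):
                F[i][j] += poisson(M[i][j], rng)
    return [[F[i][j] / histories for j in range(n)] for i in range(n)]

def dominant_eigenvalue_2x2(F):
    """k estimate: the dominant eigenvalue of the tallied fission matrix."""
    tr = F[0][0] + F[1][1]
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))

M_true = [[0.6, 0.3],   # made-up transfer means; dominant eigenvalue is 0.8
          [0.2, 0.5]]
F_est = tally_fission_matrix(M_true, histories=20_000, seed=6)
k_est = dominant_eigenvalue_2x2(F_est)
```

Once tallied, the matrix's eigenmodes are exactly what the decorrelation technique manipulates: removing the dominant mode from the iterated source suppresses the cycle-to-cycle correlation it carries.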
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
Sgouros, George
2003-01-01
This book examines the applications of Monte Carlo (MC) calculations in therapeutic nuclear medicine, from basic principles to computer implementations of software packages and their applications in radiation dosimetry and treatment planning. It is written for nuclear medicine physicists and physicians as well as radiation oncologists, and can serve as a supplementary text for medical imaging, radiation dosimetry and nuclear engineering graduate courses in science, medical and engineering faculties. With chapters written by recognised authorities in their fields, the book covers the entire range of MC applications in therapeutic medical and health physics, from its use in imaging prior to therapy to dose distribution modelling in targeted radiotherapy. The contributions discuss the fundamental concepts of radiation dosimetry, radiobiological aspects of targeted radionuclide therapy and the various components and steps required for implementing a dose calculation and treatment planning methodology in ...
The calculation of neutron flux using Monte Carlo method
Günay, Mehtap; Bardakçı, Hilal
2017-09-01
In this study, a hybrid reactor system was designed using 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 fluids, the ENDF/B-VII.0 evaluated nuclear data library and 9Cr2WVTa structural material. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of a fusion-fission hybrid reactor system. The neutron flux was calculated as a function of mixture composition, radius and energy spectrum in the designed hybrid reactor system for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using the most recent version of the Monte Carlo code, MCNPX-2.7.0.
Monte Carlo calculations for intermediate-energy standard neutron field
International Nuclear Information System (INIS)
Joneja, O.P.; Subbukutty, K.; Iyengar, S.B.D.; Navalkar, M.P.
The Intermediate-Energy Standard Neutron Field (ISNF), which produces a well characterised spectrum in the energy range of interest for fast reactors including breeders, has been set up at NBS using thin enriched 235U fission sources. A proposal has been made for setting up a similar facility at BARC using, however, easily available natural U instead of enriched U sources to start with. In order to simulate the neutronics of such a facility, a Monte Carlo calculation method has been adopted and developed. The results of these calculations have been compared with those of NBS and it is found that there may be a maximum difference of 10% in spectrum characteristics for the two cases of using thick and thin fission sources. (K.B.)
Monte Carlo dose calculation algorithm on a distributed system
International Nuclear Information System (INIS)
Chauvie, Stephane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe
2003-01-01
The main goal of modern radiotherapy, such as 3D conformal radiotherapy and intensity-modulated radiotherapy, is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over the last years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to application in clinical practice is the long computation time. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware, to be used routinely as an instrument for the evaluation of treatment plans. We set up a Beowulf cluster, configured with 4 nodes connected with a low-cost network, and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the time required for calculation decreased linearly with an increasing number of distributed processes. The good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster system offers a new instrument for dose calculation that could be applied in clinical practice. This would be a particularly good support for highly challenging prescriptions that need good calculation accuracy in zones of high dose gradient and great inhomogeneities
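The near-linear speedup reported above rests on Monte Carlo being embarrassingly parallel: histories are independent, so each node runs its own stream and the master only merges partial tallies. A minimal sketch of that decomposition (the exponential "dose" score is illustrative physics, not Geant4):

```python
import random

def run_batch(histories, seed):
    """One node's share of a toy tally: each history scores an exponentially
    distributed contribution (illustrative physics only).  Returns the
    partial sum and history count, as a worker would report to the master."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(histories)), histories

def combine(batches):
    """Merge independent partial tallies the way the master task would:
    sums and history counts simply add, so the estimate is independent
    of how the histories were split across nodes."""
    total, n = 0.0, 0
    for t, h in batches:
        total += t
        n += h
    return total / n

# four "nodes", each with its own seed and a quarter of the histories
estimate = combine(run_batch(25_000, seed=100 + i) for i in range(4))
```

Because there is no inter-history communication, the only serial cost is the final merge, which is why the measured run time scales almost linearly with the process count.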
Monte carlo dose calculation in dental amalgam phantom
Directory of Open Access Journals (Sweden)
Mohd Zahri Abdul Aziz
2015-01-01
It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires complex algorithm calculation like Monte Carlo (MC). On the other hand, computed tomography (CT) images used in the treatment planning system need to be trustworthy, as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in the treatment volume, the CT image input showed prominent streak artefacts, thus contributing sources of error. Hence, a streak artefact reduction technique was applied to correct the images and, as a result, better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide amalgam voxels with accurate density values. As for the errors of dose uncertainties due to metal amalgam, they were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties through MC calculation.
Monte Carlo dose calculation in dental amalgam phantom.
Aziz, Mohd Zahri Abdul; Yusoff, A L; Osman, N D; Abdullah, R; Rabaie, N A; Salikin, M S
2015-01-01
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Univ. of Arizona, Tucson, AZ (United States)
1990-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Calculating alpha Eigenvalues in a Continuous-Energy Infinite Medium with Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Betzler, Benjamin R. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-04
The α eigenvalue has implications for time-dependent problems where the system is sub- or supercritical. We present methods and results for calculating the α-eigenvalue spectrum of a continuous-energy infinite medium with a simplified Monte Carlo transport code. We formulate the α-eigenvalue problem, detail the Monte Carlo code physics, and provide verification and results. We have developed a method for calculating the α-eigenvalue spectrum in a continuous-energy infinite medium. The continuous-time Markov process described by the transition rate matrix provides a way of obtaining the α-eigenvalue spectrum and kinetic modes, which are useful for approximating the time dependence of the system.
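The transition-rate-matrix view of the α-eigenvalue problem can be illustrated on a toy two-group infinite medium. All speeds and cross sections below are hypothetical illustrative values, not data from the paper; for a 2x2 matrix the spectrum follows from the characteristic polynomial:

```python
import math

# Hypothetical two-group infinite-medium data (illustrative values only):
v = [1.0e9, 2.2e5]            # group speeds [cm/s]
sig_t = [0.30, 0.40]          # total cross sections [1/cm]
sig_s = [[0.10, 0.00],        # sig_s[g][gp]: scattering from group gp into g
         [0.15, 0.30]]
nu_sig_f = [0.05, 0.12]       # nu * fission cross section [1/cm]
chi = [1.0, 0.0]              # fission neutrons born in group 1

# Transition-rate matrix A of dn/dt = A n for the neutron densities n_g:
# production into g from gp, minus removal from g.
A = [[v[gp] * (sig_s[g][gp] + chi[g] * nu_sig_f[gp])
      - (v[g] * sig_t[g] if g == gp else 0.0)
      for gp in range(2)] for g in range(2)]

# Alpha eigenvalues of the 2x2 matrix from the characteristic polynomial.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
alphas = sorted([(tr + disc) / 2.0, (tr - disc) / 2.0], reverse=True)
print("alpha spectrum [1/s]:", alphas)
```

The dominant (most positive) α governs the asymptotic time behaviour exp(αt); with these made-up numbers the medium is supercritical, so the leading eigenvalue is positive.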
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, Larry [Iowa State Univ., Ames, IA (United States)
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
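The importance-sampling idea invoked above can be sketched on a toy model: a Metropolis estimate of the thermal average energy, checked against exact enumeration. A classical Ising chain stands in for the quantum spin system here, since it lets the same sampling logic be shown in a few lines; all parameters are illustrative:

```python
import random, math, itertools

random.seed(1)
N, J, T = 8, 1.0, 2.0          # tiny open Ising chain (illustrative model)
beta = 1.0 / T

def energy(s):
    return -J * sum(s[i] * s[i + 1] for i in range(N - 1))

# Exact thermal average by enumerating all 2^N states (possible only for tiny N).
Z = Eavg = 0.0
for bits in itertools.product((-1, 1), repeat=N):
    w = math.exp(-beta * energy(bits))
    Z += w
    Eavg += w * energy(bits)
Eavg /= Z

# Importance sampling: Metropolis visits states with probability ~ exp(-beta*E),
# so the plain sample mean of E estimates the thermal average.
s = [random.choice((-1, 1)) for _ in range(N)]
samples = []
for sweep in range(6000):
    for i in range(N):
        sn = s[:]
        sn[i] = -sn[i]                      # propose a single spin flip
        dE = energy(sn) - energy(s)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s = sn                          # accept with Metropolis probability
    if sweep >= 1000:                       # discard burn-in sweeps
        samples.append(energy(s))
mc = sum(samples) / len(samples)
print("exact <E> =", round(Eavg, 3), " MC <E> =", round(mc, 3))
```

The point mirrors the abstract: the sampler never touches the full state space (here 256 states, in QMC astronomically more), yet the only error is statistical.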
Quantum Monte Carlo calculations with chiral effective field theory interactions
Energy Technology Data Exchange (ETDEWEB)
Tews, Ingo
2015-10-12
The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schrödinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require local interactions as input. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
Streamlining resummed QCD calculations using Monte Carlo integration
Energy Technology Data Exchange (ETDEWEB)
Farhi, David; Feige, Ilya; Freytsis, Marat; Schwartz, Matthew D. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St., Cambridge, MA 02138 (United States)
2016-08-18
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MADGRAPH http://dx.doi.org/10.1007/JHEP07(2014)079, ALPGEN http://dx.doi.org/10.1088/1126-6708/2003/07/001 or SHERPA http://dx.doi.org/10.1088/1126-6708/2009/02/007. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions, producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e− two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Attached code can be used to modify MADGRAPH to export the relevant LO hard functions and color structures for arbitrary processes.
International Nuclear Information System (INIS)
Kim, Jung-Ha; Hill, Robin; Kuncic, Zdenka
2012-01-01
The Monte Carlo (MC) method has proven invaluable for radiation transport simulations to accurately determine radiation doses and is widely considered a reliable computational measure that can substitute a physical experiment where direct measurements are not possible or feasible. In the EGSnrc/BEAMnrc MC codes, there are several user-specified parameters and customized transport algorithms, which may affect the calculation results. In order to fully utilize the MC methods available in these codes, it is essential to understand all these options and to use them appropriately. In this study, the effects of the electron transport algorithms in EGSnrc/BEAMnrc, which are often a trade-off between calculation accuracy and efficiency, were investigated in the buildup region of a homogeneous water phantom and also in a heterogeneous phantom using the DOSRZnrc user code. The algorithms and parameters investigated include: boundary crossing algorithm (BCA), skin depth, electron step algorithm (ESA), global electron cutoff energy (ECUT) and electron production cutoff energy (AE). The variations in calculated buildup doses were found to be larger than 10% for different user-specified transport parameters. We found that using BCA = EXACT gave the best results in terms of accuracy and efficiency in calculating buildup doses using DOSRZnrc. In addition, using the ESA = PRESTA-I option was found to be the best way of reducing the total calculation time without losing accuracy in the results at high energies (few keV ∼ MeV). We also found that although choosing a higher ECUT/AE value in the beam modelling can dramatically improve computation efficiency, there is a significant trade-off in surface dose uncertainty. Our study demonstrates that a careful choice of user-specified transport parameters is required when conducting similar MC calculations. (note)
Monte Carlo dose calculations for phantoms with hip prostheses
International Nuclear Information System (INIS)
Bazalova, M; Verhaegen, F; Coolens, C; Childs, P; Cury, F; Beaulieu, L
2008-01-01
Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo alloy) was studied. The prostheses were tested in a hypothetical prostate treatment with five 18 MV photon beams. The dose distributions for unilateral and bilateral prosthesis phantoms were calculated with the EGSnrc/DOSXYZnrc MC code. This was done in three phantom geometries: in the exact geometry, in the original CT geometry, and in an artefact-corrected geometry. The artefact-corrected geometry was created using a modified filtered back-projection correction technique. It was found that unilateral prosthesis phantoms do not show large dose calculation errors, as long as the beams miss the artefact-affected volume. This is possible to achieve in the case of unilateral prosthesis phantoms (except for the Co-Cr-Mo prosthesis which gives a 3% error) but not in the case of bilateral prosthesis phantoms. The largest dose discrepancies were obtained for the bilateral Co-Cr-Mo hip prosthesis phantom, up to 11% in some voxels within the prostate. The artefact correction algorithm worked well for all phantoms and resulted in dose calculation errors below 2%. In conclusion, a MC treatment plan should include an artefact correction algorithm when treating patients with hip prostheses
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give greater accuracy than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the 1st and 2nd rows; n=D,T for the 3rd to 5th rows; n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Espel, Federico Puente
The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
Monte Carlo calculation of ''skyshine'' neutron dose from ALS [Advanced Light Source]
International Nuclear Information System (INIS)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on ''skyshine'' neutron dose from ALS: Sources of radiation; ALS modeling for skyshine calculations; MORSE Monte-Carlo; Implementation of MORSE; Results of skyshine calculations from storage ring; and Comparison of MORSE shielding calculations
Shield calculation of research reactor IAN-R1 by Monte Carlo method
International Nuclear Information System (INIS)
Puerta, J.; Buritica, D.A.; Cardenas, H.F.
1993-01-01
Using the Monte Carlo method, a computer program has been developed to simulate neutron radiation transport and determine the basic parameters in shielding calculations. The program has been tested by comparing dose conversion factors with kerma factors issued by the International Commission on Radiation Units and Measurements (ICRU) in its Report No. 26 (1987), giving errors of less than ten percent and demonstrating the soundness of the method. The program computes the transmitted, backscattered and absorbed fluxes, and the energy lost at each collision, when neutrons are produced by a region source of known energy; results are given as conversion factors, and the reliability of the program allows wide application in radiological and medical physics
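The transmitted, backscattered and absorbed fractions mentioned above can be estimated with a one-speed analog Monte Carlo game in a 1-D slab. This is a generic textbook sketch with made-up cross sections, not the program described in the abstract:

```python
import random, math

random.seed(7)

def slab_mc(thickness, sig_t, c, n=20000):
    """Analog one-speed MC for a 1-D slab: returns (transmitted, backscattered,
    absorbed) fractions. c = scattering probability per collision (sig_s/sig_t).
    Thickness is in mean free paths when sig_t = 1."""
    counts = {"trans": 0, "back": 0, "abs": 0}
    for _ in range(n):
        x, mu = 0.0, 1.0                        # normally incident neutron
        while True:
            # Sample the free flight distance and advance along direction mu.
            x += mu * (-math.log(random.random()) / sig_t)
            if x >= thickness:
                counts["trans"] += 1; break     # leaked through the far face
            if x < 0.0:
                counts["back"] += 1; break      # backscattered out the front
            if random.random() > c:
                counts["abs"] += 1; break       # absorbed at the collision site
            mu = 2.0 * random.random() - 1.0    # survives: isotropic scatter
    return tuple(counts[k] / n for k in ("trans", "back", "abs"))

t1 = slab_mc(1.0, sig_t=1.0, c=0.5)
t5 = slab_mc(5.0, sig_t=1.0, c=0.5)
print("1 mfp slab (T, R, A):", t1)
print("5 mfp slab (T, R, A):", t5)
```

The three fractions sum to one by construction, and transmission falls off sharply with slab thickness, as expected.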
Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans
International Nuclear Information System (INIS)
Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A
2005-01-01
The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al and validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding-window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding-window IMRT head-and-neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors. They also showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of implementing the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for set-up errors can be essential for correct identification of the value and position of the hot spot
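The fluence-convolution method amounts to blurring the planned fluence with the probability density of the random set-up error. A 1-D sketch with an idealized open field and a 2 mm Gaussian set-up error follows; the grid and field size are arbitrary choices for illustration:

```python
import math

# Idealized 1-D fluence profile: a 60 mm open field on a 1 mm grid.
dx = 1.0                                    # grid spacing [mm]
x = [i * dx for i in range(-80, 81)]
fluence = [1.0 if abs(xi) <= 30.0 else 0.0 for xi in x]

# Fluence convolution: blur with the Gaussian PDF of the random set-up error.
sigma = 2.0                                 # set-up error std. dev. [mm]
half = 10                                   # kernel half-width (5 sigma)
kernel = [math.exp(-0.5 * (k * dx / sigma) ** 2) for k in range(-half, half + 1)]
norm = sum(kernel)
kernel = [w / norm for w in kernel]         # normalize so fluence is conserved

blurred = []
for i in range(len(fluence)):
    acc = 0.0
    for j, w in enumerate(kernel):
        idx = i + j - half
        if 0 <= idx < len(fluence):
            acc += w * fluence[idx]
    blurred.append(acc)

# The field edge is smeared to about 0.6 while the centre stays at 1.0.
print("fluence at field edge (x=30 mm):", round(blurred[x.index(30.0)], 3))
```

The effect seen in the study appears here in miniature: the convolution softens sharp gradients (penumbra, hot spots) while leaving the flat interior of the field untouched.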
A computer programme for perturbation calculations by correlated sampling Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Asaoka, Takumi
1979-11-01
The perturbation calculation method by the Monte Carlo approach has been improved with the use of the correlated sampling technique and incorporated into the general-purpose Monte Carlo code MORSE. Two methods, the similar-flight-path and identical-flight-path methods, have been adopted for evaluating the reactivity change. In the conventional perturbation method, only the first-order term of the perturbation formula was taken into account, but the present method can estimate up to the second-order term. Through the Monte Carlo games, neutrons passing through perturbed regions in both the unperturbed and perturbed systems are followed in a way that maintains a strong correlation for not only the first but also the second generation. In this article, the perturbation formula is derived from the integral transport equation to estimate the reactivity change. The calculation flow and input/output formats are explained for the user of the present computer programme. In the Appendices, the FORTRAN listing of the main subroutines modified from the original code is shown, in addition to an output example. (author)
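The variance-reduction benefit of correlated sampling is easy to demonstrate: drive the unperturbed and perturbed games with the same random numbers, so the small difference of interest is estimated directly rather than as the difference of two noisy results. A minimal transmission example with illustrative cross sections (not the MORSE implementation):

```python
import random, math

random.seed(3)
sig0, sig1, t, n = 1.00, 1.05, 1.0, 50000   # unperturbed/perturbed totals [1/cm]

d_corr = d_indep = 0.0
for _ in range(n):
    u = random.random()
    # Correlated sampling: the SAME random number drives both games, so the
    # two transmission indicators differ only for the few histories actually
    # affected by the perturbation (a neutron transmits iff u < exp(-sig*t)).
    d_corr += (u < math.exp(-sig1 * t)) - (u < math.exp(-sig0 * t))
    # Independent sampling for comparison: fresh random numbers per game.
    d_indep += ((random.random() < math.exp(-sig1 * t))
                - (random.random() < math.exp(-sig0 * t)))

exact = math.exp(-sig1 * t) - math.exp(-sig0 * t)
print("exact dT =", round(exact, 5))
print("correlated estimate  =", round(d_corr / n, 5))
print("independent estimate =", round(d_indep / n, 5))
```

With identical random numbers the per-history difference is zero for almost every history, so the correlated estimator's variance is far smaller than that of the difference of two independent tallies.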
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
Eichhorn, M.
1986-04-01
A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC, for critical systems, and PUNKT, for point-source/point-detector systems, are presented, and problems in applying the codes to practical tasks are discussed. (author)
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k-eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k-eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k-eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation, even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
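The cycle statistics discussed above (mean k-eff, standard deviation of the mean, and a variance-of-the-variance estimate) can be sketched as follows. The per-cycle k-eff values are synthetic, and the VoV formula used is a standard large-sample moment estimator, not necessarily the unbiased small-sample estimator derived in the paper:

```python
import random, math

random.seed(11)

def keff_stats(k_cycles):
    """Sample mean, std. dev. of the mean, and a simple VoV estimate for a
    list of per-cycle k-eff results (large-sample moment formulas)."""
    n = len(k_cycles)
    mean = sum(k_cycles) / n
    dev = [k - mean for k in k_cycles]
    s2 = sum(d * d for d in dev) / (n - 1)    # unbiased single-cycle variance
    var_mean = s2 / n                         # variance of the cycle mean
    m4 = sum(d ** 4 for d in dev) / n         # fourth central sample moment
    # Approximate variance of the variance estimate (hedged: asymptotic form).
    vov = (m4 - (n - 3) / (n - 1) * s2 * s2) / n
    return mean, math.sqrt(var_mean), vov

# Synthetic per-cycle k-eff results: 50 cycles around 1.002, sigma 0.004.
cycles = [random.gauss(1.002, 0.004) for _ in range(50)]
mean, sd_mean, vov = keff_stats(cycles)
print("k-eff = %.5f +/- %.5f  (VoV approx %.2e)" % (mean, sd_mean, vov))
```

With only 50 cycles, the VoV estimate itself is noisy, which is exactly the small-sample unreliability the paper's history-based statistics are designed to overcome.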
Monte Carlo dose calculation of microbeam in a lung phantom
International Nuclear Information System (INIS)
Company, F.Z.; Mino, C.; Mino, F.
1998-01-01
Full text: Recent advances in synchrotron-generated X-ray beams with high fluence rate permit investigation of the application of an array of closely spaced, parallel or converging microplanar beams in radiotherapy. The proposed technique takes advantage of the hypothesised repair mechanism of capillary cells between alternate microbeam zones, which regenerates the lethally irradiated endothelial cells. The lateral and depth doses of 100 keV microplanar beams are investigated for different beam dimensions and spacings in tissue, lung and tissue/lung/tissue phantoms. The EGS4 Monte Carlo code is used to calculate dose profiles at different depths and for bundles of beams (up to 20 x 20 cm square cross section). The maximum dose on the beam axis (peak) and the minimum interbeam dose (valley) are compared at different depths, bundles, heights, widths and beam spacings. Relatively high peak-to-valley ratios are observed in the lung region, suggesting an ideal environment for microbeam radiotherapy. For a single field, the ratio at the tissue/lung interface will set the maximum dose to the target volume. However, in clinical application several fields would be involved, allowing much greater doses to be applied for the elimination of cancer cells. We conclude therefore that multifield microbeam therapy has the potential to achieve useful therapeutic ratios for the treatment of lung cancer
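The peak-to-valley dose ratio concept can be illustrated by superposing idealized lateral profiles of an array of microplanar beams. Beam width, spacing, and the Gaussian scatter tail below are hypothetical numbers chosen for illustration, not EGS4 results:

```python
import math

# Toy lateral dose model: each 25 um wide microplanar beam contributes a flat
# core plus a small Gaussian scatter tail (hypothetical 100 um sigma). The
# peak-to-valley dose ratio (PVDR) is read off the superposed profile.
width, spacing, sigma, nbeams = 25.0, 200.0, 100.0, 21   # micrometres

def beam_dose(x, centre):
    core = 1.0 if abs(x - centre) <= width / 2.0 else 0.0
    tail = 0.05 * math.exp(-0.5 * ((x - centre) / sigma) ** 2)
    return core + tail

centres = [(i - nbeams // 2) * spacing for i in range(nbeams)]
peak = sum(beam_dose(0.0, c) for c in centres)             # on a beam axis
valley = sum(beam_dose(spacing / 2.0, c) for c in centres) # midway between beams
print("PVDR =", round(peak / valley, 2))
```

Widening the beams or shrinking the spacing raises the valley dose and drives the PVDR down, which is the geometric trade-off the abstract's parameter study explores.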
Monte Carlo calculations supporting patient plan verification in proton therapy
Directory of Open Access Journals (Sweden)
Thiago Viana Miranda Lima
2016-03-01
Full Text Available. Patient treatment-plan verification consumes a substantial amount of quality assurance (QA) resources; this is especially true for Intensity Modulated Proton Therapy (IMPT). The use of Monte Carlo (MC) simulations to support QA has been widely discussed and several methods have been proposed. In this paper we studied an alternative approach to the one currently applied clinically at Centro Nazionale di Adroterapia Oncologica (CNAO). We reanalysed previously published data (Molinelli et al. 2013), in which 9 patient plans were investigated and the warning QA threshold of 3% mean dose deviation was crossed. The possibility that these differences between measured and calculated dose were related to dose modelling (Treatment Planning System (TPS) vs MC), to limitations of the dose delivery system, or to detector mispositioning was originally explored, but other factors, such as the geometric description of the detectors, were not ruled out. For the purpose of this work we compared ionisation-chamber measurements with the results of different MC simulations. Some physical effects introduced by this new approach, for example inter-detector interference and the delta-ray threshold, were also studied. The simulations accounting for a detailed geometry are typically superior (statistical difference: p-value around 0.01) to most of the MC simulations used at CNAO (inferior only to the shift approach used). No real improvement was observed in reducing the current delta-ray threshold used (100 keV), and no significant interference between ion chambers in the phantom was detected (p-value 0.81). In conclusion, it was observed that the detailed geometrical description improves the agreement between measurement and MC calculations in some cases, but in other cases position uncertainty represents the dominant uncertainty. The inter-chamber disturbance was not detected for therapeutic proton energies and the results from the current delta threshold are
Monte Carlo transport of electrons and positrons through thin foils
International Nuclear Information System (INIS)
Legarda, F.; Idoeta, R.
2000-01-01
In measurements made with electrons traversing matter, knowledge of their transmission through the medium, their paths and their angular distribution is useful for processing and extracting information about the traversed medium, and for improving and innovating the techniques that employ electrons, such as medical applications or materials irradiation. This work presents a simulation of the transport of beams of electrons and positrons through thin foils using an analog Monte Carlo code that simulates in detail every electron movement and interaction in matter. Since these particles penetrate thin absorbers, it has been assumed that they interact with matter only through elastic scattering, with negligible energy loss. This type of interaction has been described quite precisely because its angular form strongly influences the angular distribution of electrons and positrons in matter. With this code, the number of particles with energies between 100 and 3000 keV transmitted through different media of various thicknesses has been calculated, as well as their angular distribution, showing good agreement with experimental data. The discrepancies are less than 5% for thicknesses below about 30% of the corresponding range in the tested material. As elastic scattering is very anisotropic, the angular distributions resemble the collimated incident beam for very thin foils, becoming slowly more isotropic as the absorber thickness is increased. (author)
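The elastic-scattering-only transport model described above can be sketched as follows. The screened-Rutherford screening parameter and the mean collision counts are illustrative assumptions, not the paper's data:

```python
import random, math

random.seed(5)

def random_poisson(lam):
    """Sample a Poisson variate (Knuth's method; fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def mean_direction_cosine(mean_collisions, eta=0.05, n=20000):
    """Toy analog simulation of electrons crossing a thin foil with elastic
    scattering only: each history suffers a Poisson number of deflections
    drawn from a screened-Rutherford law with screening parameter eta
    (a hypothetical value chosen purely for illustration)."""
    total = 0.0
    for _ in range(n):
        mu = 1.0                                  # start along the beam axis
        for _ in range(random_poisson(mean_collisions)):
            u = random.random()
            mus = 1.0 - 2.0 * eta * u / (1.0 + eta - u)   # scattering cosine
            phi = 2.0 * math.pi * random.random()         # random azimuth
            st = math.sqrt(max(0.0, 1.0 - mu * mu))
            sts = math.sqrt(max(0.0, 1.0 - mus * mus))
            mu = mu * mus + st * sts * math.cos(phi)      # rotate direction
        total += mu
    return total / n

thin = mean_direction_cosine(2.0)    # thin foil: ~2 collisions on average
thick = mean_direction_cosine(10.0)  # thicker foil: ~10 collisions
print("mean direction cosine, thin foil :", round(thin, 3))
print("mean direction cosine, thick foil:", round(thick, 3))
```

The mean direction cosine decays towards zero as the foil thickens, reproducing in miniature the trend reported above: near-collimated transmission for very thin foils, slowly approaching isotropy with increasing thickness.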
Present status of vectorization for particle transport Monte Carlo
International Nuclear Information System (INIS)
Martin, W.R.
1987-01-01
The conventional particle transport Monte Carlo algorithm is ill-suited for modern vector supercomputers. This history-based algorithm is not amenable to vectorization due to the random nature of the particle transport process, which inhibits the construction of vectors that are necessary for efficient utilization of a vector (pipelined) processor. An alternative algorithm, the event-based algorithm, is suitable for vectorization and has been used by several researchers in recent years to achieve impressive gains (5-20) in performance on modern vector supercomputers. This paper describes the event-based algorithm in some detail and discusses several implementations of this algorithm for specific applications in particle transport, including photon transport in a nuclear fusion plasma and neutron transport in a nuclear reactor. A discussion of the relative merits of these alternative approaches is included. A short discussion of the implementation of Monte Carlo methods on parallel processors, in particular multiple vector processors such as the Cray X-MP/48 and the IBM 3090/400, is included. The paper concludes with some thoughts regarding the potential of massively parallel processors (vector and scalar) for Monte Carlo simulation
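The contrast between the history-based and event-based algorithms can be shown in miniature: instead of following one particle to completion, the event-based loop below applies each event to the whole batch of in-flight particles, which is the structure a vector (or SIMD) processor can pipeline. The physics is a toy 1-D slab with illustrative parameters:

```python
import random, math

random.seed(9)

# Event-based restructuring (sketch): keep a "stack" of all in-flight
# particles and apply the same event to the entire batch each pass.
# Toy physics: 1-D slab, scattering probability C per collision.
THICK, SIG_T, C, N = 3.0, 1.0, 0.7, 10000
xs = [0.0] * N                 # batch of positions
mus = [1.0] * N                # batch of direction cosines
transmitted = backscattered = absorbed = 0

while xs:
    # Event 1 (whole batch): sample flight distances and move every particle.
    xs = [x + mu * (-math.log(random.random()) / SIG_T)
          for x, mu in zip(xs, mus)]
    # Event 2 (whole batch): retire escaped/absorbed particles, rebuild batch.
    alive_x, alive_mu = [], []
    for x, mu in zip(xs, mus):
        if x >= THICK:
            transmitted += 1
        elif x < 0.0:
            backscattered += 1
        elif random.random() > C:
            absorbed += 1
        else:                   # survives the collision: isotropic re-emission
            alive_x.append(x)
            alive_mu.append(2.0 * random.random() - 1.0)
    xs, mus = alive_x, alive_mu

print("T/R/A fractions:", transmitted / N, backscattered / N, absorbed / N)
```

The batch shrinks as histories terminate, which is exactly the vector-length decay that event-based implementations manage by periodically refilling the particle stack.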
Monte Carlo dose calculations for high-dose-rate brachytherapy using GPU-accelerated processing.
Tian, Z; Zhang, M; Hrycushko, B; Albuquerque, K; Jiang, S B; Jia, X
2016-01-01
Current clinical brachytherapy dose calculations are typically based on the American Association of Physicists in Medicine Task Group report 43 (TG-43) guidelines, which approximate the patient geometry as an infinitely large water phantom. This ignores patient and applicator geometries and heterogeneities, causing dosimetric errors. Although Monte Carlo (MC) dose calculation is commonly recognized as the most accurate method, its long computational time is a major bottleneck for routine clinical application. This article presents our recent development of a fast MC dose calculation package for high-dose-rate (HDR) brachytherapy, gBMC, built on a graphics processing unit (GPU) platform. gBMC simulates photon transport in voxelized geometry, with the physics relevant to the (192)Ir HDR brachytherapy energy range considered. A phase-space file was used as the source model. GPU-based parallel computation was used to transport multiple photons simultaneously, one per GPU thread. We validated gBMC by comparing its dose calculation results in water with those computed with TG-43. We also studied heterogeneous phantom cases and a patient case, and compared gBMC results with Acuros BV results. The radial dose function in water calculated by gBMC agreed with TG-43, and the accuracy and efficiency of the GPU-based MC dose calculation package make it attractive for clinical applications. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Monte Carlo benchmark calculations for the 400 MWth PBMR core
International Nuclear Information System (INIS)
Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.
2007-01-01
A large interest in high-temperature gas-cooled reactors (HTGR) has been initiated in connection with hydrogen production in recent years. In this study, as part of the work for establishing a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing methods for HTGRs to analyze the neutronics and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems, for fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2), proposed for the PBMR. After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble is divided into cubic lattice elements in order to model a fuel pebble which contains, on average, 15000 CFPs (Coated Fuel Particles). Each element contains one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All 5 different concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered-cubic) lattice model was employed in order to achieve the random packing of the core with a packing fraction of 0.61. The BCC lattice was also employed, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61, in Case F-2. The calculations were pursued with the ENDF/B-VI cross-section library and used sab2002 S(α,
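The quoted 0.1635 cm lattice-element side follows from dividing the pebble's fuel-zone volume equally among the CFPs, one per cube. The 2.5 cm fuel-zone radius below is the usual PBMR pebble specification, assumed here purely to check the arithmetic:

```python
import math

# Check the cubic-lattice element size quoted for the PBMR pebble model:
# the fuel zone (radius 2.5 cm, an assumption for this check) holds ~15000
# CFPs, one per cubic lattice element, so each cube's volume is V_zone/15000.
fuel_zone_radius = 2.5                       # cm
n_cfp = 15000
fuel_zone_volume = 4.0 / 3.0 * math.pi * fuel_zone_radius ** 3
side = (fuel_zone_volume / n_cfp) ** (1.0 / 3.0)
print("lattice element side = %.4f cm" % side)   # close to the quoted 0.1635 cm
```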
Usefulness of the Monte Carlo method in reliability calculations
International Nuclear Information System (INIS)
Lanore, J.M.; Kalli, H.
1977-01-01
Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies at the Saclay Nuclear Research Center) are presented. First, an uncertainty analysis is given for a simplified spray system; the Monte Carlo program PATREC-MC was written to solve this problem with the system components given in fault-tree representation. The second program, MONARC 2, was written to solve complex-system reliability problems by Monte Carlo simulation; here again the system (a residual heat removal system) is given in fault-tree representation. Third, the Monte Carlo program MONARC was used, instead of a Markov diagram, to simulate an electric power supply comprising two nets and two stand-by diesels
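The fault-tree Monte Carlo approach used by codes like PATREC-MC can be illustrated with a toy example (the tree structure and failure probabilities below are invented for illustration, not taken from the Saclay systems): component states are sampled at random and the top event is evaluated through the tree's logic.

```python
import random

def top_event(a_failed, b_failed, c_failed):
    """Toy fault tree: the system fails if pump A fails AND
    either of the redundant valves B or C fails."""
    return a_failed and (b_failed or c_failed)

def unavailability(p_a, p_b, p_c, n_trials, seed=1):
    """Monte Carlo estimate of the top-event probability."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Sample each component's state independently.
        a = rng.random() < p_a
        b = rng.random() < p_b
        c = rng.random() < p_c
        if top_event(a, b, c):
            failures += 1
    return failures / n_trials

est = unavailability(0.1, 0.1, 0.1, 200_000)
# Analytic check for this toy tree: 0.1 * (0.1 + 0.1 - 0.01) = 0.019
print(est)
```

Real reliability codes add time-dependent failure and repair models on top of this sampling loop, but the estimator has the same shape.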
Monte Carlo calculations and measurements of spectra from a C-14 source
International Nuclear Information System (INIS)
Borg, J.
1996-05-01
To perform Monte Carlo simulations it is necessary to model the physical geometries, i.e., the source and detector geometry. However, a complete model of the physical geometry may not be possible, or may result in very low calculation efficiency. Substituting a simplified model for the complete source model is one way of increasing the calculation efficiency. This report describes the study of a simplified model of a ¹⁴C source. Results of Monte Carlo calculations with the EGS4 code are compared with measurements made with a β spectrometer consisting of two coaxial Si detectors and with a low-energy photon spectrometer, a Si(Li) detector. Calculations and measurements generally show good agreement. However, the difference (a factor of 4) between the calculated and measured electron response of the Si(Li) detector indicates that this detector has a dead layer about 12 μm thick, instead of the 0.2 μm reported by the manufacturer. The efficiency of the calculations is increased by a factor of 10 when the complete source model is replaced by the simplified one; this reduces the calculation time for detector responses from weeks to a few days on the NRC SGI R4400 computers. The good agreement between measured and calculated data also verifies that the EGS4 Monte Carlo code is a reliable and useful tool for simulating coupled electron and photon transport for particles with energies down to a few keV. (au) 3 tabs., 15 ills., 11 refs
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle-transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.
2007-01-01
The purpose of the present work is to develop an efficient method, based on Monte Carlo calculations, for computing the neutron importance function in fissionable assemblies under all criticality conditions. The neutron importance function plays an important role in perturbation theory and reactor-dynamics calculations. Usually this function is determined by calculating the adjoint flux from the adjoint-weighted transport equation using deterministic methods; in complex geometries, however, these calculations become very complicated. In this article, exploiting the capability of the MCNP code to handle complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance is introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN calculations in one- and two-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries is demonstrated by calculating the neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor
Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice-physics and criticality-safety applications. The probability-table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, although this model preserves the quality of the physical laws present in the ENDF format. Because of its low computational cost, the multigroup Monte Carlo approach is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice-physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code; (2) the combination of probability-table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm; (3) the derivation of a model for taking into account anisotropic
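The delta-tracking rejection technique mentioned in point (2) samples flight distances with a majorant cross section and accepts collisions with probability Σ(x)/Σ_maj, so material boundaries never need to be intersected explicitly. A minimal 1-D sketch (the slab layout and cross sections are illustrative, not from the thesis):

```python
import math
import random

def transmit_delta_tracking(sigma, thickness, sigma_maj, n, seed=7):
    """Estimate transmission through a purely absorbing 1-D slab
    using delta-tracking with majorant cross section sigma_maj."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            # Sample the flight distance against the majorant.
            x += -math.log(rng.random()) / sigma_maj
            if x >= thickness:
                transmitted += 1
                break
            # Real collision with probability sigma(x)/sigma_maj;
            # otherwise it is a virtual collision: keep flying.
            if rng.random() < sigma(x) / sigma_maj:
                break  # absorbed
    return transmitted / n

# Two-region slab: sigma = 0.5 for x < 1 cm, sigma = 0.8 beyond.
sigma = lambda x: 0.5 if x < 1.0 else 0.8
est = transmit_delta_tracking(sigma, 2.0, 0.8, 100_000)
# Analytic transmission: exp(-0.5 - 0.8) = exp(-1.3) ≈ 0.2725
```

The estimator is unbiased as long as the majorant bounds the true cross section everywhere; its efficiency degrades when the majorant is much larger than the typical local value.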
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. The electron beam was assumed to be monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron-beam collimation is presented. (author). 20 refs, 29 figs.
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Energy Technology Data Exchange (ETDEWEB)
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally, as well as for the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing prediction of the computing cost (formulated as the product of the sample variance and the time per particle history, σ²_s·τ_p) associated with a given set of splitting parameters. Optimum splitting-surface locations and splitting ratios are determined. The benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep-penetration calculations).
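The splitting mechanism the paper analyzes can be sketched in a few lines. In this toy version (pure absorption, straight-line flight, one splitting surface; all numbers are illustrative), each particle reaching the surface is split into k daughters of weight 1/k, which leaves the transmission estimate unbiased while reducing its variance at the cost of extra time per history:

```python
import math
import random

def transmission_with_splitting(sigma, x_split, x_out, k, n, seed=3):
    """Estimate transmission through a purely absorbing slab of
    thickness x_out, splitting k-for-1 at the internal surface x_split."""
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n):
        d = -math.log(rng.random()) / sigma  # free flight to absorption
        if d < x_split:
            continue  # absorbed before reaching the splitting surface
        # Particle reaches x_split: split into k daughters of weight 1/k.
        w = 1.0 / k
        for _ in range(k):
            d2 = -math.log(rng.random()) / sigma  # memoryless restart
            if d2 >= x_out - x_split:
                tally += w  # daughter escapes through x_out
    return tally / n

est = transmission_with_splitting(1.0, 1.0, 2.0, 4, 100_000)
# Analytic transmission: exp(-2) ≈ 0.1353
```

The paper's cost figure σ²_s·τ_p captures exactly this trade-off: splitting lowers σ²_s but raises τ_p, and the optimum surface locations and ratios balance the two.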
Importance estimation in Monte Carlo modelling of neutron and photon transport
International Nuclear Information System (INIS)
Mickael, M.W.
1992-01-01
The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)
Progress on RMC: a Monte Carlo neutron transport code for reactor analysis
International Nuclear Information System (INIS)
Wang, Kan; Li, Zeguang; She, Ding; Liu, Yuxuan; Xu, Qi; Shen, Huayun; Yu, Ganglin
2011-01-01
This paper presents a new 3-D Monte Carlo neutron transport code named RMC (Reactor Monte Carlo code), specifically intended for reactor-physics analysis. The code is being developed by the Department of Engineering Physics at Tsinghua University and is written in C++ and Fortran 90, the latest version being RMC 2.5.0. RMC uses the delta-tracking method to simulate neutron transport, whose advantages include fast simulation in complex geometries and relatively simple handling of complicated geometrical objects. Other techniques, such as a computational-expense-oriented method and a hash-table method, have been developed and implemented in RMC to speed up the calculation. To meet the requirements of reactor analysis, RMC provides criticality calculation, burnup calculation and kinetics simulation. In this paper, comparison calculations of criticality, burnup and transient problems are carried out with RMC and other Monte Carlo codes, and the results show that RMC performs quite well on these kinds of problems. Based on MPI, RMC supports parallel computation and achieves a high speedup. The code is still under intensive development, and further work directions are mentioned at the end of this paper. (author)
Monte Carlo transport simulation of velocity undershoot in zinc blende and wurtzite InN
Energy Technology Data Exchange (ETDEWEB)
Wang, Shulong; Liu, Hongxia; Gao, Bo; Zhuo, Qingqing [School of Microelectronics, Key Laboratory of Wide Band-gap Semiconductor Materials and Device, Xidian University, Xi'an, 710071 (China)
2012-09-15
Velocity undershoot in zinc blende (ZB) and wurtzite (WZ) InN is investigated by ensemble Monte Carlo (EMC) calculation. The results show that velocity undershoot arises from the relatively long energy relaxation time compared with the momentum relaxation time. Monte Carlo transport simulations over a wide range of electric fields are presented. The results show that, compared with velocity overshoot, velocity undershoot strongly affects electron transport when the electric field changes rapidly in time and space. A comparative study of WZ and ZB InN shows that WZ InN is more advantageous for device applications owing to its excellent electron-transport properties. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Variational Monte Carlo calculations of few-body nuclei
International Nuclear Information System (INIS)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He, are presented. 25 refs., 3 figs
Variational Monte Carlo calculations of few-body nuclei
Energy Technology Data Exchange (ETDEWEB)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He, are presented. 25 refs., 3 figs.
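The variational Monte Carlo idea can be illustrated on a one-dimensional harmonic oscillator (ħ = m = ω = 1), a standard textbook stand-in for the nuclear problem: sample |ψ_T|² with Metropolis and average the local energy E_L = -ψ_T''/(2ψ_T) + V. For the trial function exp(-a x²/2), E_L = a/2 + x²(1 - a²)/2, which is constant at the exact ground-state energy 0.5 when a = 1:

```python
import math
import random

def vmc_energy(a, n_steps=50_000, step=1.0, seed=11):
    """Variational MC for H = -1/2 d²/dx² + x²/2 with psi_T = exp(-a x²/2)."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(n_steps):
        # Metropolis move sampling |psi_T|² ∝ exp(-a x²).
        x_new = x + rng.uniform(-step, step)
        if rng.random() < math.exp(-a * (x_new**2 - x**2)):
            x = x_new
        # Local energy for this trial function.
        e_sum += a / 2.0 + x * x * (1.0 - a * a) / 2.0
    return e_sum / n_steps

print(vmc_energy(1.0))   # exactly 0.5: zero-variance trial function
print(vmc_energy(0.8))   # above 0.5, as the variational principle requires
```

Nuclear VMC replaces this scalar trial function with correlated many-body wave functions, but the sampling-and-local-energy structure is the same.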
Cross sections needed for investigations into track phenomena and Monte-Carlo calculations
International Nuclear Information System (INIS)
Paretzke, H.G.
1983-01-01
Investigations into basic radiation action mechanisms as well as into applied radiation transport problems (e.g. electron microscopy) greatly benefit from detailed computer simulations of charged particle track structures in matter. The first and in fact most important and most difficult step in any such calculation is the derivation of reliable cross sections for the most relevant interaction processes in the material(s) under consideration. The second step in radiation transport calculations is the testing of results or intermediate results for quantitative or qualitative consistency with other experimental or theoretical information (e.g. yields, backscatter factors). This paper discusses the types of the most important collision cross sections for studies on track phenomena by detailed Monte-Carlo calculations, the necessary accuracy of such data and various means of consistency checks of calculated results. This will be done mainly with examples taken from radiation physics as applied to dosimetric and biological problems (i.e. to gaseous and condensed targets). 12 references, 8 figures
Monte Carlo radiation transport: A revolution in science
International Nuclear Information System (INIS)
Hendricks, J.
1993-01-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods, which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil-well exploration to aerospace, from physics research to energy production, from safety to bulk-materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science
International Nuclear Information System (INIS)
Ding, Aiping; Liu, Tianyu; Liang, Chao; Ji, Wei; Shephard, Mark S.; Xu, X George; Brown, Forrest B.
2011-01-01
Monte Carlo simulation is ideally suited for solving the Boltzmann neutron transport equation in inhomogeneous media. However, routine applications require the computation time to be reduced to hours or even minutes on a desktop system. Interest in adopting GPUs for Monte Carlo acceleration is rapidly mounting, fueled partly by the parallelism afforded by the latest GPU technologies and by the challenge of performing full-size reactor-core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem and for an eigenvalue/criticality problem were developed for CPU and GPU environments, respectively, to evaluate the computational speedup afforded by GPUs. The results suggest that a speedup factor of 30 in Monte Carlo radiation transport of neutrons is within reach using state-of-the-art GPU technologies; for the eigenvalue/criticality problem, however, the speedup was 8.5. In comparison, for the task of voxelizing unstructured mesh geometry, which is more parallel in nature, a speedup of 45 was obtained. It was observed that, to date, most attempts to adopt GPUs for Monte Carlo acceleration have been based on naïve implementations and have not yielded the anticipated gains. Successful implementation of Monte Carlo schemes on GPUs will likely require the development of an entirely new code. Given the prediction that future-generation GPU products will bring exponentially improved computing power and performance, innovative hardware and software solutions may make it possible to achieve full-core Monte Carlo calculation within one hour on a desktop computer system in a few years. (author)
International Nuclear Information System (INIS)
Nordenfors, C.
1999-02-01
To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector, which is normally used in the in-situ measurements carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center; it is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy is comparable to that of other methods already in use for measuring dose rate. Bearing in mind that this method provides nuclide-specific doses, it is useful in radiation protection, since knowing the relations between different nuclides and how they change is very important when estimating the risks
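The unfolding step amounts to solving m = R φ, where R is the simulated response function in matrix form, m the measured pulse-height spectrum, and φ the incident fluence spectrum. A minimal numerical illustration (the 3×3 response matrix and spectrum below are invented):

```python
import numpy as np

# Hypothetical response matrix: column j gives the pulse-height
# distribution recorded for unit fluence in energy bin j
# (full-energy peak on the diagonal, partial-energy events above it).
R = np.array([
    [0.7, 0.2, 0.1],
    [0.0, 0.6, 0.2],
    [0.0, 0.0, 0.5],
])

true_fluence = np.array([100.0, 50.0, 20.0])
measured = R @ true_fluence        # what the detector would record

# Unfolding: invert the response to recover the incident spectrum.
unfolded = np.linalg.solve(R, measured)
```

In practice R is rectangular and ill-conditioned and the measurement is noisy, so regularised or iterative unfolding algorithms are used rather than a direct solve; the sketch only shows the structure of the problem.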
International Nuclear Information System (INIS)
Pereira, A.; Broed, R.
2002-03-01
In this report, several issues related to the probabilistic methodology for performance assessments of repositories for high-level nuclear waste and spent fuel are addressed. Random Monte Carlo sampling is used to perform uncertainty analyses for the migration of four nuclides and a decay chain in the geosphere. The nuclides studied are cesium, chlorine, iodine and carbon, together with radium from a decay chain. A procedure is developed to use the hydrogeological data obtained from a three-dimensional discrete-fracture model as input to one-dimensional transport models for Monte Carlo calculations. This procedure retains the original correlations between parameters representing different physical entities, namely between the groundwater flow rate and the hydrodynamic dispersion in fractured rock, in contrast with the common approach that assumes all parameters supplied to the Monte Carlo calculations are independent of each other. A small program is developed to allow this procedure to be used when the available three-dimensional data are too scarce for Monte Carlo calculations; it randomly samples data from the 3-D data distribution in the hydrogeological calculations. The impact of correlations between the groundwater flow and the hydrodynamic dispersion on the uncertainty in the output distribution of the radionuclides' peak releases is studied; it is shown that for the SITE-94 data this impact can be disregarded. A global sensitivity analysis is also performed on the peak releases of the radionuclides studied. The results of these sensitivity analyses, using several known statistical methods, show discrepancies that are attributed to the limitations of those methods. The reason for the difficulties lies in the complexity of the models needed to predict radionuclide migration, models that deliver results covering variation of several
International Nuclear Information System (INIS)
Tanner, J.E.; Witts, D.; Tanner, R.J.; Bartlett, D.T.; Burgess, P.H.; Edwards, A.A.; More, B.R.
1995-01-01
A Monte Carlo facility has been developed for modelling the response of semiconductor devices to mixed neutron-photon fields. It uses the MCNP code for neutron and photon transport together with a new code, STRUGGLE, developed to model the secondary charged-particle transport. It is thus possible to predict the pulse-height distribution expected from prototype electronic personal detectors, given the detector efficiency factor. Initial calculations have been performed on a simple passivated implanted planar silicon detector. This device has also been irradiated in neutron, gamma and X-ray fields to verify the accuracy of the predictions. Good agreement was found between experiment and calculation. (author)
Green's function Monte Carlo calculations of ⁴He
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.A.
1988-01-01
Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.
2007-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the
Monte Carlo calculation with unquenched Wilson-Fermions
International Nuclear Information System (INIS)
Montvay, I.
1984-01-01
A Monte Carlo updating procedure taking into account virtual quark loops is described. It is based on a high-order hopping-parameter expansion of the quark determinant for Wilson fermions. In a first test run, Wilson-loop expectation values are measured on a 6⁴ lattice at β=5.70, using a 16th-order hopping-parameter expansion for the quark determinant. (orig.)
Parallelism in continuous energy Monte Carlo method for neutron transport
Energy Technology Data Exchange (ETDEWEB)
Uenohara, Yuji (Nuclear Engineering Lab., Toshiba Corp. (Japan))
1993-04-01
The continuous-energy Monte Carlo code VIM was implemented on PRODIGY, a prototype highly parallel computer developed by TOSHIBA Corporation. The author distributed nuclear data across the processing elements (PEs) in order to study domain decomposition of the velocity space. An eigenvalue problem for a 1-D plate-cell infinite-lattice mockup of ZPR-6-7 was examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. The communication overhead arising from neutron transport was negligible compared with the heavy computing load of the Monte Carlo simulations. For highly parallel computers, the communication overhead therefore scarcely affected the parallelization efficiency; for massively parallel computers, however, the control of the PEs resulted in considerable communication overhead. (orig.)
Parallelism in continuous energy Monte Carlo method for neutron transport
International Nuclear Information System (INIS)
Uenohara, Yuji
1993-01-01
The continuous-energy Monte Carlo code VIM was implemented on PRODIGY, a prototype highly parallel computer developed by TOSHIBA Corporation. The author distributed nuclear data across the processing elements (PEs) in order to study domain decomposition of the velocity space. An eigenvalue problem for a 1-D plate-cell infinite-lattice mockup of ZPR-6-7 was examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. The communication overhead arising from neutron transport was negligible compared with the heavy computing load of the Monte Carlo simulations. For highly parallel computers, the communication overhead therefore scarcely affected the parallelization efficiency; for massively parallel computers, however, the control of the PEs resulted in considerable communication overhead. (orig.)
International Nuclear Information System (INIS)
Christoforou, Stavros; Hoogenboom, J. Eduard
2011-01-01
A zero-variance-based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor, using the adjoint function obtained from a deterministic calculation to bias the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared with a standard criticality calculation. In addition, the biasing does not affect the source-distribution convergence of the system. However, since the code lacked speed optimisations, the higher CPU cost meant that a corresponding increase in overall calculation efficiency could not be demonstrated. (author)
Energy Technology Data Exchange (ETDEWEB)
Cobut, V.; Frongillo, Y.; Jay-Gerin, J.-P. (Sherbrooke Univ., PQ (Canada). Faculte de Medecine); Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1992-12-01
An energy spectrum of 'subexcitation electrons' produced in liquid water by electrons with initial energies of a few keV is obtained using a Monte Carlo transport simulation. It is found that introducing vibrational-excitation cross sections leads to the appearance of a sharp peak in the probability density function near the electronic-excitation threshold. Electrons contributing to this peak are shown to be more naturally described by a novel energy spectrum, which we propose to name the 'vibrationally-relaxing electron' spectrum. The corresponding distribution function is presented, and an empirical expression for it is given. (author).
Graphical User Interface for Simplified Neutron Transport Calculations
Energy Technology Data Exchange (ETDEWEB)
Schwarz, Randolph; Carter, Leland L
2011-07-18
A number of codes perform simple photon-physics calculations, but the nuclear industry lacks similar tools for simplified neutron-physics shielding calculations. With the increased importance of neutron calculations for homeland-security applications and defense nuclear-nonproliferation tasks, an efficient method for performing simple neutron transport calculations becomes increasingly important. Codes such as Monte Carlo N-Particle (MCNP) can perform the transport calculations; however, the technical details involved in setting up, running, and interpreting the required simulations are quite complex and typically go beyond the abilities of most users who need a simple answer to a neutron transport calculation. The work documented in this report resulted in the development of the NucWiz program, which can create an MCNP input file for a set of simple geometry, source, and detector configurations. The user selects source, shield, and tally configurations from pre-defined lists, and the software creates a complete MCNP input file that can optionally be run, with the results viewed inside NucWiz.
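A NucWiz-style generator essentially fills an input-deck template from a few user choices. The sketch below is hypothetical (the cell/surface layout, materials, and card choices are invented for illustration, not NucWiz's actual templates) and emits a minimal MCNP-like deck for a point neutron source inside a spherical shield:

```python
def make_deck(shield_radius_cm, shield_material, density_g_cc, nps=1_000_000):
    """Build a minimal MCNP-style input deck for a point neutron source
    at the centre of a spherical shield. Hypothetical template only."""
    return "\n".join([
        "Simple spherical shield, point source",
        "c --- cell cards",
        f"1 1 -{density_g_cc} -1      imp:n=1  $ shield",
        "2 0          1 -2    imp:n=1  $ void gap to tally surface",
        "3 0          2       imp:n=0  $ outside world",
        "",
        "c --- surface cards",
        f"1 so {shield_radius_cm}",
        f"2 so {shield_radius_cm + 10.0}",
        "",
        "c --- data cards",
        f"m1 {shield_material}",
        "sdef pos=0 0 0 erg=2.5 par=n",
        "f2:n 2",
        f"nps {nps}",
        "",
    ])

deck = make_deck(30.0, "1001 2 8016 1", 1.0)  # e.g. water by atom fractions
print(deck)
```

The value of such a tool lies in constraining the user to configurations for which the template is known to be valid, so that the generated deck is always runnable.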
Sabouri, P.; Bidaud, A.; Dabiran, S.; Lecarpentier, D.; Ferragut, F.
2014-04-01
The development of tools for nuclear data uncertainty propagation in lattice calculations is presented. The Total Monte Carlo method and the Generalized Perturbation Theory method are used with the code DRAGON to allow propagation of nuclear data uncertainties in transport calculations. Both methods begin the propagation of uncertainties at the most elementary level of the transport calculation - the Evaluated Nuclear Data File. The developed tools are applied to provide estimates of response uncertainties for a PWR cell as a function of burnup.
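In the Generalized Perturbation Theory route, the final propagation step is commonly the "sandwich rule": the response variance is the sensitivity vector folded with the nuclear data covariance matrix. A minimal numerical illustration follows, with invented sensitivities and covariances (not DRAGON or ENDF values):

```python
import numpy as np

# "Sandwich rule" of Generalized Perturbation Theory: var(R) = S V S^T,
# with S the vector of relative sensitivities of response R to nuclear data
# parameters and V their relative covariance matrix. Numbers are invented.

S = np.array([[0.30, -0.10, 0.05]])        # dR/R per dp/p for three parameters
V = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])   # symmetric, positive semi-definite

var_R = (S @ V @ S.T).item()
print(f"relative standard deviation of the response: {var_R ** 0.5:.4%}")
```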
Application of Monte Carlo method for dose calculation in thyroid follicle
International Nuclear Information System (INIS)
Silva, Frank Sinatra Gomes da
2008-02-01
The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport and can calculate the energy deposited in models of organs and/or tissues, as well as in models of the cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is thus of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation, in particular to radioisotopes of iodine, a great amount of which may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for the Auger electrons, internal conversion electrons and beta particles emitted by iodine-131 and the short-lived iodines 132, 133, 134 and 135. The simulations with the MCNP4C code showed that, on average, iodine-131 contributes 25% of the total dose absorbed by the colloid and the short-lived iodines 75%; for follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the codes MCNP4C, EPOTRAN, EGS4 and by deterministic methods. (author)
Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
International Nuclear Information System (INIS)
Bošnjaković, D; Petrović, Z Lj; Dujko, S; White, R D
2014-01-01
A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs), which are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as input to our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in the various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component with no indication of any NDC for the flux component in the ALICE timing RPC system. We systematically study the origin and mechanisms of such phenomena as well as the possible physical implications arising from their explicit inclusion in models of RPCs. Spatially-resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena. (paper)
International Nuclear Information System (INIS)
Jacimovic, R.; Maucec, M.; Trkov, A.
2002-01-01
In this work, experimental verification of Monte Carlo neutron flux calculations in the carousel facility (CF) of the 250 kW TRIGA Mark II reactor at the Jozef Stefan Institute is presented. Simulations were carried out using the Monte Carlo radiation-transport code MCNP4B. The objective of the work was to model and verify experimentally the azimuthal variation of the neutron flux in the CF for core No. 176, set up in April 2002. The 198Au activities of Al-Au(0.1%) disks irradiated in 11 channels of the CF, covering 180° around the perimeter of the core, were measured. The comparison between MCNP calculation and measurement shows relatively good agreement and demonstrates the overall accuracy with which the detailed spectral characteristics can be predicted by calculations. (author)
Condensed history Monte Carlo methods for photon transport problems
International Nuclear Information System (INIS)
Bhan, Katherine; Spanier, Jerome
2007-01-01
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions
International Nuclear Information System (INIS)
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte Carlo means of calculating small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out its peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions for improving the method
Optimizing portal dose calculation for an amorphous silicon detector using Swiss Monte Carlo Plan
International Nuclear Information System (INIS)
Frauchiger, D; Fix, M K; Frei, D; Volken, W; Mini, R; Manser, P
2007-01-01
Purpose: Modern treatment planning systems (TPS) are able to calculate doses within the patient for numerous delivery techniques, e.g. intensity modulated radiation therapy (IMRT). Even dose predictions to an electronic portal imaging device (EPID) are available in some TPS, but with limitations in accuracy. With the steadily increasing number of facilities using EPIDs for pre-treatment and treatment verification, the desire to calculate accurate EPID dose distributions is growing. A solution to this problem is the use of Monte Carlo (MC) methods. The aims of this study were, first, to implement geometries of an amorphous silicon based EPID with varying levels of geometric complexity; second, to analyze the differences between simulation results and measurements for each geometry; and third, to compare different transport algorithms within all EPID geometries in a flexible C++ MC environment. Materials and Methods: In this work three geometry sets representing the EPID are implemented and investigated. To gain flexibility in the MC environment, geometry and particle transport code are kept independent. That allows the user to select between the transport algorithms EGSnrc, VMC++ and PIN (an in-house developed transport code) while using any of the implemented EPID geometries. For all implemented EPID geometries, dose distributions were calculated for 6 MV and 15 MV beams using the different transport algorithms and then compared with measurements. Results: A very simple geometry, consisting of a water slab, is not capable of reproducing measurements, whereas 8 material layers perform well. The more layers with different materials are used, the longer the calculations take. EGSnrc and VMC++ lead to dosimetrically equal results. Gamma analysis between calculated and measured EPID dose distributions, using a dose difference criterion of ± 3% and a distance to agreement criterion of ± 3 mm, revealed a gamma value < 1 within more than 95% of all pixels, that have a
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
International Nuclear Information System (INIS)
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally, as well as for the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard Sn (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σs²τp) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed
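The trade-off being optimized, where splitting lowers the sample variance but raises the time per history, shows up even in a toy model. The sketch below transmits pencil rays through a purely absorbing slab (thickness in mean free paths) with one splitting surface; all numbers are illustrative and the model is far simpler than the deterministic cost analysis of the paper.

```python
import random, math

def transmit(n_hist, split, thickness=4.0, surface=2.0, seed=1):
    """Toy analog attenuation through a pure absorber with geometric
    splitting at one internal surface. Returns (mean score, sample
    variance of the per-history score)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n_hist):
        score = 0.0
        stack = [(0.0, 1.0)]                   # (position, statistical weight)
        while stack:
            x, w = stack.pop()
            x_new = x + rng.expovariate(1.0)   # free flight to next collision
            if x < surface <= x_new:
                # crossed the splitting surface: replace by `split` copies,
                # each carrying 1/split of the weight (exponential is memoryless)
                stack.extend((surface, w / split) for _ in range(split))
            elif x_new >= thickness:
                score += w                     # transmitted without collision
        total += score
        total_sq += score * score
    mean = total / n_hist
    return mean, total_sq / n_hist - mean * mean

m1, v1 = transmit(20000, split=1)
m8, v8 = transmit(20000, split=8)
print(f"analog:   T≈{m1:.4f}  var={v1:.5f}")
print(f"split 8:  T≈{m8:.4f}  var={v8:.5f}  (exact T={math.exp(-4):.4f})")
```

Both estimators are unbiased; the split run trades extra tracking work below the surface for a markedly smaller per-history variance, which is exactly the cost balance the paper's analysis optimizes.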
A retrospective and prospective survey of three-dimensional transport calculations
International Nuclear Information System (INIS)
Nakahara, Yasuaki
1985-01-01
A retrospective survey is made of three-dimensional radiation transport calculations. An introduction is given to computer codes based on distinctive numerical methods, such as the Monte Carlo, direct integration, Sn and finite element methods, for solving the three-dimensional transport equation. Prospective discussions are made on the pros and cons of these methods. (author)
Monte Carlo criticality calculations accelerated by a growing neutron population
International Nuclear Information System (INIS)
Dufek, Jan; Tuttelberg, Kaur
2016-01-01
Highlights: • Efficiency is significantly improved when the population size grows over cycles. • The bias in the fission source is balanced against other errors in the source. • The bias in the fission source decays over the cycle as the population grows. - Abstract: We propose a fission source convergence acceleration method for Monte Carlo criticality simulation. As the efficiency of Monte Carlo criticality simulations is sensitive to the selected neutron population size, the method attempts to achieve the acceleration via on-the-fly control of the neutron population size. The neutron population size is gradually increased over successive criticality cycles so that the fission source bias amounts to a specific fraction of the total error in the cumulative fission source. An optimal setting then gives a reasonably small neutron population size, allowing for an efficient source iteration; at the same time the neutron population size is chosen large enough to ensure a sufficiently small source bias, so that it does not limit the accuracy of the simulation.
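The growth rule can be sketched as follows: if the source bias per cycle scales roughly as 1/N and the cumulative statistical error as one over the square root of the total number of histories, the next population size can be chosen to keep the bias at a fixed fraction of the statistical error. The constants below are arbitrary placeholders, not the scaling coefficients derived in the paper.

```python
# Illustrative population-growth schedule in the spirit of the method above.
# c_bias and c_stat are invented proportionality constants.

def population_schedule(n0, cycles, f=0.5, c_bias=50.0, c_stat=1.0):
    sizes, cumulative, n = [], 0, n0
    for _ in range(cycles):
        sizes.append(n)
        cumulative += n
        stat_err = c_stat / cumulative ** 0.5      # cumulative statistical error
        # next population: make bias c_bias/N equal f * stat_err, never shrinking
        n = max(n, int(c_bias / (f * stat_err)))
    return sizes

sched = population_schedule(1000, 10)
print(sched)
```

The schedule starts small for fast source iteration and grows as the accumulated statistics tighten, mirroring the on-the-fly control described in the abstract.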
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
International Nuclear Information System (INIS)
Booth, T.E.
1998-01-01
It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions which empirically produce an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution
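The zero-variance property invoked above is easy to demonstrate on a one-dimensional quadrature analogue: if samples are drawn from a density proportional to the integrand itself (the exact importance function), every history scores the same value and the variance vanishes identically. A minimal sketch, unrelated to the Case eigenfunction machinery of the paper:

```python
import random

# Estimate I = integral of f over [0, 1] with scores f(x)/p(x). Sampling
# from p proportional to f (the exact importance function) makes every
# score identical, so the sample variance is exactly zero.

def estimate(n, sampler, pdf, f, seed=5):
    rng = random.Random(seed)
    scores = [f(x) / pdf(x) for x in (sampler(rng) for _ in range(n))]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return mean, var

f = lambda x: 3 * x * x                      # integral over [0, 1] is exactly 1

# uniform sampling: unbiased but noisy
m_u, v_u = estimate(10000, lambda r: r.random(), lambda x: 1.0, f)
# sampling from p(x) = 3x^2 itself via inverse CDF x = u**(1/3): zero variance
m_z, v_z = estimate(10000, lambda r: r.random() ** (1 / 3), lambda x: 3 * x * x, f)
print(f"uniform:        mean={m_u:.4f}  var={v_u:.4f}")
print(f"zero-variance:  mean={m_z:.4f}  var={v_z:.1e}")
```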
Monte Carlo methods in electron transport problems. Pt. 1
International Nuclear Information System (INIS)
Cleri, F.
1989-01-01
The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed at potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present under development
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree within 2.3% of the global maximum dose or 1 mm distance to agreement with measurements for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing the dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. The IDC-calculated dose is found to be up to 5.6% lower than the dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
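The gamma comparison used above reduces, per reference point, to the minimum over the evaluated distribution of a combined dose-difference/distance-to-agreement metric. A minimal 1-D sketch on synthetic profiles (real QA tools operate on 2-D/3-D grids and interpolate between points):

```python
import numpy as np

# Minimal 1-D global gamma analysis with a 2% dose / 2 mm distance criterion.
# The Gaussian profiles are synthetic stand-ins for measured/calculated data.

def gamma_1d(x, measured, calculated, dd=0.02, dta=2.0):
    """Return the gamma value at each measured point.
    dd: dose criterion as a fraction of the global maximum; dta in mm."""
    d_ref = dd * measured.max()
    gammas = []
    for xi, mi in zip(x, measured):
        # capital-Gamma evaluated against every calculated point; gamma = min
        g = np.sqrt(((x - xi) / dta) ** 2 + ((calculated - mi) / d_ref) ** 2)
        gammas.append(g.min())
    return np.array(gammas)

x = np.linspace(0, 100, 201)                    # position (mm)
measured = np.exp(-((x - 50) / 15) ** 2)        # synthetic Gaussian profile
calculated = np.exp(-((x - 50.5) / 15) ** 2)    # same profile shifted 0.5 mm
g = gamma_1d(x, measured, calculated)
print(f"pass rate (gamma < 1): {np.mean(g < 1):.1%}")
```

A pure 0.5 mm shift sits well inside the 2 mm distance criterion, so every point passes; a pass rate like the 99.9% quoted above means almost all points satisfy the combined criterion.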
Monte Carlo calculations of thermodynamic properties of deuterium under high pressures
International Nuclear Information System (INIS)
Levashov, P R; Filinov, V S; BoTan, A; Fortov, V E; Bonitz, M
2008-01-01
Two different numerical approaches have been applied to calculations of shock Hugoniots and a compression isentrope of deuterium: direct path integral Monte Carlo and reactive Monte Carlo. The results show good agreement between the two methods at intermediate pressures, which indicates that dissociation effects are correctly accounted for in the direct path integral Monte Carlo method. Experimental data on both shock and quasi-isentropic compression of deuterium are well described by the calculations. Thus the dissociation of deuterium molecules in these experiments, together with the interparticle interaction, plays a significant role
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity of nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers, and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data, which are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect whereby the average cross section in groups with strong resonances can be strongly affected, as neutrons within the material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
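The self-shielding effect is easy to reproduce numerically: averaging a resonant cross section with a flat flux (infinite dilution) and with a narrow-resonance flux proportional to 1/(σ_t E) gives very different group constants. The resonance parameters below are invented for illustration, not evaluated data:

```python
import numpy as np

# Group-averaged cross section over an invented resonance under two flux
# weightings. The depression of the shielded average relative to infinite
# dilution is the self-shielding effect described above.

E = np.linspace(6.0, 7.5, 20001)                       # energy grid (eV)
sigma_pot = 10.0                                       # potential scattering (b)
sigma_res = 2.0e4 / (1.0 + ((E - 6.67) / 0.02) ** 2)   # Lorentzian resonance (b)
sigma_t = sigma_pot + sigma_res

flat = sigma_t.mean()                                  # infinite-dilution average
w = 1.0 / (sigma_t * E)                                # narrow-resonance flux weight
shielded = (sigma_t * w).sum() / w.sum()               # flux-weighted group average
print(f"infinite dilution: {flat:7.1f} b   self-shielded: {shielded:6.1f} b")
```

The flux dip at the resonance suppresses its contribution to the group average by an order of magnitude or more, which is why multigroup libraries need self-shielding corrections while continuous-energy Monte Carlo does not.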
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy
International Nuclear Information System (INIS)
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn
2008-01-01
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, due both to dose degradation and to overall differences in range prediction caused by bony anatomy in the beam path. Further, the Monte Carlo code reports dose-to-tissue, as compared to the dose-to-water reported by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans, as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation, so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm was modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The statistical fluctuation of less than 2% also indicates that the results of the authors' GPU-based implementation are in good agreement with those of the quad-core CPU implementation. This work shows that the GPU is a feasible and cost-efficient solution, compared to alternatives such as cluster machines or field-programmable gate arrays, for satisfying the increasing demands on computation speed and accuracy of dose calculation. There are, however, inherent limitations to using GPUs to accelerate MC-type applications, which are also analyzed in detail in this article.
Criticality calculation for cluster fuel bundles using Monte Carlo generated grey Dancoff factor
International Nuclear Information System (INIS)
Kim, Hyeong Heon; Cho, Nam Zin
1999-01-01
The grey Dancoff factor calculated by the Monte Carlo method is applied to criticality calculations for cluster fuel bundles. Dancoff factors for five symmetrically distinct pin positions of CANDU37 and CANFLEX fuel bundles in full three-dimensional geometry are calculated by the Monte Carlo method. The concept of an equivalent Dancoff factor is introduced in order to use the grey Dancoff factor in resonance calculations based on the equivalence theorem. The equivalent Dancoff factor, which is based on the realistic model, produces an exact fuel collision probability and can be used in the resonance calculation just as the black Dancoff factor is. The infinite multiplication factors based on the black Dancoff factors calculated by the collision probability or Monte Carlo method are overestimated by about 2 mk for the normal condition and 4 mk for the void condition of the CANDU37 and CANFLEX fuel bundles, in comparison with those based on the equivalent Dancoff factors
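The idea of estimating a Dancoff factor by Monte Carlo can be shown in the simplest possible geometry: two parallel fuel slabs separated by a moderator gap, with cosine-law emission from the fuel surface. This toy treats the fuel as black (a grey factor additionally tracks transmission through fuel) and stands in for the full three-dimensional bundle calculation:

```python
import random, math

# Monte Carlo estimate of a "black" Dancoff factor for a slab gap of optical
# thickness tau (in mean free paths): the probability that a neutron leaving
# the fuel surface with a cosine-law angular distribution crosses the gap
# without a moderator collision. Geometry and tau are illustrative.

def dancoff_mc(tau, n, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = math.sqrt(rng.random())       # cosine-law emission: p(mu) = 2*mu
        if rng.random() < math.exp(-tau / mu):
            hits += 1                      # crossed the gap without collision
    return hits / n

tau = 0.5
# reference value 2*E3(tau) by midpoint quadrature over the emission angle
mus = [(i + 0.5) / 10000 for i in range(10000)]
ref = sum(2 * mu * math.exp(-tau / mu) for mu in mus) / 10000
est = dancoff_mc(tau, 200000)
print(f"MC estimate {est:.4f} vs quadrature {ref:.4f}")
```

For realistic pin clusters the same tallying is done with rays traced through the actual three-dimensional geometry, which is exactly where Monte Carlo outpaces analytic formulas.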
Directory of Open Access Journals (Sweden)
Jingang Liang
2016-06-01
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are analyzed quantitatively based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
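The particle hand-off between domains can be sketched in one dimension: each domain tracks its own bank and hands boundary-crossing particles to the owning neighbour. The sketch below is synchronous and sequential; the asynchronous MPI exchange described above is a performance refinement of the same bookkeeping, and all physics constants here are placeholders.

```python
import random

# 1-D toy of spatial domain decomposition for MC transport: a rod of length
# 10 split into four domains, with escapers banked for the owning domain.

def track_in_domain(x, lo, hi, rng):
    """Random-walk a particle until it is absorbed or leaves [lo, hi)."""
    while True:
        x += rng.choice((-1.0, 1.0)) * rng.expovariate(2.0)  # flight to next event
        if not (lo <= x < hi):
            return "exit", x
        if rng.random() < 0.3:                 # collision: 30% chance of absorption
            return "absorbed", x

def run(n_particles, n_domains=4, length=10.0, seed=3):
    rng = random.Random(seed)
    width = length / n_domains
    banks = [[] for _ in range(n_domains)]
    banks[0] = [0.01] * n_particles            # all histories start in domain 0
    absorbed = leaked = 0
    while any(banks):                          # iterate until every bank is empty
        for d in range(n_domains):             # one "rank" per loop pass
            bank, banks[d] = banks[d], []
            for p in bank:
                fate, x = track_in_domain(p, d * width, (d + 1) * width, rng)
                if fate == "absorbed":
                    absorbed += 1
                elif 0.0 <= x < length:
                    banks[int(x // width)].append(x)   # hand off to neighbour
                else:
                    leaked += 1                # escaped the rod entirely
    return absorbed, leaked

a, l = run(1000)
print(f"absorbed={a}  leaked={l}")
```

Every history ends either absorbed or leaked, so particle conservation across the hand-offs is easy to check, which is also the key correctness property of the real communication algorithm.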
Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods
International Nuclear Information System (INIS)
Kramer, Richard
2010-01-01
Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
International Nuclear Information System (INIS)
Kramer, Richard
2011-01-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
MCNP Perturbation Capability for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.; McKinney, G.W.
1999-01-01
The differential operator perturbation capability in MCNP4B has been extended to automatically calculate perturbation estimates for the track-length estimate of keff in MCNP4B. The additional corrections required in certain cases for MCNP4B are no longer needed. Calculating the effect of small design changes on the criticality of nuclear systems with MCNP is now straightforward
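The differential-operator idea is easier to see on a much simpler tally than keff. The sketch below is a hypothetical toy, not MCNP's implementation: it estimates the transmission of a purely absorbing slab together with its derivative with respect to the total cross section in a single run, so the effect of a small cross-section change can be predicted without a second simulation.

```python
import math
import random

def transmission_and_derivative(sigma, thickness, n_histories, seed=1):
    """Analog transmission tally plus its differential-operator
    (likelihood-ratio) derivative with respect to sigma:
    E[score] = exp(-sigma*T) and E[dscore] = -T * exp(-sigma*T)."""
    rng = random.Random(seed)
    tally = 0.0
    dtally = 0.0
    for _ in range(n_histories):
        path = rng.expovariate(sigma)   # free flight in a pure absorber
        if path > thickness:            # the particle crosses the slab
            tally += 1.0
            dtally += -thickness        # d/dsigma ln P(transmission)
    return tally / n_histories, dtally / n_histories

t, dt = transmission_and_derivative(1.0, 2.0, 200000)
dsigma = 0.05
predicted = t + dt * dsigma              # first-order perturbation estimate
exact = math.exp(-(1.0 + dsigma) * 2.0)  # what a rerun would converge to
```

The derivative score is the logarithmic derivative of the history probability times the analog score, the same first-order Taylor bookkeeping the MCNP perturbation feature applies, there to the track-length keff estimator.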
Neutron and gamma ray transport calculations in shielding system
Energy Technology Data Exchange (ETDEWEB)
Masukawa, Fumihiro; Sakamoto, Hiroki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In radiation shields at nuclear facilities, penetrations of various kinds and irregular shapes are unavoidable for reasons of operation and control. These penetrations and gaps are filled with air or with materials of comparatively low shielding performance, and radiation leaks out through them, a phenomenon called streaming. Shielding design and analysis of streaming problems can be approached by simplified evaluation, by transport calculations, or by the Monte Carlo method. This report discusses example calculations with the Monte Carlo method, as represented by the MCNP code. A number of variance reduction techniques that appeared effective for the analysis of streaming problems were tried. To investigate the applicability of the MCNP code to streaming analysis, the report describes the objects of analysis (concrete walls without a hole and with a horizontal hole, an oblique hole, and a bent oblique hole), the analysis procedure, the composition of the concrete, the dose-equivalent conversion coefficients, and the results of the analysis. As the variance reduction technique, cell importance was adopted. (K.I.)
Deep-penetration calculation for the ISIS target station shielding using the MARS Monte Carlo code
International Nuclear Information System (INIS)
Nunomiya, Tomoya; Iwase, Hiroshi; Nakamura, Takashi; Nakao, Noriaki
2002-03-01
A calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code, for comparison with the experimental shielding data taken in 1998 at the ISIS spallation neutron source facility. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation with good statistics, the following three techniques were used in this study. First, the geometry of the bulk shield was three-dimensionally divided into several layers of about 50-cm thickness, and a step-by-step calculation was carried out to multiply the number of penetrated particles at the boundaries between the layers. Second, the source particles in the layers were divided into two parts to maintain the statistical balance of the spatial flux distribution. Third, only high-energy particles above 20 MeV were transported up to approximately 1 m before the region for the benchmark calculation. Finally, the energy spectra of neutrons behind the very thick shield were calculated down to thermal energy with good statistics, and typically agree well, within a factor of two, with the experimental data over a broad energy range. The 12C(n,2n)11C reaction rates behind the bulk shield were also calculated, and agree with the experimental data typically within 60%. These results demonstrate good calculation accuracy for a deep-penetration problem. In this report, the calculation conditions, geometry and the variance reduction techniques used in the deep-penetration calculation with the MARS14 code are clarified, and several subroutines of MARS14 which were used in our calculation are also given in the appendix. The numerical data of the calculated neutron energy spectra, reaction rates, dose rates and their C/E (Calculation/Experiment) values are also summarized. The
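The layer-by-layer population-multiplication idea can be sketched in one dimension. The toy below is my own illustration, not the MARS implementation: every particle that survives a layer is split into several lower-weight copies at the boundary, keeping the population roughly constant so the deep-penetration tally retains good statistics.

```python
import random

def layered_penetration(p_survive, n_layers, n_source, seed=2):
    """Estimate the deep-penetration probability p_survive**n_layers.
    Each particle surviving a layer (analog absorption test) is split at
    the boundary into `split` copies carrying weight w/split, so the
    population does not die out exponentially with depth."""
    split = max(1, round(1.0 / p_survive))
    rng = random.Random(seed)
    bank = [1.0] * n_source                 # statistical weights
    for _ in range(n_layers):
        next_bank = []
        for w in bank:
            if rng.random() < p_survive:    # survives this layer
                next_bank.extend([w / split] * split)
        bank = next_bank
    return sum(bank) / n_source

estimate = layered_penetration(0.2, 8, 20000)
# an analog run of the same source size would expect only ~0.05 surviving
# histories (20000 * 0.2**8); with splitting, thousands of weighted
# particles reach the back face
```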
International Nuclear Information System (INIS)
Androsenko, A.A.; Androsenko, P.A.; Kagalenko, I.Eh.; Mironovich, Yu.N.
1992-01-01
Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte Carlo method, taking into account data on the adjoint transport equation solution. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free-path-length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1 approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by the solution of two model problems. 4 refs.; 2 tabs
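Inverting a piecewise-linear density has a closed form: pick a segment in proportion to its trapezoidal area, then solve a quadratic for the position inside it. The sketch below is a generic implementation of that inversion (names and structure are mine, not BRAND's).

```python
import bisect
import math
import random

def sample_piecewise_linear(xs, fs, rng):
    """Sample from an (unnormalised) piecewise-linear density: fs[i] is the
    density at breakpoint xs[i], linear in between. Inversion: choose a
    segment in proportion to its trapezoidal area, then invert the
    quadratic CDF inside that segment."""
    areas = [(fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
             for i in range(len(xs) - 1)]
    cum, s = [], 0.0
    for a in areas:
        s += a
        cum.append(s)
    u = rng.random() * s
    i = min(bisect.bisect_left(cum, u), len(cum) - 1)
    u -= cum[i] - areas[i]              # area already consumed before segment i
    x0, f0 = xs[i], fs[i]
    h = xs[i + 1] - xs[i]
    slope = (fs[i + 1] - f0) / h
    if abs(slope) < 1e-12:              # flat segment: linear inversion
        return x0 + u / f0
    # Solve f0*t + slope*t**2/2 = u for t in [0, h]
    t = (-f0 + math.sqrt(f0 * f0 + 2.0 * slope * u)) / slope
    return min(x0 + t, xs[i + 1])       # clamp float round-off at the edge

rng = random.Random(5)
draw = sample_piecewise_linear([0.0, 1.0], [0.0, 2.0], rng)  # triangular density
```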
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay. It aims at producing neutron beams for experiments. It is a pool-type (heavy water) reactor, and the core is cooled by light water. Its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27 x 27 cm2. It is loaded with eight fuel assemblies characterized by various numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback at nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® for calculating a complex core configuration using a large number of depleting regions with a high level of confidence.
Three-dimensional Monte Carlo calculation of some nuclear parameters
Günay, Mehtap; Şeker, Gökmen
2017-09-01
In this study, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the system. A beryllium (Be) zone 3 cm wide was placed between the liquid first wall and the blanket for neutron multiplication. This study analyzes nuclear parameters such as the tritium breeding ratio (TBR), the energy multiplication factor (M), the heat deposition rate and the fission reaction rate in the liquid first wall, blanket and shield zones, and investigates the effect of the reactor-grade Pu content on these parameters in the designed system. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
Monte Carlo simulation of light fluence calculation during pleural PDT
Meo, Julia L.; Zhu, Timothy
2013-03-01
A thorough understanding of light distribution in the target tissue is necessary for accurate light dosimetry in PDT. Solving the light-dose problem depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries exhibit what is known as the "integrating sphere effect", where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on spherical and elliptical cavity geometries of various dimensions. The tissue optical properties as well as the non-scattering medium (air or water) were varied. We also introduced small absorption inside the cavity to simulate the effect of blood absorption. We expanded the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations were run for a variety of cavity optical properties determined using spectroscopic methods. We conclude from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.
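The "integrating sphere effect" has a simple analytic counterpart: for an ideal diffusely reflecting cavity, summing the geometric series of wall bounces gives an average scattered fluence rate proportional to 1/(A(1 - rho)). The snippet below encodes that textbook relation as a rough check on the 1/(surface area) scaling; it is not the authors' MC model.

```python
import math

def cavity_fluence_rate(power, area, reflectance):
    """Average multiply-scattered fluence rate inside an ideal diffusely
    reflecting cavity: successive wall bounces form a geometric series,
    (P/A) * (rho + rho**2 + ...) = P * rho / (A * (1 - rho))."""
    return power * reflectance / (area * (1.0 - reflectance))

sphere_area = 4.0 * math.pi * 0.10 ** 2           # 10 cm radius sphere, m^2
phi = cavity_fluence_rate(1.0, sphere_area, 0.6)  # 1 W source, 60% reflectance
```

Doubling the cavity surface area at fixed source power and reflectance halves the multiply-scattered fluence rate, consistent with the inverse-area behaviour reported in the abstract.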
Geometry modeling for SAM-CE Monte Carlo calculations
International Nuclear Information System (INIS)
Steinberg, H.A.; Troubetzkoy, E.S.
1980-01-01
Three geometry packages have been developed and incorporated into SAM-CE for representing the transport medium in three dimensions. These are combinatorial geometry, a general (non-lattice) system; complex combinatorial geometry, a very general system with lattice capability; and special reactor geometry, a special-purpose system for light water reactor geometries. Their different attributes are described
Exploring the use of a deterministic adjoint flux calculation in criticality Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.; Miss, J.; Richet, Y.; Martin, N.; Hebert, A.
2011-01-01
The paper presents a preliminary study on the use of a deterministic adjoint flux calculation to improve source convergence by reducing the number of iterations needed to reach the converged distribution in criticality Monte Carlo calculations. Slow source convergence in Monte Carlo eigenvalue calculations may lead to underestimation of the effective multiplication factor or of reaction rates. The convergence speed depends on the initial distribution and the dominance ratio. We propose using an adjoint flux estimate to modify the transition kernel according to the importance sampling technique. This adjoint flux is also used as the initial guess of the first-generation distribution for the Monte Carlo simulation. The calculated variance of a local current estimator is checked. (author)
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry
DEFF Research Database (Denmark)
Braad, Poul-Erik; Andersen, Thomas; Hansen, Søren Baarsgaard
2016-01-01
Purpose: CT images are used for patient specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. Methods: CT errors...... calibration of the CT number-to-density conversion ramp. Tissue segmentation by a 13-tissue CT conversion ramp, calibrated by a stoichiometric method, resulted in low (isotopes. Conclusions: A calibrated CT scanner specific conversion ramp is required for accurate...
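A CT-number-to-density conversion ramp of the kind discussed here is, mechanically, a piecewise-linear lookup. The breakpoints below are illustrative placeholders, not the authors' calibrated 13-tissue ramp; as the abstract stresses, a real ramp must come from a scanner-specific stoichiometric calibration.

```python
# Illustrative breakpoints (HU, g/cm3) -- placeholder values only; a real
# ramp must come from a scanner-specific stoichiometric calibration.
HU_PTS = [-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0]
RHO_PTS = [0.00121, 0.93, 1.0, 1.09, 1.6, 2.8]

def hu_to_density(hu):
    """Piecewise-linear CT-number-to-mass-density conversion, clamped at
    the ends of the calibration ramp."""
    if hu <= HU_PTS[0]:
        return RHO_PTS[0]
    if hu >= HU_PTS[-1]:
        return RHO_PTS[-1]
    for i in range(len(HU_PTS) - 1):
        if hu <= HU_PTS[i + 1]:
            frac = (hu - HU_PTS[i]) / (HU_PTS[i + 1] - HU_PTS[i])
            return RHO_PTS[i] + frac * (RHO_PTS[i + 1] - RHO_PTS[i])
```

In a Monte Carlo dose engine each voxel's HU value is passed through such a ramp (plus a material assignment) before transport.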
Bécares, V.; Pérez Martín, S.; Vázquez Antolín, Miriam; Villamarín, D.; Martín Fuertes, Francisco; González Romero, E.M.; Merino Rodríguez, Iván
2014-01-01
The calculation of the effective delayed neutron fraction, beff , with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for beff without the need of explicitly determining the adjoint flux. In this paper, we make a review of some of these techniques; namely we have analyzed two variants of what we...
Energy Technology Data Exchange (ETDEWEB)
O' Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
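Assigning replication levels proportional to per-domain particle workload is an apportionment problem. The sketch below is my own illustration of one standard way to do it (largest-remainder apportionment), not the authors' code.

```python
def replication_levels(workloads, n_procs):
    """Apportion n_procs processors across spatial domains in proportion to
    their particle workloads (largest-remainder method), guaranteeing at
    least one processor per domain. If many tiny domains each claim the
    minimum, the total can exceed n_procs; a real code must then rebalance."""
    total = float(sum(workloads))
    ideal = [w / total * n_procs for w in workloads]
    levels = [max(1, int(x)) for x in ideal]
    leftover = n_procs - sum(levels)
    # Hand leftover processors to the largest fractional remainders first.
    order = sorted(range(len(workloads)),
                   key=lambda i: ideal[i] - int(ideal[i]), reverse=True)
    for i in order:
        if leftover <= 0:
            break
        levels[i] += 1
        leftover -= 1
    return levels

levels = replication_levels([100, 300, 600], 10)
```

Domains with more replicas then distribute their particles across the replica group, which is what makes coupling adjacent domains with different replication levels non-trivial.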
International Nuclear Information System (INIS)
Gifford, K; Horton, J; Steger, T; Heard, M; Jackson, E; Ibbott, G
2004-01-01
The goal of this work is to calculate the effect of including the anterior and posterior ovoid shields on the dose distribution around a Fletcher Suit Delclos (FSD) ovoid (Nucletron Trading BV, Leersum, Netherlands) and verify these calculations with normoxic polymer gel dosimetry. To date, no Monte Carlo results verified with dosimetry have been published for this ovoid
Towards real-time photon Monte Carlo dose calculation in the cloud
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of GPUs, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and a liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative to the currently used GPU or cluster solutions for near real-time accurate dose calculations.
International Nuclear Information System (INIS)
Devine, R.T.; Hsu, Hsiao-Hua
1994-01-01
The current basis for conversion coefficients for calibrating individual photon dosimeters in terms of dose equivalents is found in the series of papers by Grosswendt. In his calculation the collision kerma inside the phantom is determined by calculation of the energy fluence at the point of interest and the use of the mass energy absorption coefficient. This approximates the local absorbed dose. Other Monte Carlo methods can be used to provide calculations of the conversion coefficients. Rogers has calculated fluence-to-dose equivalent conversion factors with the Electron-Gamma Shower Version 3, EGS3, Monte Carlo program and produced results similar to Grosswendt's calculations. This paper will report on calculations using the Integrated TIGER Series Version 3, ITS3, code to calculate the conversion coefficients in ICRU Tissue and in PMMA. A complete description of the input parameters to the program is given and a comparison to previous results is included
Acceleration of a Monte Carlo radiation transport code
International Nuclear Information System (INIS)
Hochstedler, R.D.; Smith, L.M.
1996-01-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. copyright 1996 American Institute of Physics
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error
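Sampling from a signed density follows a standard recipe: sample locations from |f| normalised to one, and fold the sign and the L1 norm into the statistical weight. A generic discrete sketch (my construction, not the paper's scheme):

```python
import bisect
import random

def sample_signed(points, f_values, n, rng):
    """Sample source sites from a signed discrete density f: draw from
    |f| / ||f||_1 and attach the weight sign(f) * ||f||_1, so that the
    weighted mean of any score g reproduces sum_i f_i * g(x_i)."""
    norm = sum(abs(f) for f in f_values)
    cum, s = [], 0.0
    for f in f_values:
        s += abs(f) / norm
        cum.append(s)
    samples = []
    for _ in range(n):
        i = min(bisect.bisect_left(cum, rng.random()), len(cum) - 1)
        w = norm if f_values[i] > 0.0 else -norm
        samples.append((points[i], w))
    return samples

rng = random.Random(3)
pts, f = [0.0, 1.0, 2.0], [0.5, -0.2, 0.7]
draws = sample_signed(pts, f, 50000, rng)
# unbiased estimate of sum_i f_i * x_i = 0.5*0 - 0.2*1 + 0.7*2 = 1.2
est = sum(w * x for x, w in draws) / len(draws)
```

This plain recipe only guarantees unbiasedness; preserving the zero-variance property on top of it requires the modified estimators described in the abstract.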
Energy Technology Data Exchange (ETDEWEB)
Boudou, C
2006-09-15
High-grade gliomas are extremely aggressive brain tumours. Specific techniques have been proposed that combine the presence of high-atomic-number elements within the tumour with irradiation by a low-energy x-ray beam (below 100 keV) from a synchrotron source. With clinical trials in view, the use of a treatment planning system has to be foreseen, as well as tailored dosimetry protocols. The objectives of this thesis work were (1) the development of a dose calculation tool based on a Monte Carlo particle transport code and (2) the implementation of an experimental method for three-dimensional verification of the delivered dose. The dosimetric tool is an interface between tomography images of the patient or sample and the MCNPX general-purpose code. In addition, dose distributions were measured with a radiosensitive polymer gel, providing acceptable results compared to calculations.
Directory of Open Access Journals (Sweden)
Diego Ferraro
2011-01-01
Full Text Available Monte Carlo neutron transport codes are usually used to perform criticality calculations and to solve shielding problems due to their capability to model complex systems without major approximations. However, these codes demand high computational resources. The improvement in computer capabilities has led to several new applications of Monte Carlo neutron transport codes. An interesting one is to use this method to perform cell-level fuel assembly calculations in order to obtain few-group constants to be used in core calculations. In the present work, the recently developed VTT cell-oriented neutronic calculation code Serpent v.1.1.7 is used to perform cell calculations of a theoretical BWR lattice benchmark with burnable poisons, and the main results are compared with reported ones and with calculations performed with Condor v.2.61, INVAP's neutronic collision probability cell code.
Non-periodic pseudo-random numbers used in Monte Carlo calculations
International Nuclear Information System (INIS)
Barberis, Gaston E.
2007-01-01
The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which makes it possible to repeat any calculation systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated
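The logistic-map generator described here is easy to reproduce. For the fully chaotic map x -> 4x(1 - x) the invariant density is 1/(pi*sqrt(x(1 - x))), so the transformation u = (2/pi) arcsin(sqrt(x)) yields uniformly distributed values. The starting point and the reseed guard below are my own defensive choices, not from the paper.

```python
import math

def logistic_uniforms(n, x0=0.3):
    """Non-periodic pseudo-random uniforms from the fully chaotic logistic
    map x -> 4x(1-x). Its invariant density is 1/(pi*sqrt(x*(1-x))), so
    u = (2/pi)*asin(sqrt(x)) maps the iterates to uniform (0, 1) values."""
    x = x0
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        if x <= 0.0 or x >= 1.0:    # guard the fixed points of finite precision
            x = 0.3141592653589793  # defensive reseed; practically never fires
        out.append(2.0 / math.pi * math.asin(math.sqrt(x)))
    return out

u = logistic_uniforms(100000)
```

The transformed sequence is linearly uncorrelated, consistent with the abstract's claim, but successive iterates remain functionally dependent (x_{n+1} is determined by x_n), so multi-dimensional sample points should be built with decimation or from independent sub-streams.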
Non-periodic pseudo-random numbers used in Monte Carlo calculations
Barberis, Gaston E.
2007-09-01
The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which makes it possible to repeat any calculation systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.
Directory of Open Access Journals (Sweden)
Chapoutier Nicolas
2017-01-01
Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. From now on, engineering teams can deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald
2017-09-01
In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. From now on, engineering teams can deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Monte Carlo study in the mechanisms of transport of fast neutrons in various media
International Nuclear Information System (INIS)
Ku, L.
1976-01-01
The life histories of fast neutrons generated by the straight Monte Carlo method in various attenuating media were examined. The media studied range from one with simple, featureless properties (Na) to iron, with its very complicated cross-section structure. The life histories of exceptional neutrons, i.e. those staying very close to the source or those travelling very far from it, were compared with those of the general population. When the exceptional neutrons exploited a particular collision property in a narrow energy band in order to reach a given detector, the method of analyzing Monte Carlo histories was able to provide a clear physical picture and single out the influence of that property on the macroscopic behavior of the neutrons. Two such phenomena were demonstrated using this technique. In one, transport in a cross-section minimum dominates the deep penetration of the neutrons: most of the spatial transport is accomplished by travel at energies in and near the minimum, while little transport occurs at other energies. The second example involves the effect of inelastic scattering on the low-energy leakage spectra of small bare assemblies. It is shown that, for a small bare iron sphere and a fission source, the exit current spectrum below 100 keV is extremely sensitive to the details of the inelastic scattering near threshold. It often happened that in some exceptional situations the number of histories available for the analysis was too few to give statistically significant results. The most important conclusion to be drawn is that the analysis of Monte Carlo histories can provide information on the details of transport mechanisms that is not available through forward or even adjoint deterministic transport calculations. 47 figures, 21 tables
Feasibility study on embedded transport core calculations
International Nuclear Information System (INIS)
Ivanov, B.; Zikatanov, L.; Ivanov, K.
2007-01-01
The main objective of this study is to develop an advanced core calculation methodology based on embedded diffusion and transport calculations. The scheme proposed in this work is based on an embedded diffusion or SP3 pin-by-pin local fuel assembly calculation within the framework of the Nodal Expansion Method (NEM) diffusion core calculation. The SP3 method has gained popularity in the last 10 years as an advanced method for neutronics calculation. NEM is a multi-group nodal diffusion code developed, maintained and continuously improved at the Pennsylvania State University. The developed calculation scheme is a non-linear iteration process, which involves cross-section homogenization, on-line discontinuity factor generation, and evaluation of the boundary conditions passed from the global solution to the local calculation. In order to accomplish the local calculation, a new code has been developed based on the Finite Element Method (FEM), which is capable of performing both diffusion and SP3 calculations. The new code is used in the framework of the NEM code in order to perform embedded pin-by-pin diffusion and SP3 calculations on a fuel assembly basis. The development of the diffusion and SP3 FEM code is presented first, followed by its application to several problems. A description of the proposed embedded scheme is provided next, as well as the preliminary results obtained for the C3 MOX benchmark. The results from the embedded calculations are compared with direct pin-by-pin whole-core calculations in terms of accuracy and efficiency, followed by conclusions about the feasibility of the proposed embedded approach. (authors)
Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A
2011-01-01
Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...
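A dose point kernel is scored by binning deposited energy into concentric spherical shells around the source and dividing by shell mass. The sketch below keeps that scoring structure but replaces FLUKA's condensed-history electron transport with a deliberately crude toy (each particle deposits all its energy at one exponentially distributed depth), so only the tallying logic should be taken literally.

```python
import math
import random

def toy_dpk(n_histories, e0=1.0, mfp=0.05, r_max=0.3, n_bins=30, seed=4):
    """Score a dose point kernel: tally energy deposited in concentric
    spherical shells around a point isotropic source and divide by shell
    mass (unit density assumed). Transport is a deliberate toy: each
    particle deposits all its energy at one exponentially distributed
    depth, standing in for a real condensed-history electron track."""
    rng = random.Random(seed)
    dr = r_max / n_bins
    edep = [0.0] * n_bins
    for _ in range(n_histories):
        r = rng.expovariate(1.0 / mfp)          # deposition radius
        if r < r_max:
            edep[int(r / dr)] += e0
    kernel = []
    for i in range(n_bins):
        r_in, r_out = i * dr, (i + 1) * dr
        shell_mass = 4.0 / 3.0 * math.pi * (r_out ** 3 - r_in ** 3)
        kernel.append(edep[i] / (n_histories * shell_mass))
    return kernel

dpk = toy_dpk(50000)
```

Because the shell mass grows as r^2 dr, the kernel falls steeply with radius even for a flat deposition profile; benchmark comparisons such as the one in this paper plot scaled DPKs to factor that geometry out.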
A method for transient, three-dimensional neutron transport calculations
Energy Technology Data Exchange (ETDEWEB)
Waddell, M.W. Jr. (Oak Ridge Y-12 Plant, TN (United States)); Dodds, H.L. (Tennessee Univ., Knoxville, TN (United States))
1992-12-28
This paper describes the development and evaluation of a method for solving the time-dependent, three-dimensional Boltzmann transport model with explicit representation of delayed neutrons. A hybrid stochastic/deterministic technique is utilized with a Monte Carlo code embedded inside of a quasi-static kinetics framework. The time-dependent flux amplitude, which is usually fast varying, is computed deterministically by a conventional point kinetics algorithm. The point kinetics parameters, reactivity and generation time as well as the flux shape, which is usually slowly varying in time, are computed stochastically during the random walk of the Monte Carlo calculation. To verify the accuracy of this new method, several computational benchmark problems from the Argonne National Laboratory benchmark book, ANL-7416, were calculated. The results are shown to be in reasonably good agreement with other independently obtained solutions. The results obtained in this work indicate that the method/code is working properly and that it is economically feasible for many practical applications provided a dedicated high performance workstation is available.
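The deterministic half of the quasi-static scheme described above is a point-kinetics solve for the fast-varying amplitude. Below is a minimal one-delayed-group forward-Euler version; the parameter values (beta, lam, Lambda) are illustrative assumptions, not taken from the paper.

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, big_lambda=1e-4,
                   t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics integrated with forward Euler:
        dn/dt = (rho - beta) / Lambda * n + lam * c
        dc/dt = beta / Lambda * n - lam * c
    starting from the critical equilibrium n = 1."""
    n = 1.0
    c = beta * n / (lam * big_lambda)   # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / big_lambda * n + lam * c) * dt
        dc = (beta / big_lambda * n - lam * c) * dt
        n += dn
        c += dc
    return n

# rho = 0 keeps the amplitude at 1; a small positive step reactivity gives
# the familiar prompt jump followed by a slow exponential rise.
```

In the quasi-static method, the reactivity, generation time and delayed-neutron parameters fed into these equations are precisely the quantities scored stochastically during the Monte Carlo shape calculation.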
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jonathan A., E-mail: walshjon@mit.edu [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Palmer, Todd S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, Todd J. [XTD-IDA: Theoretical Design, Integrated Design and Assessment, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2015-12-15
Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
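The moment-based idea behind such discrete cross sections can be sketched in miniature (this is not the authors' Gauss-Radau procedure, which uses many more points; it is the textbook construction of a degree-2 Gauss rule from numerically computed moments):

```python
import math

def two_point_discrete(m0, m1, m2, m3):
    """Build a 2-point discrete distribution that exactly preserves the
    first four moments m0..m3 of a continuous distribution: the nodes are
    the roots of the monic degree-2 orthogonal polynomial x^2 + b*x + c
    of the moment functional, and the weights match m0 and m1."""
    det = m0 * m2 - m1 * m1
    b = (m1 * m2 - m0 * m3) / det
    c = (m1 * m3 - m2 * m2) / det
    disc = math.sqrt(b * b - 4.0 * c)
    x1, x2 = (-b - disc) / 2.0, (-b + disc) / 2.0  # discrete angles/energies
    w2 = (m1 - m0 * x1) / (x2 - x1)                # weights from m0, m1
    w1 = m0 - w2
    return (x1, w1), (x2, w2)
```

For the uniform density on [-1, 1] (moments 1, 0, 1/3, 0) this recovers the 2-point Gauss-Legendre nodes ±1/√3 with equal weights, i.e. a discrete representation that reproduces the moments exactly.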
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2006-01-01
This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport and on a stochastic calculation, where risk analysis (RA) is carried out using Monte Carlo Simulation (MCS). After a description of the deterministic and stochastic calculations, emphasis is paid to the RA part of CBA-DK, with considerations about which probability distributions to make use of. Furthermore, a comprehensive...
Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation
International Nuclear Information System (INIS)
Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M
2004-01-01
The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams
Energy Technology Data Exchange (ETDEWEB)
Koch, Nicholas; Newhauser, Wayne D; Titt, Uwe; Starkschall, George [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030 (United States); Gombos, Dan [Section of Ophthalmology, Department of Head and Neck Surgery MDACC Unit 441 (United States); Coombes, Kevin [Graduate School of Biomedical Sciences, University of Texas Health Science Center, 6767 Bertner Avenue, Houston, TX 77030 (United States)], E-mail: kochn@musc.edu
2008-03-21
The treatment of uveal melanoma with proton radiotherapy has provided excellent clinical outcomes. However, contemporary treatment planning systems use simplistic dose algorithms that limit the accuracy of relative dose distributions. Further, absolute predictions of absorbed dose per monitor unit are not yet available in these systems. The purpose of this study was to determine whether Monte Carlo methods could predict the dose per monitor unit (D/MU) value at the center of a proton spread-out Bragg peak (SOBP) to within 1% of measured values for a variety of treatment fields relevant to ocular proton therapy. The MCNPX Monte Carlo transport code, in combination with realistic models of the ocular beam delivery apparatus and a water phantom, was used to calculate dose distributions and D/MU values, which were verified against measurements. Measured proton beam data included central-axis depth-dose profiles, relative cross-field profiles, and absolute D/MU measurements under several combinations of beam penetration ranges and range-modulation widths. The Monte Carlo method predicted D/MU values that agreed with measurement to within 1%, and dose profiles that agreed with measurement to within 3% of peak dose or within 0.5 mm distance-to-agreement. Lastly, a demonstration of the clinical utility of this technique included calculations of dose distributions and D/MU values in a realistic model of the human eye. It is possible to predict D/MU values accurately for clinically relevant range-modulated proton beams for ocular therapy using the Monte Carlo method. It is thus feasible to use the Monte Carlo method as a routine absolute dose algorithm for ocular proton therapy.
Molecular transport calculations with Wannier Functions
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2005-01-01
We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane-wave electronic structure method to calculate the eigenstates, which are subsequently transformed into a set of localized Wannier functions (WFs). The WFs provide a highly efficient basis set which at the same time is well suited for analysis due to the chemical information contained in the WFs. The method...
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
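Monte Carlo transport parallelizes naturally because histories are independent: each rank simulates a batch with its own random number stream and the partial tallies are reduced at the end. A hedged sketch of that pattern (emulated serially here with a toy tally; in a real MPI code each batch would run on one rank and the combine step would be an MPI_Reduce over the three partial sums):

```python
import random

def simulate_batch(n_histories, seed):
    """Toy 'transport' kernel: each history scores one exponentially
    distributed tally contribution. Stands in for a per-rank batch with
    its own RNG stream; returns the partial sums (n, sum, sum_sq)."""
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n_histories):
        x = rng.expovariate(1.0)  # placeholder for a tally contribution
        s += x
        s2 += x * x
    return n_histories, s, s2

def reduce_tallies(batches):
    """Combine per-rank partial sums into a global mean and standard
    error of the mean, exactly as a reduction over ranks would."""
    n = sum(b[0] for b in batches)
    s = sum(b[1] for b in batches)
    s2 = sum(b[2] for b in batches)
    mean = s / n
    var_of_mean = (s2 / n - mean * mean) / (n - 1)
    return mean, var_of_mean ** 0.5

batches = [simulate_batch(20000, seed) for seed in (1, 2, 3, 4)]  # 4 'ranks'
mean, err = reduce_tallies(batches)
```

Because only the three partial sums cross rank boundaries, communication cost is constant in the number of histories, which is why speedups stay near-linear for large problems.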
Calculations of neoclassical impurity transport in stellarators
Mollén, Albert; Smith, Håkan M.; Langenberg, Andreas; Turkin, Yuriy; Beidler, Craig D.; Helander, Per; Landreman, Matt; Newton, Sarah L.; García-Regaña, José M.; Nunami, Masanori
2017-10-01
The new stellarator Wendelstein 7-X has finished its first operational campaign and is restarting operation in the summer of 2017. To demonstrate that the stellarator concept is a viable candidate for a fusion reactor and to allow for long pulse lengths of 30 min, i.e. ``quasi-stationary'' operation, it will be important to avoid central impurity accumulation, which is typically governed by the radial neoclassical transport. The SFINCS code has been developed to calculate neoclassical quantities such as the radial collisional transport and the ambipolar radial electric field in 3D magnetic configurations. SFINCS is a cutting-edge numerical tool which combines several important features: the ability to model an arbitrary number of kinetic plasma species, the full linearized Fokker-Planck collision operator for all species, and the ability to calculate and account for the variation of the electrostatic potential on flux surfaces. In the present work we use SFINCS to study neoclassical impurity transport in stellarators. We explore how flux-surface potential variations affect the radial particle transport, and how the radial electric field is modified by non-trace impurities and flux-surface potential variations.
Energy Technology Data Exchange (ETDEWEB)
Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)
2002-06-21
The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision instrument for treating intracranial lesions. Radiation from a single {sup 60}Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet were calculated for photons and electrons. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors using the EGS4 Monte Carlo code, and with data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and the results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)
Effects of changing the random number stride in Monte Carlo calculations
International Nuclear Information System (INIS)
Hendricks, J.S.
1991-01-01
A common practice in Monte Carlo radiation transport codes is to start each random walk a specified number of steps up the random number sequence from the previous one. This step count is called the stride in the random number sequence between source particles. It is used for correlated sampling or to provide tree-structured random numbers. A new random number generator algorithm for the major Monte Carlo code MCNP has been written to allow adjustment of the random number stride. This random number generator is machine portable. The effects of varying the stride for several sample problems are examined.
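For a multiplicative linear congruential generator, jumping a whole stride ahead does not require generating every intermediate number: since x_{n+k} = g^k x_n (mod m), the jump is a single modular exponentiation. The constants below follow the widely documented historical MCNP generator (48-bit modulus, multiplier 5^19, default stride 152917), but treat them as illustrative rather than a statement about any particular MCNP version:

```python
M = 1 << 48          # modulus of the 48-bit generator
G = 5 ** 19          # multiplier, as commonly cited for the historical MCNP LCG
STRIDE = 152917      # default stride between source particles

def step(x):
    """Advance the multiplicative LCG by one number."""
    return (G * x) % M

def skip_ahead(x, k):
    """Jump k numbers up the sequence in O(log k) multiplies:
    x_{n+k} = G^k * x_n (mod M)."""
    return (pow(G, k, M) * x) % M

def source_seed(initial_seed, particle_index):
    """Starting seed of source particle i, one stride apart each time,
    so every history owns a reproducible sub-sequence."""
    return skip_ahead(initial_seed, particle_index * STRIDE)
```

The per-particle seeds make histories reproducible independently of execution order, which is what enables correlated sampling and parallel runs that agree with serial ones.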
Evaluation of Monte Carlo Codes Regarding the Calculated Detector Response Function in NDP Method
Energy Technology Data Exchange (ETDEWEB)
Tuan, Hoang Sy Minh; Sun, Gwang Min; Park, Byung Gun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
The basis of NDP is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles from neutron-induced exoergic charged-particle reactions. Neutrons interact with the nuclei of elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. A depth profile of the analyzed element can be obtained by a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and a depth resolution on the order of 10 nm can be obtained, depending on the material type. In the NDP method, one of the first steps of the analytical process is a channel-energy calibration. This calibration is normally made with an experimental measurement of the NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were used to calculate the Si detector response function as the detector counted the energetic charged particles emitted from an analytical sample. In addition, these MC codes were also used to calculate the depth distributions of some light elements ({sup 10}B, {sup 3}He, {sup 6}Li, etc.) in SRM-93a and SRM-2137 samples. The calculated profiles were compared with the experimental profiles and SIMS profiles. In this study, some popular MC neutron transport codes were tried and tested to calculate the detector response function in the NDP method. The simulations were modeled on the real CN-NDP system, which is part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum: the net area difference between the measured and predicted alpha peaks is less than 1%. A possible explanation for the remaining discrepancies is the use of a poor cross-section data set in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.
Energy Technology Data Exchange (ETDEWEB)
Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)
2005-07-01
We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculation in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. However, the huge number of particles necessary to get a correct answer makes this computation time consuming. Thus, we have derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one, but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)
Monte Carlo calculation of received dose from ingestion and inhalation of natural uranium
International Nuclear Information System (INIS)
Trobok, M.; Zupunski, Lj.; Spasic-Jokic, V.; Gordanic, V.; Sovilj, P.
2009-01-01
For the purpose of this study, eighty samples were taken from the areas of Bela Crkva and Vrsac. The activity of radionuclides in the soil was determined by gamma-ray spectrometry. The Monte Carlo method was used to calculate the effective dose received by the population resulting from the inhalation and ingestion of natural uranium. The estimated doses were compared with the legally prescribed levels. (author) [sr
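The Monte Carlo part of such a dose assessment amounts to propagating the uncertainty in intake through fixed dose coefficients. A minimal sketch, with every number an assumption for illustration (the dose coefficients are merely of the order of published ICRP values for natural uranium, and the lognormal intakes are invented, not data from this study):

```python
import math, random

# Illustrative committed effective dose coefficients, Sv/Bq (assumed values,
# only order-of-magnitude representative of ICRP figures for natural uranium).
DC_INGESTION = 4.5e-8
DC_INHALATION = 8.0e-6

def annual_dose_samples(n=20000, seed=42):
    """Monte Carlo propagation of lognormal intake uncertainty to annual
    effective dose: dose = ingested*DC_ing + inhaled*DC_inh per trial."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        ingested = rng.lognormvariate(math.log(50.0), 0.4)  # Bq/year (assumed)
        inhaled = rng.lognormvariate(math.log(0.5), 0.4)    # Bq/year (assumed)
        doses.append(ingested * DC_INGESTION + inhaled * DC_INHALATION)
    return doses

doses = annual_dose_samples()
mean_dose = sum(doses) / len(doses)
p95 = sorted(doses)[int(0.95 * len(doses))]  # percentile for comparison with limits
```

The sampled distribution, rather than a single point estimate, is what gets compared against a legally prescribed level.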
Widder, Joachim; Hollander, Miranda; Ubbels, Jan F.; Bolt, Rene A.; Langendijk, Johannes A.
Purpose: To define a method of dose prescription employing Monte Carlo (MC) dose calculation in stereotactic body radiotherapy (SBRT) for lung tumours aiming at a dose as low as possible outside of the PTV. Methods and materials: Six typical T1 lung tumours - three small, three large - were
A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
Energy Technology Data Exchange (ETDEWEB)
Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology
2010-02-15
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized, and a description of what a user should provide in order to use it. (orig.)
Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method to calculate the efficiencies of whole-body counting for different body radiation distributions is presented. An analytical simulator (for man and for child) incorporated with 99mTc, 131I and 42K is used. (M.A.C.) [pt
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparison of the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the D_CF values calculated by the three codes are in very good agreement with one another. The comparison between these results and the results deduced previously by other authors indicates a good ag...
A Fano cavity test for Monte Carlo proton transport algorithms
International Nuclear Information System (INIS)
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-01
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not yet been performed for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy straggling if step size is not
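The identity dose per fluence = (Σ/ρ)E0 that the test checks can be illustrated with a deliberately simplified toy (not the authors' Geant4/PENH setup): in an infinite homogeneous medium, each collision deposits E0 and regenerates the particle, dose is scored with a collision estimator and fluence with a track-length estimator, and their ratio must converge to (Σ/ρ)E0:

```python
import random

def fano_ratio(sigma=2.0, rho=1.5, E0=1.0, n_particles=20000,
               regenerations=5, seed=7):
    """Toy check of the Fano-type identity dose/fluence = (sigma/rho)*E0.
    Free paths are Exp(sigma); every collision deposits E0 and the particle
    is regenerated (the analogue of maintained proton equilibrium)."""
    rng = random.Random(seed)
    track_length = 0.0   # track-length estimator of fluence * volume
    collisions = 0       # collision estimator of interaction density * volume
    for _ in range(n_particles):
        for _ in range(regenerations):
            track_length += rng.expovariate(sigma)
            collisions += 1
    dose = collisions * E0 / rho     # energy per unit mass (volume cancels)
    fluence = track_length           # per unit volume (same volume cancels)
    return dose / fluence            # should approach (sigma/rho) * E0
```

A transport algorithm that biased the sampled free paths (e.g. through step-size artifacts, as discussed for larger steps above) would shift this ratio away from the analytic value, which is precisely what the cavity test detects.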
Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.
2017-11-01
Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm dealing with the complex air/bone/tissue interfaces in the head-neck region, so as to achieve cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With a single-beam setup in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable with Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organs-at-risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of dose distributions in terms of the clinical indices indicated that the Acuros XB was of comparable accuracy with Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.
Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport
International Nuclear Information System (INIS)
Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.
2002-01-01
After a nuclear accident releasing radioactive material into the environment, external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations, the external gamma exposure from radionuclides deposited on the surfaces of urban-industrial environments yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods, a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; good examples are a factory or a supermarket. An industrial environment can be rather different from urban ones in the types, structures, and dimensions of its buildings. These variations affect the radiation field of the environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments.
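The core of any such photon-transport shielding estimate is attenuation along sampled free paths. A deliberately minimal sketch (uncollided transmission through a single slab "wall"; scattering, buildup, and geometry are ignored, so this is an illustration of the sampling, not of the cited simulations) can be checked against the analytic exponential law:

```python
import math, random

def transmitted_fraction(mu, thickness, n=50000, seed=1):
    """MC estimate of the fraction of normally incident photons crossing a
    slab without interacting. mu is the total attenuation coefficient (1/m);
    with scattering ignored, the estimate should reproduce exp(-mu*thickness)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(mu) > thickness)
    return hits / n
```

In a realistic industrial geometry the same free-path sampling is repeated wall by wall along each photon track, with scattering interactions in between; only the bookkeeping grows.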
A meshless approach to radionuclide transport calculations
International Nuclear Information System (INIS)
Perko, J.; Sarler, B.
2005-01-01
Over the past thirty years, numerical modelling has emerged as an interdisciplinary scientific discipline with significant impact in engineering and design. In the field of numerical modelling of transport phenomena in porous media, many commercial codes exist, based on different numerical methods. Some of them are widely used for performance assessment and safety analysis of radioactive waste repositories and for groundwater modelling. Although they have proved to be accurate and reliable tools, they have certain limitations and drawbacks. Realistic problems often involve complex geometry, which is difficult and time consuming to discretize. In recent years, meshless methods have attracted much attention due to their flexibility in solving engineering and scientific problems. In meshless methods the cumbersome polygonization of the calculation domain is not necessary, which reduces the discretization time. In addition, the simulation is not as dependent on discretization density as in traditional methods, because of the lack of polygon interfaces. In this work the fully meshless Diffuse Approximate Method (DAM) is used for the calculation of radionuclide transport. Two cases are considered. The first is a 1D comparison of 226Ra transport and decay solved by commercial Finite Volume Method (FVM) and Finite Element Method (FEM) based packages and by DAM; this case shows the level of discretization density dependence. The second is a realistic 2D case of near-field modelling of radionuclide transport from a radioactive waste repository, where a comparison is again made between an FVM-based code and a DAM simulation for two radionuclides: long-lived 14C and short-lived 3H. The comparisons indicate the great capability of meshless methods to simulate complex transport problems and show that they should be seriously considered in future commercial simulation tools. (author)
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
International Nuclear Information System (INIS)
Ilic, R.D.; Lalic, D.; Stankovic, S.J.
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for the use of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulations obtained through the PETRA and GEANT programs. The simulation of proton beam characterization by means of the Multi-Layer Faraday Cup and the spatial distribution of positron emitters obtained by our program indicate the imminent application of Monte Carlo techniques in clinical practice. (author)
International Nuclear Information System (INIS)
Bellezzo, Murillo
2014-01-01
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo Method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first of them forces the photon to stop at every voxel boundary; the second is the Woodcock method, where the photon ignores the existence of boundaries and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated into treatment planning systems for radiotherapy. (author)
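The Woodcock (delta-tracking) method mentioned above removes boundary bookkeeping by flying particles with a single majorant cross section and rejecting "virtual" collisions. A minimal sketch of the distance sampler (illustrative, not the CUBMC implementation; in a GPU code the divergence-free loop structure is exactly what makes this scheme attractive):

```python
import math, random

def woodcock_distance(sigma_of, sigma_max, rng):
    """Sample the distance to a real collision by Woodcock (delta) tracking:
    fly with the majorant sigma_max, accept the tentative collision at x with
    probability sigma(x)/sigma_max, otherwise continue (virtual collision).
    sigma_of(x) must never exceed sigma_max."""
    x = 0.0
    while True:
        x += rng.expovariate(sigma_max)         # free path in fictitious medium
        if rng.random() < sigma_of(x) / sigma_max:
            return x                             # real collision accepted

rng = random.Random(11)
# Heterogeneous test medium: sigma = 0.5 for x < 1, then 2.0 (the majorant).
sigma = lambda x: 0.5 if x < 1.0 else 2.0
samples = [woodcock_distance(sigma, 2.0, rng) for _ in range(20000)]
```

The rejection step makes the sampled collision sites follow the true heterogeneous attenuation law: here the fraction of first collisions beyond x = 1 should approach exp(-0.5), the analytic uncollided transmission of the first region.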
A modified version of the Monte Carlo computer code for calculating neutron detection efficiencies
International Nuclear Information System (INIS)
Nakayama, K.; Pessoa, E.F.; Douglas, R.A.
1980-12-01
A calculation of neutron detection efficiencies has been performed for organic scintillators using the Monte Carlo method. Effects which contribute to the detection efficiency have been incorporated in the calculations as thoroughly as possible. The reliability of the results is verified by comparison with the efficiency measurements available in the literature for neutrons in the energy range between 1 and 170 MeV with neutron detection thresholds between 0.1 and 22.3 MeV. (author)
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
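The "figures of merit" used above to compare sensitivity methods follow the standard Monte Carlo definition, FOM = 1/(R²T), where R is the relative standard deviation and T the computing time. A minimal sketch with hypothetical numbers (not the paper's measured values):

```python
def figure_of_merit(rel_std_dev, cpu_time):
    """Monte Carlo figure of merit FOM = 1/(R^2 * T): because R^2 falls
    as 1/T for a fixed estimator, the FOM is roughly constant for a given
    method and so lets methods be compared independently of run length."""
    return 1.0 / (rel_std_dev ** 2 * cpu_time)

# Hypothetical comparison: a method reaching 0.1% relative error in the
# same CPU time as a reference method reaching 1% has a 100x larger FOM.
fom_ref = figure_of_merit(0.01, 60.0)
fom_new = figure_of_merit(0.001, 60.0)
gain = fom_new / fom_ref
```

"Several orders of magnitude larger" FOM, as reported for CLUTCH, therefore means the same statistical precision in a correspondingly smaller fraction of the computing time.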
DETEF a Monte Carlo system for the calculation of gamma spectrometers efficiency
International Nuclear Information System (INIS)
Cornejo, N.; Mann, G.
1996-01-01
The Monte Carlo program DETEF calculates the efficiency of cylindrical NaI, CsI, Ge or Si detectors for photon energies up to 2 MeV and several sample geometries. The sources can be point, plane, cylindrical or rectangular. The energy spectrum appears on the screen simultaneously with the statistical simulation. The calculated and experimentally estimated efficiencies agree well within the standard-deviation intervals.
Calculation of transportation energy for biomass collection
Energy Technology Data Exchange (ETDEWEB)
Kanai, G.; Takekura, K.; Kato, H.; Kobayashi, Y.; Yakushido, K. [National Agricultural Research Center, Tsukuba, Ibaraki (Japan)
2010-07-01
This paper reported on a study at a rice straw facility in Japan that produces bioethanol. Simulation modeling and calculation methods were used to examine the characteristics of field-to-facility transportation. Fuel consumption was found to be influenced by the conversion rate from straw to ethanol, the quantity of straw collected, and the ratio of field area to the total area around the facility. Standard conditions were assumed based on reported data and actual observations: 15 ML/yr ethanol production, 0.3 kL of ethanol per 1 t of dry straw, 53.6 working days per year, 2.7 t truck load capacity, and 0.128 as the ratio of field to the area around the facility. According to the calculations, a quantity of 50 kt of dry straw requires 2.78 L of fuel to transport 1 t of dry straw, 109.5 trucks, and a 19.1 km collection area radius. The fuel consumption for transportation was found to be proportional to the 0.5 power of the straw quantity, and inversely proportional to the 0.5 power of the field ratio. The number of trucks needed to collect the straw increases as the field ratio around the facility decreases.
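The square-root scalings reported above follow from simple circular-collection-area geometry, which can be sketched as below. The areal straw yield used here is back-calculated from the reported standard case (50 kt straw, field ratio 0.128, 19.1 km radius) and is an assumption, not a figure from the paper:

```python
import math

def collection_radius_km(straw_t, field_ratio, yield_t_per_km2=341.0):
    """Radius of the circular collection area: the straw quantity equals
    pi * r^2 * field_ratio * areal_yield, solved here for r.  The default
    areal yield is back-calculated from the paper's standard case."""
    return math.sqrt(straw_t / (math.pi * field_ratio * yield_t_per_km2))

r_std = collection_radius_km(50_000, 0.128)           # close to 19.1 km
# Per-tonne haul fuel scales with the mean haul distance, i.e. with the
# radius: doubling the straw quantity raises it by sqrt(2), not by 2.
scale = collection_radius_km(100_000, 0.128) / r_std
```

This reproduces both reported trends: fuel per tonne grows as straw_t**0.5 and falls as field_ratio**0.5.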
International Nuclear Information System (INIS)
Johnson, Jeffrey O.; Gallmeier, Franz X.; Popova, Irina
2002-01-01
Determining the bulk shielding requirements for accelerator environments is generally an easy task compared to analyzing the radiation transport through the complex shield configurations and penetrations typically associated with the detailed Title II design efforts of a facility. Shielding calculations for penetrations in the SNS accelerator environment are presented based on hybrid Monte Carlo and discrete ordinates particle transport methods. This methodology relies on coupling tools that map boundary surface leakage information from the Monte Carlo calculations to boundary sources for one-, two-, and three-dimensional discrete ordinates calculations. The paper briefly introduces the tools for coupling MCNPX to the one-, two-, and three-dimensional discrete ordinates codes in the DOORS code suite, and presents typical applications of these tools in the design of complex shield configurations and penetrations in the SNS proton beam transport system.
Meric, N; Bor, D
1999-01-01
Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam of 67 kVp. Simulation of the experiment has been carried out by the Monte Carlo technique under the same input conditions. Comparison of the measured and predicted data with each other and with the previously reported values has been given. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer.
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.
2007-01-01
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library's curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT image for individual, more accurate treatment time calculation for a prescribed dose.
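The core of such a dose-distribution calculation can be illustrated with a short sketch. This is not the INSTN library itself: each Cs-137 seed is approximated as a point source with inverse-square falloff and exponential attenuation in water, and mu is an assumed effective linear attenuation coefficient for the 662 keV gamma line.

```python
import math

def point_source_dose_rate(r_cm, strength=1.0, mu=0.086):
    """Illustrative point-source kernel: inverse-square falloff with
    exponential attenuation in water (mu in 1/cm is an assumed value)."""
    return strength * math.exp(-mu * r_cm) / r_cm ** 2

def dose_rate_from_seeds(point, seeds):
    """Superpose the kernel over every seed position (x, y, z in cm)."""
    return sum(point_source_dose_rate(math.dist(point, s)) for s in seeds)

seeds = [(0.0, 0.0, -1.0), (0.0, 0.0, 1.0)]    # two-seed toy implant
d_near = dose_rate_from_seeds((0.5, 0.0, 0.0), seeds)
d_far = dose_rate_from_seeds((5.0, 0.0, 0.0), seeds)
```

A full treatment-planning calculation would replace this kernel with scatter-aware Monte Carlo scoring, but the superposition structure over seed positions is the same.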
Energy Technology Data Exchange (ETDEWEB)
Silva, Frank Sinatra Gomes da
2008-02-15
The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to simulate complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they can simulate energy deposition in models of organs and tissues as well as in models of human body cells. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is therefore of fundamental importance for dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for the Auger electrons, internal conversion electrons and beta particles of iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135). The results obtained from simulation with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines. For follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, like Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the codes MCNP4C, EPOTRAN and EGS4 and by deterministic methods. (author)
Monte Carlo dose calculation improvements for low energy electron beams using eMC
International Nuclear Information System (INIS)
Fix, Michael K; Frei, Daniel; Volken, Werner; Born, Ernst J; Manser, Peter; Neuenschwander, Hans
2010-01-01
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm with applicators ranging from 6 x 6 to 25 x 25 cm² of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d_max and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm² at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose calculation
Calculating iron transport in nuclear systems
International Nuclear Information System (INIS)
Horowitz, J.S.; Merilo, M.; Munson, D.
2002-01-01
The presence of high levels of iron in the final feedwater of nuclear plants is undesirable and can contribute significantly to plant operations and maintenance (O and M) costs. A number of options are available to reduce the iron concentration, but they tend to be expensive. Recently a method was developed to quantitatively determine the contribution of each iron source, so that reduction options can be quantitatively compared. The method is based on industry experience showing that the majority of iron is released by flow-accelerated corrosion (FAC). FAC is one of the most predictable forms of corrosion; a well-developed predictive model exists and has been encoded in the CHECWORKS code. A combination of CHECWORKS and supplemental calculations has been used to model iron transport in a number of US BWRs and PWRs. The iron generated by FAC in all the normally operating piping systems has been calculated using the results of CHECWORKS predictions and a special post-processor. The post-processor accounts for the differences between the maximum corrosion rate calculated by CHECWORKS and the average corrosion (iron generation) rate for a pipe fitting or length of pipe. It also calculates the amount of iron generated within the fitting or pipe. Supplemental calculations have been used to determine the iron generation from the major in-line components: high and low pressure turbines, moisture separators, feedwater heaters and the condenser. All of the iron generation rates for the equipment and piping were appropriately summed and iron concentrations estimated throughout the steam-feedwater system. Predicted iron concentrations have agreed well with plant measurements. The availability of specific iron generation rates allows plant management to make reasoned decisions about countermeasures to deal with iron generation and transport. The countermeasures that have been examined to reduce the amount of iron transport include installing additional water
TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code
International Nuclear Information System (INIS)
Iida, Hiromasa; Kawasaki, Hiromitsu.
1979-06-01
TOPIC has been developed for debugging the geometry input data of Monte Carlo transport codes. The code has the following features: (1) It debugs the geometry input data not only of MORSE-GG but also of MORSE-I, which is capable of treating torus geometry. (2) Its calculation results are shown in figures drawn by plotter or COM, so that regions left undefined or doubly defined are easily detected. (3) It finds a multitude of input data errors in a single run. (4) The input data required by this code are few, so that it is readily usable in a time-sharing system on a FACOM 230-60/75 computer. Example TOPIC calculations from design studies of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)
CAD-Based Monte Carlo Neutron Transport Analysis for KSTAR
Seo, Geon Ho; Choi, Sung Hoon; Shim, Hyung Jin
2017-09-01
The Monte Carlo (MC) neutron transport analysis of a complex nuclear system such as a fusion facility may require accurate modeling of its complicated geometry. In order to take advantage of the modeling capability of computer-aided design (CAD) systems for MC neutronics analysis, the Seoul National University MC code, McCARD, has been augmented with a CAD-based geometry processing module by embedding the OpenCASCADE CAD kernel. In the developed module, the CAD geometry data are internally converted to a constructive solid geometry model with the help of the CAD kernel. An efficient cell-searching algorithm is devised for the void space treatment. The performance of the CAD-based McCARD calculations is tested for the Korea Superconducting Tokamak Advanced Research device by comparing with the results of conventional MC calculations using a text-based geometry input.
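The cell-searching problem mentioned above can be illustrated with a minimal constructive-solid-geometry sketch. This is a toy Python illustration of the general idea, not McCARD's actual algorithm: each cell lists (surface, sense) pairs, and a point belongs to the first cell whose every surface sense matches; points claimed by no cell fall into the void region that the text's algorithm must treat efficiently.

```python
# Surfaces are functions p -> signed value; sense=True means the point
# must lie on the positive side of the surface.
def sphere(cx, cy, cz, r):
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 - r*r

def plane_z(z0):
    return lambda p: p[2] - z0

def find_cell(point, cells):
    """Return the first cell whose every surface sense matches the point,
    or 'void' if no cell claims it."""
    for name, senses in cells.items():
        if all((surf(point) > 0.0) is sense for surf, sense in senses):
            return name
    return "void"

s = sphere(0.0, 0.0, 0.0, 1.0)      # unit sphere
top = plane_z(0.0)                  # z = 0 plane
cells = {
    "upper_half_ball": [(s, False), (top, True)],
    "lower_half_ball": [(s, False), (top, False)],
}
```

A production tracker would add neighbor lists and spatial acceleration structures so that the linear scan over cells is avoided at every flight step.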
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Sakamoto, Y
2002-01-01
For the prevention of nuclear disasters, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into consideration. For predicting individual doses in future countermeasures, it is important to confirm the reliability of the methods for evaluating dose equivalent rate distributions and energy spectra with a Monte Carlo radiation transport calculation code, as well as the factors which influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant that occurred on September 30, 1999. The radiation transport calculations, including the burn-up calculations, were done using the structural info...
Energy Technology Data Exchange (ETDEWEB)
Mille, M; Lee, C [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD (United States); Failla, G [Varian Medical Systems, Gig Harbor, WA (United States)
2016-06-15
Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that which is calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective
Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution
Buffard, Edwige; Gschwind, Régine; Makovicka, Libor; David, Céline
2006-09-01
Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution.
Energy Technology Data Exchange (ETDEWEB)
Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)
2017-07-01
The Monte Carlo method for radiation transport has been adapted for medical physics applications and, more specifically, has received increasing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator's head is necessary, and the supplier commonly does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications: it is the most efficient method for particle generation and can provide accuracy similar to that obtained with the phsp. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments with total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. This current model is easy to build and test. (author)
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Energy Technology Data Exchange (ETDEWEB)
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
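One building block of such a phonon Monte Carlo step is combining the listed scattering mechanisms by Matthiessen's rule and sampling the free-flight time from the total rate. The sketch below uses made-up example rates, not the paper's graphene values:

```python
import math
import random

def sample_flight_time(rates, rng):
    """Free-flight time is exponentially distributed with the total
    scattering rate (Matthiessen's rule: rates add)."""
    return -math.log(rng.random()) / sum(rates)

def choose_mechanism(rates, rng):
    """Select the terminating mechanism with probability rate_i / total."""
    u = rng.random() * sum(rates)
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if u < acc:
            return i
    return len(rates) - 1

rates = [2.0e9, 5.0e9, 1.0e9]  # edge, three-phonon, isotope (1/s), assumed
rng = random.Random(7)
mean_t = sum(sample_flight_time(rates, rng) for _ in range(50000)) / 50000
# mean_t should approach 1 / (8e9 /s) = 1.25e-10 s
```

A full-dispersion simulation makes each rate depend on the sampled phonon branch, wavevector, and local geometry, but the sampling structure per step is the same.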
SAM-CE, Time-Dependent 3-D Neutron Transport, Gamma Transport in Complex Geometry by Monte-Carlo
International Nuclear Information System (INIS)
2003-01-01
1 - Nature of physical problem solved: The SAM-CE system comprises two Monte Carlo codes, SAM-F and SAM-A. SAM-F supersedes the forward Monte Carlo code, SAM-C. SAM-A is an adjoint Monte Carlo code designed to calculate the response due to fields of primary and secondary gamma radiation. The SAM-CE system is a FORTRAN Monte Carlo computer code designed to solve the time-dependent neutron and gamma-ray transport equations in complex three-dimensional geometries. SAM-CE is applicable for forward neutron calculations and for forward as well as adjoint primary gamma-ray calculations. In addition, SAM-CE is applicable for the gamma-ray stage of the coupled neutron-secondary gamma-ray problem, which may be solved in either the forward or the adjoint mode. Time-dependent fluxes, and flux functionals such as dose, heating, count rates, etc., are calculated as functions of energy, time and position. Multiple scoring regions are permitted and these may be either finite volume regions or point detectors or both. Other scores of interest, e.g., collision and absorption densities, etc., are also made. 2 - Method of solution: A special feature of SAM-CE is its use of the 'combinatorial geometry' technique, which affords the user geometric capabilities exceeding those available with other commonly used geometric packages. All nuclear interaction cross section data (derived from the ENDF for neutrons and from the UNC-format library for gamma-rays) are tabulated in point energy meshes. The energy meshes for neutrons are internally derived, based on built-in convergence criteria and user-supplied tolerances. Tabulated neutron data for each distinct nuclide are in unique and appropriate energy meshes. Both resolved and unresolved resonance parameters from ENDF data files are treated automatically, and extremely precise and detailed descriptions of cross section behaviour are permitted. Such treatment avoids the ambiguities usually associated with multi-group codes, which use flux
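The point-energy mesh treatment contrasted above with multigroup codes amounts to a tabulated lookup with interpolation at the particle's exact energy. A small sketch of that mechanism, using an invented grid and values rather than ENDF data:

```python
import bisect

def lookup_xs(energy, grid, values):
    """Point-energy cross-section lookup: locate the bracketing mesh
    interval by bisection, then interpolate linearly between the
    tabulated values (clamped to the table range at the ends)."""
    i = bisect.bisect_right(grid, energy) - 1
    i = max(0, min(i, len(grid) - 2))
    frac = (energy - grid[i]) / (grid[i + 1] - grid[i])
    return values[i] + frac * (values[i + 1] - values[i])

grid = [1.0, 2.0, 4.0, 8.0]       # energies (MeV), strictly increasing
values = [10.0, 8.0, 2.0, 1.0]    # cross sections (barns), illustrative
xs_mid = lookup_xs(3.0, grid, values)   # halfway between 8.0 and 2.0
```

Because the mesh is chosen per nuclide to meet a convergence tolerance, the interpolated value stays close to the underlying evaluation everywhere, avoiding the group-averaging ambiguities of multigroup data.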
International Nuclear Information System (INIS)
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work that has been performed to develop an integrated Monte Carlo- Deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle Transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented is the use of MCNP as geometry construction and ray tracing tool for the MOC, verification of the ray tracing indexing scheme that was developed to represent the MCNP geometry in the MOC and the verification of the prototype 2-D MOC flux solver. (author)
Sakamoto, Hiroki; Yamamoto, Toshihiro
2017-09-01
This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed-source calculation for the unperturbed system. A set of perturbation particles is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.
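The correlated sampling method used as the comparison baseline above can be sketched on a toy problem: estimating the change in slab transmission when the total cross section is perturbed, scoring both systems from the same random-number stream. The geometry and cross sections here are assumed illustration values, not the paper's test problem.

```python
import math
import random

def correlated_transmission(mu0, mu1, thickness, n, seed=0):
    """Estimate the unperturbed and perturbed slab transmissions with
    the same random-number stream (correlated sampling): because the two
    scores are strongly correlated, their difference has far lower
    variance than two independent runs would give."""
    rng = random.Random(seed)
    hits0 = hits1 = 0
    for _ in range(n):
        xi = -math.log(rng.random())     # optical-depth sample, shared
        hits0 += xi > mu0 * thickness    # escapes the unperturbed slab
        hits1 += xi > mu1 * thickness    # escapes the perturbed slab
    return hits0 / n, hits1 / n

# Toy perturbation: total cross section raised from 0.5 to 0.55 /cm
t0, t1 = correlated_transmission(0.5, 0.55, 2.0, 100_000)
# exact values are exp(-1) ~ 0.368 and exp(-1.1) ~ 0.333
```

The perturbation source method discussed in the abstract instead transports explicit difference particles; the trade-off the numerical tests explore is between this shared-history variance reduction and the localized sampling of difference particles.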
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. Techniques for assessing the accuracy of lidar measurements are needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model is a tool particularly well suited for answering these questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules that are small with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver collects photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means of modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the framework of the Italian Research Program for Antarctica (PNRA) and it is
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.
2015-01-01
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one... equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real... stationary dynamic calculations, the presented method does not require any modification of the Monte Carlo code.
The study of importance sampling in Monte-carlo calculation of blocking dips
International Nuclear Information System (INIS)
Pan Zhengying; Zhou Peng
1988-01-01
Angular blocking dips around the axis in an Al single crystal, for α-particles of about 2 MeV produced at a depth of 0.2 μm, are calculated by a Monte Carlo simulation. The influence of the small solid angle of particle emission, and of importance sampling within that solid angle, has been investigated. By means of importance sampling, more reasonable results with high accuracy are obtained
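The benefit of importance sampling over a small emission solid angle can be sketched with a toy detector problem (hypothetical numbers, not the blocking-dip geometry itself): analog sampling wastes almost every isotropic history, while biased sampling emits every history into the cone of interest and carries the solid-angle fraction as a weight.

```python
import math
import random

def cone_hit_probability(n, half_angle=0.05, seed=7):
    """Analog vs. importance-sampled estimate of the probability that an
    isotropically emitted particle falls inside a small cone about the z axis,
    a stand-in for the small-solid-angle emission discussed in the abstract."""
    random.seed(seed)
    mu_min = math.cos(half_angle)
    w = 0.5 * (1.0 - mu_min)      # cone's fraction of the full solid angle
    # Analog: for isotropic emission mu = cos(theta) is uniform on [-1, 1];
    # almost every history misses the cone, so the estimate is noisy.
    analog = sum(1 for _ in range(n) if random.uniform(-1.0, 1.0) >= mu_min) / n
    # Importance sampling: emit every history into the cone, weighted by w.
    # In this bare example each history scores exactly w (zero variance);
    # the variance becomes nonzero once the physics varies inside the cone.
    biased = w
    return analog, biased
```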
International Nuclear Information System (INIS)
Kling, A.; Barao, F.J.C.; Nakagawa, M.; Tavora, L.
2001-01-01
The following topics were dealt with: electron and photon interactions and transport mechanisms, random number generation, applications in medical physics, microdosimetry, track structure, radiobiological modeling, the Monte Carlo method in radiotherapy, dosimetry and medical accelerator simulation, neutron transport, and high-energy hadron transport. (HSI)
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
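The navigation primitive behind a quadratic-surface geometry module is the ray-quadric intersection: given a particle position and direction, find the nearest positive distance to a surface x^T A x + b.x + c = 0. The sketch below is a hypothetical CPU illustration of that solve (with A symmetric), not the paper's GPU code.

```python
import math

def quadric_intersect(p, d, A, b, c, eps=1e-9):
    """Nearest positive distance t along the ray x = p + t*d to the quadric
    x^T A x + b.x + c = 0 (A symmetric), or None if the ray misses it."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    Ad = [dot(row, d) for row in A]
    Ap = [dot(row, p) for row in A]
    # Substituting x = p + t*d gives a quadratic qa*t^2 + qb*t + qc = 0.
    qa = dot(d, Ad)
    qb = 2.0 * dot(p, Ad) + dot(b, d)
    qc = dot(p, Ap) + dot(b, p) + c
    if abs(qa) < eps:                        # plane-like case: linear in t
        return -qc / qb if abs(qb) > eps and -qc / qb > eps else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:                           # no real intersection
        return None
    r = math.sqrt(disc)
    for t in sorted(((-qb - r) / (2.0 * qa), (-qb + r) / (2.0 * qa))):
        if t > eps:                          # smallest strictly positive root
            return t
    return None
```

A tracking loop would call this for every bounding surface of the current region and advance the particle by the minimum returned distance.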
Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems
International Nuclear Information System (INIS)
Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.
2001-01-01
The neutron transport equation is simulated using a parallel Monte Carlo method for a deep penetration neutron transport problem. The Monte Carlo simulation is parallelized using three different techniques: direct parallelization, domain decomposition, and domain decomposition with load balancing, all implemented with the PVM (Parallel Virtual Machine) software on a LAN (Local Area Network). The results of the parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed
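Direct parallelization is the simplest of the three techniques because histories are independent: each worker simulates its own batch with its own random stream and the tallies are summed at the end, with no communication during tracking. The toy model below (a purely absorbing slab, hypothetical, not the paper's PVM implementation) shows the pattern with Python processes standing in for PVM tasks.

```python
import math
import random
from multiprocessing import Pool

def _batch(args):
    # Worker: independent histories with an independent random stream.
    n, seed, sigma, slab = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if -math.log(rng.random()) / sigma > slab)

def parallel_transmission(n_per_worker, workers=4, sigma=1.0, slab=2.0):
    """'Direct parallelization' in miniature: workers run disjoint batches of
    histories (distinct seeds), and only the final tallies are combined."""
    jobs = [(n_per_worker, 1000 + i, sigma, slab) for i in range(workers)]
    with Pool(workers) as pool:
        hits = sum(pool.map(_batch, jobs))
    return hits / (workers * n_per_worker)
```

Because workers never exchange data mid-run, the speedup is near-linear; for deep penetration problems this breaks down once variance reduction couples histories, which is exactly the interaction the abstract examines.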
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in the industrial, medical and research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation, and second, a neural-network approach in which a learning platform is trained on a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
ETRAN, Electron Transport and Gamma Transport with Secondary Radiation in Slab by Monte-Carlo
International Nuclear Information System (INIS)
1992-01-01
A - Nature of physical problem solved: ETRAN computes the transport of electrons and photons through plane-parallel slab targets that have a finite thickness in one dimension and are unbounded in the other two dimensions. The incident radiation can consist of a beam of either electrons or photons with specified spectral and directional distribution. Options are available by which all orders of the electron-photon cascade can be included in the calculation. Thus electrons are allowed to give rise to secondary knock-on electrons, continuous Bremsstrahlung and characteristic x-rays; and photons are allowed to produce photo-electrons, Compton electrons, and electron-positron pairs. Annihilation quanta, fluorescence radiation, and Auger electrons are also taken into account. If desired, the Monte-Carlo histories of all generations of secondary radiations are followed. The information produced by ETRAN includes the following items: 1) reflection and transmission of electrons or photons, differential in energy and direction; 2) the production of continuous Bremsstrahlung and characteristic x-rays by electrons and the emergence of such radiations from the target (differential in photon energy and direction); 3) the spectrum of the amounts of energy left behind in a thick target by an incident electron beam; 4) the deposition of energy and charge by an electron beam as a function of depth in the target; 5) the flux of electrons, differential in energy, as a function of depth in the target. B - Method of solution: A programme called DATAPAC-4 takes data for a particular material from a library tape and further processes them. The function of DATAPAC-4 is to produce single-scattering and multiple-scattering data in the form of tabular arrays (again stored on magnetic tape) which facilitate the rapid sampling of electron and photon Monte Carlo histories in ETRAN. The photon component of the electron-photon cascade is calculated by conventional random sampling that imitates
International Nuclear Information System (INIS)
Shahriari, M.; Soharbpour, M.
1997-01-01
A Monte Carlo calculation using the MCNP code has been carried out to determine the time-dependent responses of pulsed neutron gamma spectrometry tools. Inelastic scattering and thermal capture gamma count rates after each 14 MeV neutron pulse have been calculated, and it is shown that the pulse height response of the inelastic gamma rays can easily be separated from the thermal capture gamma response in the time domain. Thereby it is possible to improve the precision of the carbon/oxygen ratios in the borehole formations
Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment
Energy Technology Data Exchange (ETDEWEB)
Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx [Instituto de Física, Universidad Nacional Autónoma de México Circuito de la Investigación Científica, Ciudad Universitaria México, D.F. 04520, México (Mexico); Vázquez-López, C. [Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN Ave. IPN 2508, Col. San Pedro Zacatenco, México 07360, DF, México (Mexico)
2015-07-23
The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
International Nuclear Information System (INIS)
Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
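The "figure of merit" used above to rank sensitivity methods is the standard Monte Carlo efficiency measure; a small helper makes its meaning concrete (illustrative only, not code from the Shift/SCALE package).

```python
def figure_of_merit(rel_std_dev, cpu_time):
    """Monte Carlo figure of merit, FOM = 1 / (R^2 * T), where R is the tally's
    relative standard deviation and T the run time. Since R^2 typically falls
    like 1/T, the FOM is roughly constant for a given method, and a method with
    a 10x larger FOM needs ~10x less CPU time to reach the same precision."""
    return 1.0 / (rel_std_dev ** 2 * cpu_time)
```

On this scale, "figures of merit several orders of magnitude larger" translates directly into orders-of-magnitude savings in computation time at fixed statistical uncertainty.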
Energy Technology Data Exchange (ETDEWEB)
Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations on the hypersonic rarefied flows in an effort to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere for a realistic estimation of the aerothermal loads is presented. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure and overall aerodynamic coefficients are presented and comparisons are made with the available experimental data. The agreement between the calculated and measured results are very good.
Monte Carlo calculations of triton and 4He nuclei with the Reid potential
International Nuclear Information System (INIS)
Lomnitz-Adler, J.; Pandharipande, V.R.; Smith, R.A.
1981-01-01
A Monte Carlo method is developed to calculate the binding energy and density distribution of the 3 H and 4 H nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 +- 0.08 and -22.9 +- 0.5 MeV respectively. The Coulomb interaction in 4 H is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (orig.)
Monte-Carlo calculations of light nuclei with the Reid potential
Energy Technology Data Exchange (ETDEWEB)
Lomnitz-Adler, J. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-01-01
A Monte-Carlo method is developed to calculate the binding energy and density distribution of the /sup 3/H and /sup 4/He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 +- .08 and -22.9 +- .5 MeV respectively. The Coulomb interaction in /sup 4/He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center.
Monte-Carlo calculations of light nuclei with the Reid potential
International Nuclear Information System (INIS)
Lomnitz-Adler, J.
1981-01-01
A Monte-Carlo method is developed to calculate the binding energy and density distribution of the 3 H and 4 He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 +- .08 and -22.9 +- .5 MeV respectively. The Coulomb interaction in 4 He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (author)
Applying graphics processor units to Monte Carlo dose calculation in radiation therapy
Directory of Open Access Journals (Sweden)
Bakhtiari M
2010-01-01
We investigate the potential of using a graphics processing unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration of the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help the early adoption of MC for routine planning in a clinical environment.
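The data-parallel pattern a GPU exploits can be previewed on a CPU with array batching: every history advances in lockstep as one vector operation. The sketch below is a deliberately crude depth-dose model (photons deposit all energy at their first interaction in a homogeneous absorber, so the curve is a simple exponential with no buildup); it is a hypothetical illustration, not the paper's simulation.

```python
import numpy as np

def depth_dose(n, mu=0.2, depth=20.0, nbins=40, seed=0):
    """Batched first-collision depth-dose estimate: all n histories are sampled
    and binned as whole-array operations, the same SIMD-style pattern a GPU
    kernel applies thread-per-history."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / mu, size=n)            # first-interaction depths
    hist, edges = np.histogram(x[x < depth], bins=nbins, range=(0.0, depth))
    return 100.0 * hist / hist.max(), edges          # percent depth dose
```

In a real GPU port each array lane becomes a thread, and the physics per step (scattering, energy loss) replaces the single exponential draw.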
Monte Carlo calculations of lung dose in ORNL phantom for boron neutron capture therapy.
Krstic, D; Markovic, V M; Jovanovic, Z; Milenkovic, B; Nikezic, D; Atanackovic, J
2014-10-01
Monte Carlo simulations were performed to evaluate dose for possible treatment of cancers by boron neutron capture therapy (BNCT). The computational model of the male Oak Ridge National Laboratory (ORNL) phantom was used to simulate tumours in the lung. Calculations have been performed by means of the MCNP5/X code. In this simulation, two opposite neutron beams were considered, in order to obtain a uniform neutron flux distribution inside the lung. The obtained results indicate that lung cancer could be treated by BNCT under the assumptions of the calculations.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. One 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. One 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
MCPT: A Monte Carlo code for simulation of photon transport in tomographic scanners
International Nuclear Information System (INIS)
Prettyman, T.H.; Gardner, R.P.; Verghese, K.
1990-01-01
MCPT is a special-purpose Monte Carlo code designed to simulate photon transport in tomographic scanners. The variance reduction schemes and sampling games present in MCPT were selected to characterize features common to most tomographic scanners. Combined splitting and biasing (CSB) games are used to systematically sample important detection pathways. An efficient splitting game is used to tally particle energy deposition in detection zones. The pulse height distribution of each detector can be found by convolving the calculated energy deposition distribution with the detector's resolution function. A general geometric modelling package, HERMETOR, is used to describe the geometry of the tomographic scanners and to provide MCPT with the information needed for particle tracking. MCPT's modelling capabilities are described and preliminary experimental validation is presented. (orig.)
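The reason splitting games can be used freely is that they leave the tally unbiased: a unit-weight particle entering an important zone becomes k copies of weight 1/k, so the expected score is unchanged while the per-history variance drops. The toy detection model below illustrates this (hypothetical, not MCPT's actual CSB games).

```python
import random

def splitting_tally(n, k=4, p_detect=0.3, seed=5):
    """Analog vs. split estimates of a detection probability: splitting a
    history into k weight-1/k copies preserves the mean and reduces variance."""
    random.seed(seed)
    analog = split = 0.0
    for _ in range(n):
        # Analog game: one history, weight 1.
        analog += 1.0 if random.random() < p_detect else 0.0
        # Splitting game: k daughter histories, each of weight 1/k.
        split += sum(1.0 / k for _ in range(k) if random.random() < p_detect)
    return analog / n, split / n      # both estimate p_detect
```

Both estimators converge to p_detect; the split estimator's per-history variance is smaller by roughly a factor of k in this idealized case.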
Energy Technology Data Exchange (ETDEWEB)
Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
Directory of Open Access Journals (Sweden)
Laureau Axel
2017-01-01
These studies are performed in the general framework of transient coupled calculations with accurate neutron kinetics models. This kind of application requires modeling the influence of macroscopic cross-section evolution on the neutronics. Depending on the targeted accuracy, this feedback can be limited to the reactivity for point kinetics, or can take into account the redistribution of the power in the core for spatial kinetics. The local correlated sampling technique for Monte Carlo calculations presented in this paper has been developed for this purpose, i.e. estimating the influence on the neutron transport of a local variation of different parameters such as sodium density or the fuel Doppler effect. This method is associated with an innovative spatial kinetics model named Transient Fission Matrix, which condenses the time-dependent Monte Carlo neutronic response into Green functions. Finally, an accurate estimation of the feedback effects on these Green functions provides an on-the-fly prediction of the flux redistribution in the core, whatever the actual perturbation shape during the transient. This approach is also used to estimate local feedback effects for point kinetics resolution.
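The fission-matrix idea underlying this kind of condensation can be shown in miniature: if F[i, j] is the expected number of fission neutrons born in region i per fission neutron born in region j, the dominant eigenpair of F gives the multiplication factor and the stationary source shape. A small power-iteration sketch (generic, not the Transient Fission Matrix implementation):

```python
import numpy as np

def fundamental_mode(F, iters=200):
    """Power iteration on a fission matrix F: returns (k_eff, source_shape),
    the dominant eigenvalue and its eigenvector normalized to unit sum."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 0.0
    for _ in range(iters):
        s = F @ s
        k = s.sum()          # with s normalized to sum 1, the sum is k_eff
        s /= k
    return k, s
```

Time-dependent extensions like the one described replace this static F with Green-function responses, so that perturbing F on the fly yields the transient flux redistribution without rerunning the Monte Carlo transport.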
Energy Technology Data Exchange (ETDEWEB)
Hubbell, J.H.; Seltzer, S.M. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)
2001-07-01
Some early examples of Monte Carlo simulations of radiation transport, prior to the general availability of automatic electronic computers, are recalled. In particular, some results and details are presented of a gamma-ray albedo calculation in the early 1950s by Hayward and Hubbell using mechanical desk calculators (+, -, x, / only), in which 67 trajectories were determined using the RAND book of random numbers, with three random numbers at each collision being used to determine (1) the Compton scatter energy loss (and thus the deflection angle), (2) the azimuthal angle and (3) the path length since the previous collision. Successive angles were compounded in three dimensions using a two-dimensional grid with a rotating arm with a slider on it, the device being dubbed an ''Ouija Board''. Survival probabilities along each path segment were determined analytically according to photoelectric absorption exponential attenuation in each of five materials, using a slide rule. For the five substances, H{sub 2}O, Al, Cu, Sn and Pb, useful number and energy albedo values were obtained for 1 MeV photons incident at 0 deg (normal), 45 deg and 80 deg angles of incidence. Advances in the Monte Carlo method following this and other early-1950s computations, up to the present time with high-speed all-function automatic computers, are briefly reviewed. A brief review of advances in the input cross-section data, particularly for photon interactions, over the same period is included. (orig.)
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry.
Braad, P E N; Andersen, T; Hansen, S B; Høilund-Carlsen, P F
2016-12-01
CT images are used for patient-specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. CT errors as a function of patient size, CT reconstruction, and tube current modulation methods were assessed in a phantom experiment on a clinical CT system. The impact of tissue segmentation methods and CT number variations on EGSnrc Monte Carlo calculated absorbed dose distributions was assessed for 99mTc and 131I in the ICRP/ICRU male phantom and in a patient PET/CT-scanned with 124I prior to radioiodine therapy. Despite CT number variations, segmentation by a 13-tissue CT conversion ramp, calibrated by a stoichiometric method, resulted in low (<4%) dose errors in selected organs for both isotopes. A calibrated, CT scanner-specific conversion ramp is required for accurate patient-specific dosimetry in nuclear medicine. Accurate dosimetry was obtained with a 13-tissue ramp that included five different bone types.
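A CT conversion ramp of the kind described maps each voxel's CT number (Hounsfield units) to a tissue class whose composition and density the Monte Carlo code then uses. The thresholds and six classes below are hypothetical placeholders to show the mechanism; the study's 13-tissue ramp is calibrated per scanner by a stoichiometric method.

```python
def hu_to_tissue(hu):
    """Coarse illustrative CT-number-to-tissue ramp: return the tissue class
    for a Hounsfield-unit value by walking ordered upper thresholds."""
    ramp = [(-950, "air"), (-400, "lung"), (-100, "adipose"),
            (100, "soft tissue"), (400, "spongiosa"), (3000, "cortical bone")]
    for upper_hu, tissue in ramp:
        if hu <= upper_hu:
            return tissue
    return "metal/artifact"
```

The paper's point is that the exact threshold placement matters less than calibrating the ramp to the specific scanner, since CT number errors shift voxels across these boundaries.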
Monte Carlo dose calculation in photon beam radiotherapy: a dosimetric characterization
International Nuclear Information System (INIS)
Caccia, B.; Frustagli, G.; Valentini, S.; Petetti, E.; Andenna, C.
2008-01-01
Radiotherapy requires improved dose evaluation procedures in order to better exploit novel, high-performance techniques. This is the case with Intensity Modulated Radiation Therapy (IMRT), where steep dose gradients result from highly conformal dose delivery. Among the methods for dose calculation, the Monte Carlo approach is considered the most accurate, but it is very time consuming and requires varied and specialised expertise. In the present paper, Monte Carlo beam models have been developed for a Varian Clinac 2100 medical accelerator. A GEANT4-based model and a distributed computing environment on a Beowulf cluster have been used to perform the simulations. The behaviour of the model was investigated with two phantoms. Good agreement was obtained when comparing the simulated depth dose profiles for both phantoms with experimental measurements. We consider this a first step towards a more complete model capable of accounting for more complex phantoms and irradiation conditions. (author)
Calculation of the Feynman integrals by means of the Monte Carlo method
International Nuclear Information System (INIS)
Filinov, V.S.
1986-01-01
The Monte Carlo method (the Metropolis algorithm), which is employed extensively in lattice gauge theories and quantum mechanics, was previously applicable only to the Euclidean version of the Feynman path integrals, i.e. it was valid only for evaluating integrals of real functions. In the present work the Monte Carlo method is extended to the evaluation of integrals of complex-valued functions. The Feynman path integrals representing the time-dependent Green function of the one-dimensional non-stationary Schroedinger equation have been calculated for the harmonic oscillator and for particle motion in barrier- and well-type potential fields. The numerical results are in reasonable agreement with the analytical estimates, in spite of the presence of singularities in the Green functions. (orig.)
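For reference, the standard (real-measure) case the abstract starts from looks like this: Metropolis sampling of the positive weight exp(-x^2/2) in one dimension. Extending the method to complex-valued integrands is the paper's subject and is not attempted here.

```python
import math
import random

def metropolis_gaussian(n, step=1.0, burn=1000, seed=3):
    """Metropolis sampling of the 1D Gaussian weight exp(-x^2/2): propose a
    symmetric uniform step, accept with probability min(1, w(y)/w(x)), and
    discard a burn-in prefix before recording samples."""
    random.seed(seed)
    x, samples = 0.0, []
    for i in range(n + burn):
        y = x + random.uniform(-step, step)           # symmetric proposal
        if random.random() < math.exp((x * x - y * y) / 2.0):
            x = y                                     # accept the move
        if i >= burn:
            samples.append(x)
    return samples
```

The recorded chain has mean 0 and variance 1 up to statistical error; the whole construction relies on the weight being real and non-negative, which is exactly what fails for real-time (non-Euclidean) path integrals.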
Bourva, L C A
1999-01-01
The general-purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP(TM), has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
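In the simplest point-model approximation, the gate utilisation factor has a closed form: with an exponential detector die-away time tau, predelay P and gate width G, the fraction of correlated neutrons caught in the gate is f = exp(-P/tau) * (1 - exp(-G/tau)). A Monte Carlo calculation like the one described refines this by replacing the single-exponential assumption with the detector's actual die-away behaviour.

```python
import math

def gate_utilisation(predelay, gate, die_away):
    """Point-model gate utilisation factor for shift-register coincidence
    counting: f = exp(-P/tau) * (1 - exp(-G/tau)). Times must share one unit
    (e.g. microseconds)."""
    return math.exp(-predelay / die_away) * (1.0 - math.exp(-gate / die_away))
```

For example, a 4.5 us predelay and 64 us gate with a 50 us die-away time capture roughly two thirds of the correlated counts, which is why f enters the interpretative model as a first-order correction.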
Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core
Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier
2014-06-01
As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at the fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could prohibitively slow down neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities and standard deviations.
Sarno, Antonio; Mettivier, Giovanni; Russo, Paolo
2017-07-01
The estimation of the mean glandular dose in mammography using Monte Carlo simulations requires the calculation of the incident air kerma evaluated on the breast surface. In such a calculation, caution should be applied in considering explicitly the presence of the top compression paddle, since Compton scattering in this slab may produce a large spread of the incidence angles of x-ray photons on the scoring surface. Then, the calculation of the incident air kerma should contain the ‘effective’ area of the scoring surface, which takes into account the angle of incidence of photons on such a surface. Using Geant4 Monte Carlo simulations with a code previously validated according to the Task Group 195 of the American Association of Physicists in Medicine, we show that for typical x-ray spectra and energy range adopted in mammography, the resulting discrepancy in the calculation of the incident air kerma may lead to an overestimation from a minimum of 10% up to 12% of normalized dose coefficients and, hence, of the corresponding mean glandular dose if this contribution is not considered.
International Nuclear Information System (INIS)
Kim, Ok Joo
2007-02-01
Wavelet theory was applied to detect singularities in reactor power signals. Compared to the Fourier transform, the wavelet transform has localization properties in space and frequency; therefore, after de-noising, singular points can be found easily by wavelet transform. To demonstrate this, we generated reactor power signals using a HANARO (a Korean multi-purpose research reactor) dynamics model consisting of 39 nonlinear differential equations and Gaussian noise. We applied wavelet transform decomposition and de-noising procedures to these signals. This was effective in detecting singular events such as sudden reactivity changes and abrupt intrinsic property changes. Thus this method could be profitably utilized in a real-time system for automatic event recognition (e.g., reactor condition monitoring). In addition, using the wavelet de-noising concept, variance reduction of Monte Carlo results was attempted. Obtaining an accurate solution in a Monte Carlo calculation requires small uncertainties, which is quite time-consuming on a computer. Instead of a long calculation in the Monte Carlo code (MCNP), wavelet de-noising can be performed to obtain small uncertainties. We applied this idea to MCNP results for k_eff and the fission source. Variance was reduced somewhat while the average value was kept constant. In MCNP criticality calculations, an initial guess for the fission distribution is used and it can contaminate the solution. To avoid this, a sufficient number of initial generations, called inactive cycles, should be discarded. A convergence check can provide a guideline for determining when the active cycles should start. Various entropy functions were tried to check the convergence of the fission distribution. Some entropy functions reflect the convergence behavior of the fission distribution well. Entropy could thus be a powerful method for determining inactive/active cycles in MCNP calculations.
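The entropy-based convergence diagnostic mentioned at the end can be sketched as follows: the Shannon entropy of a binned fission source stabilizes once the source distribution has converged. The bin counts below are illustrative, not the thesis's data:

```python
import numpy as np

def shannon_entropy(source_counts):
    """Shannon entropy H = -sum p_i log2(p_i) of a binned fission source."""
    p = np.asarray(source_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) -> 0 by convention
    return -np.sum(p * np.log2(p))

# a flat source over N bins has the maximum entropy log2(N);
# a source still concentrated near the initial guess has lower entropy
flat = shannon_entropy([25, 25, 25, 25])
peaked = shannon_entropy([97, 1, 1, 1])
```

In practice one plots H for each generation and starts the active cycles only after the curve reaches a stationary value.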
Energy Technology Data Exchange (ETDEWEB)
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied "out-scatter" transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
International Nuclear Information System (INIS)
Petrov, Eh.E.; Fadeev, I.A.
1979-01-01
A possibility to use displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The algorithm proposed is based on the concept of conjugate functions together with a dispersion minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been put into practice on the M-220 computer. The differential gamma current and flux spectra in 21 cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant and isotropic, emitting 4 MeV gamma quanta at a rate of 10^9 quanta/(cm^3·s). The calculations have demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with the ones calculated by the ROZ computer code. Thus the algorithm proposed can be effectively used in calculations of secondary gamma radiation transport, and reduces the computation time by a factor of 2-4.
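The dispersion-minimization idea, sampling from a biased density and compensating with weights so the estimate stays unbiased while its variance shrinks, can be shown in one dimension. The integrand and biasing density below are illustrative assumptions, not the paper's gamma-transport kernels:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
exact = (1.0 - np.exp(-5.0)) / 5.0        # I = integral_0^1 exp(-5x) dx

# analogue sampling: x uniform on [0, 1], score f(x) directly
x = rng.random(n)
analogue = np.exp(-5.0 * x)

# biased sampling: x ~ p(x) = 4 exp(-4x) / (1 - exp(-4)) on [0, 1],
# drawn by inverting the CDF, then scored with weight w = f(x) / p(x)
u = rng.random(n)
xb = -np.log(1.0 - u * (1.0 - np.exp(-4.0))) / 4.0
w = np.exp(-5.0 * xb) * (1.0 - np.exp(-4.0)) / (4.0 * np.exp(-4.0 * xb))
```

Both `analogue.mean()` and `w.mean()` estimate the same integral, but the weighted scores have far smaller sample variance because the biasing density nearly matches the integrand's shape.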
Energy Technology Data Exchange (ETDEWEB)
Betzler, Benjamin R., E-mail: betzlerbr@ornl.gov [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Kiedrowski, Brian C., E-mail: bckiedro@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Brown, Forrest B., E-mail: fbrown@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS A143, Los Alamos, NM 87545 (United States); Martin, William R., E-mail: wrm@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)
2015-12-15
Highlights: • A transition rate matrix method for calculating α-eigenvalues is formulated. • Verification of this method is performed using multigroup infinite-medium problems. • Applications to continuous-energy media examine the slowing down of neutrons. • The effect of the α-eigenvalue spectrum on the short-time flux behavior is discussed. - Abstract: The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. For this, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
Vojtyla, P
2005-01-01
The radiological impact of emissions of radioactive substances from accelerator facilities is characterized by a dominant contribution of the external exposure from short-lived radionuclides in the plume. Ventilation outlets of accelerator facilities are often at low emission heights and receptors reside very close to stacks. Simplified exposure models are not appropriate and integration of the dose kernel over the radioactive plume is required. By using Monte Carlo integration with certain biasing, the integrand can be simplified substantially and an optimum spatial resolution can be achieved. Moreover, long-term releases can be modeled by sampling real weather situations. The mathematical formulation does not depend on any particular atmospheric dispersion model and the applicable code parts can be designed separately, which is another advantage. The obtained results agree within ±10% with results calculated for the semi-infinite cloud model by using detailed particle transport codes and human phantoms.
International Nuclear Information System (INIS)
Przybilla, G.
1980-11-01
The present paper reports on the structure and first results of a new Monte Carlo programme for calculating energy distributions within tissue-equivalent phantoms irradiated by π⁻ beams. Each pion or generated secondary particle is transported until the complete loss of its kinetic energy, taking into account pion processes such as multiple Coulomb scattering, pion reactions in flight and absorption of stopped pions. The code mainly uses data from experiments; physical models have been added only where data are lacking. Depth dose curves for a pencil beam of 170 MeV/c within a water phantom are discussed as a function of various parameters. Isodose contours are plotted resulting from a convolution of an extended beam profile and the dose distribution of a pencil beam. (orig.)
International Nuclear Information System (INIS)
Bourva, L.C.A.; Croft, S.
1999-01-01
The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents a new evaluation technique for the estimation of gate utilisation factors. It uses the die-away profile of a neutron coincidence chamber, generated either by MCNP™ or by other means, to simulate the neutron detection arrival time pattern originating from independent spontaneous fission events. A shift register simulation algorithm, embedded in the MCF code, then calculates the coincidence counts scored within the electronics gate. The gate utilisation factor is deduced by dividing the coincidence counts so obtained by those obtained in the same Monte Carlo run for an ideal detection system with a coincidence gate utilisation factor equal to unity. The MCF code has been benchmarked against analytical results calculated for both single and double exponential die-away profiles. These results are presented along with the development of the closed-form algebraic expressions for the two cases. Results of this validity check showed very good agreement. On this...
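For the single-exponential die-away case mentioned as an analytical benchmark, the gate utilisation factor has the well-known closed form f = exp(-P/τ)·(1 - exp(-G/τ)), where P is the predelay, G the gate width and τ the die-away time. A sketch; the settings below are typical illustrative values, not those of the Sellafield counter:

```python
import math

def gate_utilisation(predelay_us, gate_us, die_away_us):
    """Gate fraction for a single-exponential die-away profile:
    f = exp(-P/tau) * (1 - exp(-G/tau)).
    All times in microseconds."""
    tau = die_away_us
    return math.exp(-predelay_us / tau) * (1.0 - math.exp(-gate_us / tau))

# typical shift-register settings (illustrative): P = 4.5 us, G = 64 us,
# chamber die-away time tau = 50 us
f = gate_utilisation(4.5, 64.0, 50.0)
```

Longer gates recover more of the correlated neutrons (f approaches exp(-P/τ) as G grows), while a longer predelay lowers f, which is exactly the trade-off the shift-register electronics must balance.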
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams.
Vandervoort, Eric J; Tchistiakova, Ekaterina; La Russa, Daniel J; Cygler, Joanna E
2014-02-01
In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm(2). Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (minimum number of measurements that pass a 2%/2 mm agreement 2D gamma index criteria for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3 dimensional 3%/2 mm γ-criteria) provided that the steep dose gradient in the depth direction is considered. Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water are obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Ilic, R D; Stankovic, S J
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculations in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for the application of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtained...
A solution algorithm for calculating photon radiation fields with the aid of the Monte Carlo method
International Nuclear Information System (INIS)
Zappe, D.
1978-04-01
The MCTEST program and its subroutines for the solution of the Boltzmann transport equation are presented. The program makes it possible to calculate photon radiation fields of point or plane gamma sources. After changing two subroutines, the calculation can also be carried out for the case of directed incidence of radiation on plane shields of iron or concrete. (author)
Variational Monte Carlo calculations of lithium atom in strong magnetic field
Energy Technology Data Exchange (ETDEWEB)
Doma, S. B., E-mail: sbdoma@alexu.edu.eg [Alexandria University, Mathematics Department, Faculty of Science (Egypt); Shaker, M. O.; Farag, A. M. [Tanta University, Mathematics Department, Faculty of Science (Egypt); El-Gammal, F. N., E-mail: famta-elzahraa4@yahoo.com [Menofia University, Mathematics Department, Faculty of Science (Egypt)
2017-01-15
The variational Monte Carlo method is applied to investigate the ground state and some excited states of the lithium atom and its ions up to Z = 10 in the presence of an external magnetic field in the regime γ = 0–100 arb. units. The effect of increasing field strength on the ground state energy is studied, and precise values for the crossover field strengths were obtained. Our calculations are based on accurate forms of trial wave functions, which were put forward for calculating energies in the absence of a magnetic field. Furthermore, the value of γ at which the ground-state energy of the lithium atom approaches zero was calculated. The obtained results are in good agreement with the most recent values and also with the exact values.
Monte Carlo-based dose calculation engine for minibeam radiation therapy.
Martínez-Rovira, I; Sempau, J; Prezado, Y
2014-02-01
Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked with experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods made it possible to achieve acceptable computation times for clinical settings (approximately 2 h). The calculation engine can be easily adapted with little or no programming effort to other synchrotron sources or for dose calculations in the presence of contrast agents. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Binding of hydrogen on benzene, coronene, and graphene from quantum Monte Carlo calculations
Ma, Jie; Michaelides, Angelos; Alfè, Dario
2011-04-01
Quantum Monte Carlo calculations with the diffusion Monte Carlo (DMC) method have been used to compute the binding energy curves of hydrogen on benzene, coronene, and graphene. The DMC results on benzene agree with both Møller-Plessett second order perturbation theory (MP2) and coupled cluster with singles, doubles, and perturbative triples [CCSD(T)] calculations, giving an adsorption energy of ˜25 meV. For coronene, DMC agrees well with MP2, giving an adsorption energy of ˜40 meV. For physisorbed hydrogen on graphene, DMC predicts a very small adsorption energy of only 5 ± 5 meV. Density functional theory (DFT) calculations with various exchange-correlation functionals, including van der Waals corrected functionals, predict a wide range of binding energies on all three systems. The present DMC results are a step toward filling the gap in accurate benchmark data on weakly bound systems. These results can help us to understand the performance of current DFT based methods, and may aid in the development of improved approaches.
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
International Nuclear Information System (INIS)
Gomes, I.C.; Stevens, P.N.
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs
International Nuclear Information System (INIS)
Rojas C, E.L.; Varon T, C.F.; Pedraza N, R.
2007-01-01
The treatment of breast cancer at early stages is of vital importance. Accordingly, most investigations are dedicated to the early detection of the disease and its treatment. As a consequence of such investigation and clinical practice, a high dose rate irradiation system known as Mammosite was developed in the U.S.A. in 2002. In this work we carry out dose calculations for a simplified Mammosite system with the Monte Carlo simulation codes PENELOPE and MCNPX, varying the concentration of the contrast material used in it. (Author)
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
International Nuclear Information System (INIS)
Xiao, K; Chen, D. Z; Hu, X. S; Zhou, B
2014-01-01
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
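The essence of steps (1) and (3), buffering (voxel index, dose) pairs and then accumulating them in a single pass, can be mimicked on the CPU side with NumPy. This is a sketch of the idea only; the sizes and data are illustrative assumptions, not the MCCS workload:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_events = 1000, 50_000

# step (1) analogue: deposition events are appended to a buffer as
# (voxel index, dose) pairs instead of being scattered into the volume
idx = rng.integers(0, n_vox, size=n_events)   # random write pattern
dep = rng.random(n_events)

# step (3) analogue: the dose volume is built from the buffer in one
# accumulation pass; np.add.at handles repeated indices correctly,
# playing the role that atomic adds would play on the GPU
dose = np.zeros(n_vox)
np.add.at(dose, idx, dep)
```

Deferring the scatter-add to a dedicated pass is what removes the un-coalesced writes and atomics from the GPU kernel in the proposed scheme.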
Simultaneous global calculation of flux and importance with forward Monte Carlo
International Nuclear Information System (INIS)
Deutsch, O.L.; Carter, L.L.
1977-01-01
A procedure is described for obtaining flux and importance globally in one Monte Carlo calculation at small to moderate incremental cost in terms of the time required to process a fixed number of particle histories. The application of this procedure and analysis of results are illustrated for a prototypical controlled thermonuclear reactor (CTR) streaming problem with coolant pipe penetrations through a concrete magnet shield. Our experience indicates that the availability of global information about both flux and importance can help to generate intuition in multidimensional shielding problems and can be of significant value during the early phase of shield design
On line CALDoseX: real time Monte Carlo calculation via Internet for dosimetry in radiodiagnostic
International Nuclear Information System (INIS)
Kramer, Richard; Cassola, Vagner Ferreira; Lira, Carlos Alberto Brayner de Oliveira; Khoury, Helen Jamil; Cavalcanti, Arthur; Lins, Rafael Dueire
2011-01-01
CALDose X 4.1 is a software tool which uses the MASH and FASH phantoms. Patient dosimetry with reference phantoms is limited because the results can be applied only to patients who have the same body mass and height as the reference phantom. In this paper, patient dosimetry for diagnostic X-ray examinations was extended by using a series of 18 phantoms of defined gender and different body masses and heights, in order to cover the real anatomy of patients. It is possible to calculate absorbed doses in organs and tissues by real-time Monte Carlo dosimetry via the Internet, through a dosimetric service called CALDose X online.
Damage flux analysis. Solid state detector and Monte-Carlo calculation
International Nuclear Information System (INIS)
Genthon, J.P.; Nimal, J.C.; Vergnaud, T.
1975-09-01
The change of resistivity induced by radiation in materials is particularly suitable for the measurement of equivalent damage fluxes when it is used at low fluence for calibration of the more classical activation reactions used at high fluences. A graphite and a tungsten detector are briefly described, and results obtained in a good number of European reactors are given. The polykinetic three-dimensional Monte Carlo code Tripoli is used for the calculation of damage fluxes. Comparison with the above measurements shows good agreement and confirms the use of the EURATOM damaging function for graphite
Monte Carlo calculations on a parallel computer using MORSE-C.G
International Nuclear Information System (INIS)
Wood, J.
1995-01-01
The general purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel transputer-based computing system having MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely, fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with those from the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors increases is investigated. (author)
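The efficiency loss studied here is commonly summarized by Amdahl's law, S(p) = 1 / ((1 - f) + f/p), where f is the parallelizable fraction of the work and p the number of processors. This is a generic sketch of that relationship, not the code-specific model used in the paper:

```python
def amdahl_speedup(parallel_fraction, n_proc):
    """Amdahl's law: speedup achievable on n_proc processors when a
    fraction parallel_fraction of the work parallelizes perfectly."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / n_proc)

def efficiency(parallel_fraction, n_proc):
    """Parallel efficiency = speedup / processor count; decays toward
    zero as n_proc grows whenever parallel_fraction < 1."""
    return amdahl_speedup(parallel_fraction, n_proc) / n_proc
```

Even a 95%-parallel workload saturates quickly: 16 processors yield a speedup of only about 9, i.e. under 60% efficiency, which is the kind of falloff the graphed results quantify.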
Characterizing a Proton Beam Scanning System for Monte Carlo Dose Calculation in Patients
Grassberger, C; Lomax, Tony; Paganetti, H
2015-01-01
The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head-and-neck and lung) to give an example of how this approach can be employed to investigate site-specific discrepancies between the treatment planning system and Monte Carlo simulations. PMID:25549079
Fix, Michael K; Cygler, Joanna; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J; Manser, Peter
2013-05-07
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively, an edge source of electrons, a transmission source of photons and a line source of electrons and photons representing the particles from the scrapers or inserts and head scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code of the secondary particles is improved. The macro MC dose calculations are validated with corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, a general agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
MCNP: a general Monte Carlo code for neutron and photon transport
International Nuclear Information System (INIS)
1979-11-01
The general-purpose Monte Carlo code MCNP can be used for neutron, photon, or coupled neutron-photon transport, including the capability to calculate eigenvalues for critical systems. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces (elliptical tori). Pointwise cross-section data are used. For neutrons, all reactions given in a particular cross-section evaluation are accounted for. Thermal neutrons are described by both the free-gas and S(α,β) models. For photons, the code takes account of incoherent and coherent scattering, the possibility of fluorescent emission following photoelectric absorption, and absorption in pair production with local emission of annihilation radiation. MCNP includes an elaborate, interactive plotting capability that allows the user to view his input geometry to help check for setup errors. Standard features which are available to improve computational efficiency include geometry splitting and Russian roulette, weight cutoff with Russian roulette, correlated sampling, analog capture or capture by weight reduction, the exponential transformation, energy splitting, forced collisions in designated cells, flux estimates at point or ring detectors, deterministically transporting pseudo-particles to designated regions, track-length estimators, source biasing, and several parameter cutoffs. Extensive summary information is provided to help the user better understand the physics and Monte Carlo simulation of his problem. The standard, user-defined output of MCNP includes two-way current as a function of direction across any set of surfaces or surface segments in the problem. Flux across any set of surfaces or surface segments is available. 58 figures, 28 tables
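Two of the variance-reduction features listed above, geometry splitting and Russian roulette, reduce to a few lines of weight bookkeeping. The sketch below is a generic, illustrative implementation (the function names and the cutoff/survival weights are made up, not MCNP's actual parameters); the key property is that both games leave the expected weight unchanged:

```python
import random

def russian_roulette(weight, w_cutoff=0.1, w_survive=0.5, rng=random.random):
    """Terminate low-weight particles fairly: a survivor's weight is
    boosted so that the expected weight equals the input weight."""
    if weight >= w_cutoff:
        return weight          # above the cutoff: no game is played
    if rng() < weight / w_survive:
        return w_survive       # survives with boosted weight
    return 0.0                 # killed

def split(weight, n=2):
    """Geometry splitting: replace one particle of weight w with
    n particles of weight w/n (unbiased; reduces variance in
    important regions by increasing the sample population there)."""
    return [weight / n] * n
```

Averaged over many games, the surviving weight equals the input weight, so tallies remain unbiased while the population of unimportant low-weight histories is thinned.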
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
International Nuclear Information System (INIS)
Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.
2010-01-01
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Radaev, A. I.; Schurovskaya, M. V.
2015-12-01
The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN type, using a program based on a Monte Carlo code, is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and on the calculation time is investigated.
International Nuclear Information System (INIS)
Kim, Hyeong Heon
2000-02-01
The equivalence theorem providing a relation between a homogeneous and a heterogeneous medium has been used in resonance calculations for heterogeneous systems. The accuracy of a resonance calculation based on the equivalence theorem depends on how accurately the fuel collision probability is expressed by the rational terms. The fuel collision probability is related to the Dancoff factor in closely packed lattices. The calculation of the Dancoff factor is one of the most difficult problems in core analysis because the actual configuration of fuel elements in the lattice is very complex. Most reactor physics codes currently used are based on a roughly calculated black Dancoff factor, where the total cross section of the fuel is assumed to be infinite. Even the black Dancoff factors have not been calculated accurately, though many methods have been proposed. The equivalence theorem based on the black Dancoff factor inevitably causes some errors due to the approximations involved in the Dancoff factor calculation and in the derivation of the fuel collision probability, but these had not been evaluated seriously before. In this study, a Monte Carlo program - G-DANCOFF - was developed to calculate not only the traditional black Dancoff factor but also the grey Dancoff factor, where the medium is described realistically. G-DANCOFF calculates the Dancoff factor based on its collision-probability definition for an arbitrary arrangement of cylindrical fuel pins in full three-dimensional fashion. G-DANCOFF was verified by comparing black Dancoff factors calculated for geometries where accurate solutions are available. With 100,000 neutron histories, the results calculated by G-DANCOFF agreed with previous results to within a maximum of 1%, and in most cases to within 0.2%. G-DANCOFF also provides graphical information on particle tracks, which makes it possible to calculate the Dancoff factor independently. The effects of the Dancoff factor on the criticality calculation
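The collision-probability definition used by G-DANCOFF can be illustrated with a deliberately stripped-down toy: two parallel pins in two dimensions, a vacuum (collisionless) moderator, and black fuel, so the Dancoff factor is just the fraction of neutrons leaving one pin surface with a cosine-weighted angular distribution that would enter the neighbouring pin. This sketches the definition only, not the three-dimensional grey-factor capability of the paper:

```python
import math, random

def dancoff_2d(radius, pitch, n=200_000, seed=1):
    """Toy MC estimate of a two-pin, vacuum-moderator ('black')
    Dancoff factor in 2D: the fraction of neutrons leaving pin 1
    (centered at the origin) that would strike pin 2 (centered
    at (pitch, 0)) without an intervening collision."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)        # emission point on pin 1
        ox, oy = radius * math.cos(phi), radius * math.sin(phi)
        theta = math.asin(2.0 * rng.random() - 1.0)  # cosine-weighted about normal
        ang = phi + theta                            # lab-frame direction
        dx, dy = math.cos(ang), math.sin(ang)
        # ray-circle intersection with pin 2: |o + t*d - c|^2 = r^2
        fx, fy = ox - pitch, oy
        b = fx * dx + fy * dy
        disc = b * b - (fx * fx + fy * fy - radius * radius)
        if disc >= 0.0 and -b - math.sqrt(disc) > 0.0:
            hits += 1
    return hits / n
```

Moving the pins apart shrinks the shadowing, so the estimate drops toward zero, as the definition requires.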
International Nuclear Information System (INIS)
Esnaashari, K. N.; Allahverdi, M.; Gharaati, H.; Shahriari, M.
2007-01-01
Stereotactic radiosurgery is an important clinical tool for the treatment of small lesions in the brain, including benign conditions and malignant and localized metastatic tumors. A dosimetry study was performed for the Elekta Synergy S as a dedicated stereotactic radiosurgery unit, capable of generating circular radiation fields with diameters of 1-5 cm at isocentre, using the BEAM/EGS4 Monte Carlo code. Materials and Methods: The linear accelerator Elekta Synergy S, equipped with a set of 5 circular collimators from 10 mm to 50 mm in diameter at isocentre distance, was used. The cones were inserted in a base plate mounted on the linac collimator head. A PinPoint chamber and a Wellhofer water tank chamber were selected for clinical dosimetry of the 6 MV photon beams. The results of simulations using the Monte Carlo system BEAM/EGS4 to model the beam geometry were compared with dose measurements. Results: Excellent agreement was found between Monte Carlo calculated and measured percentage depth doses and lateral dose profiles, which were acquired in a water phantom for circular cones of 1, 2, 3, 4 and 5 cm in diameter. The comparison between calculations and measurements showed at most 0.5% or 1 mm difference for all field sizes. The penumbra (80-20%) at 5 cm depth in the water phantom and SSD = 95 ranged from 1.5 to 2.1 mm for circular collimators with diameters of 1 to 5 cm. Conclusion: This study showed that the BEAMnrc code is accurate in modeling the Synergy S linear accelerator equipped with circular collimators
International Nuclear Information System (INIS)
Procassini, R J; Beck, B R
2004-01-01
It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem, in which the quality of the RNG (period of the LCG) is varied, provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results
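The quantity being varied in that study, the repetition period of a linear congruential generator, is easy to make concrete. Below is a minimal LCG and a brute-force period count; the (a, c, m) values are textbook examples chosen for illustration, not the CNPRNG parameters:

```python
class LCG:
    """Minimal linear congruential generator x' = (a*x + c) mod m.
    Defaults are the classic 'minstd' choice (period m - 1); a poor
    choice of a, c, m shortens the period and can bias tallies."""
    def __init__(self, seed=1, a=16807, c=0, m=2**31 - 1):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m   # uniform float in (0, 1)

def period(a, c, m, seed=1):
    """Brute-force repetition period of a (small) LCG; assumes the
    seed lies on the generator's cycle."""
    x0 = x = (a * seed + c) % m
    n = 1
    x = (a * x + c) % m
    while x != x0:
        x = (a * x + c) % m
        n += 1
    return n
```

With (a, c, m) = (5, 3, 16) every state is visited (full period 16), while (3, 0, 16) cycles after only 4 states; the latter is exactly the kind of short-period generator whose effect on tallies the paper measures.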
New electron multiple scattering distributions for Monte Carlo transport simulation
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))
1994-10-01
New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use within the conditions of validity of the Moliere theory. The second applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas: the introduction of some parameters allows imposition of the correct value of the first moment. Only positive and analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented and some Monte Carlo simulations, as benchmarks, are carried out. ((orig.))
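The practical payoff of restricting the fits to positive analytic functions is that the inverse-CDF sampling step becomes a closed-form expression. The paper's actual distributions are not reproduced here; as a stand-in, this sketch samples a truncated-exponential angular pdf, for which the same one-line inversion applies:

```python
import math, random

def sample_angle(theta0=0.3, rng=random.random):
    """Inverse-CDF sample from a truncated exponential pdf
    f(theta) ~ exp(-theta/theta0) on [0, pi]: a stand-in for an
    analytic, strictly positive small-angle scattering distribution
    (illustrative only, not the paper's actual formula)."""
    norm = 1.0 - math.exp(-math.pi / theta0)   # CDF value at theta = pi
    return -theta0 * math.log(1.0 - rng() * norm)
```

For theta0 much smaller than pi the truncation is negligible and the sample mean approaches theta0, which gives a quick sanity check on the inversion.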
Monte Carlo calculations for doses in organs and tissues to oral radiography
International Nuclear Information System (INIS)
Sampaio, E.V.M.
1985-01-01
Using the MIRD 5 phantom and the Monte Carlo technique, organ doses in patients undergoing external dental examination were calculated, taking into account the different x-ray beam geometries and the various possible positions of the x-ray source with regard to the head of the patient. It was necessary to introduce into the original computer program a new source description specific for dental examinations. To have a realistic evaluation of organ doses during dental examination it was necessary to introduce a new region in the phantom head which characterizes the teeth and salivary glands. The attenuation of the x-ray beam by the lead shield of the radiographic film was also introduced in the calculation. (author)
International Nuclear Information System (INIS)
Levitan, Iu.L.; Sobol, I.M.; Khlopov, M.Iu.; Chechetkin, V.M.
1982-01-01
The variation of the hard part of the neutrino emission spectra of collapsing degenerate stellar cores with matter having a small optical depth to neutrinos is analyzed. The interaction of neutrinos with the degenerate matter is determined by processes of neutrino scattering on nuclei (without a change in neutrino energy) and neutrino scattering on degenerate electrons, in which the neutrino energy can only decrease. The neutrino emission spectrum of a collapsing stellar core is calculated by the Monte Carlo method for a central density of 10 trillion g/cm³ in the initial stage of the onset of opacity and for a central density of 60 trillion g/cm³ in the stage of deep collapse. In the latter case the calculation of the spectrum without allowance for effects of neutrino degeneracy in the central part of the collapsing stellar core corresponds to the maximum possible suppression of the hard part of the neutrino emission spectrum
Yang, Jing; Youssef, Mostafa; Yildiz, Bilge
2018-01-01
In this work, we quantify oxygen self-diffusion in monoclinic-phase zirconium oxide as a function of temperature and oxygen partial pressure. The migration barrier of each type of oxygen defect was obtained by first-principles calculations. Random walk theory was used to quantify the diffusivities of oxygen interstitials using the calculated migration barriers. Kinetic Monte Carlo simulations were used to calculate diffusivities of oxygen vacancies by distinguishing the threefold- and fourfold-coordinated lattice oxygen. By combining the equilibrium defect concentrations obtained in our previous work with the herein calculated diffusivity of each defect species, we present the resulting oxygen self-diffusion coefficients and the corresponding atomistically resolved transport mechanisms. The predicted effective migration barriers and diffusion prefactors are in reasonable agreement with the experimentally reported values. This work provides insights into oxygen diffusion engineering in ZrO2-related devices and parametrization for continuum transport modeling.
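The random-walk step of such a calculation can be sketched in a few lines: hop at an Arrhenius rate over the first-principles barrier, accumulate the mean-square displacement, and apply the Einstein relation D = ⟨r²⟩/(6t). The lattice constant, attempt frequency, and simple-cubic topology below are illustrative assumptions, not the monoclinic ZrO2 inputs of the paper:

```python
import math, random

def diffusivity_mc(barrier_eV, T, a=3.2e-10, nu=1e13,
                   n_walkers=2000, n_hops=200, seed=2):
    """Estimate a tracer diffusivity (m^2/s) on a simple cubic lattice
    from the mean-square displacement, D = <r^2> / (6 t). The lattice
    constant a and attempt frequency nu are illustrative, not the
    paper's monoclinic ZrO2 values."""
    kB = 8.617333e-5                                # Boltzmann constant, eV/K
    rate = nu * math.exp(-barrier_eV / (kB * T))    # Arrhenius hop rate, 1/s
    t = n_hops / rate                               # elapsed time after n_hops
    rng = random.Random(seed)
    moves = [(a, 0, 0), (-a, 0, 0), (0, a, 0), (0, -a, 0), (0, 0, a), (0, 0, -a)]
    msd = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_hops):
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
        msd += x * x + y * y + z * z
    msd /= n_walkers
    return msd / (6.0 * t)
```

On this uncorrelated lattice the estimate should reproduce the analytic result D = Γa²/6, which serves as a built-in consistency check before adding the correlated, site-resolved hops treated in the paper.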
Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual
International Nuclear Information System (INIS)
Vergnaud, Th.; Nimal, J.C.; Chiron, M.
2001-01-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems with a high flux attenuation between the source zone and the result zone (studies of shielding configurations or of source-driven sub-critical systems, with fission being taken into account), as well as problems with a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual also exists. (authors)
International Nuclear Information System (INIS)
Wang Guozhong; Zhang Junjun; Xiong Jian
2010-01-01
MCAM (Monte Carlo Automatic Modeling program for particle transport simulation) was developed by FDS Team as a CAD based bi-directional interface program between general CAD systems and Monte Carlo particle transport simulation codes. The physics and material modeling and void space modeling functions were improved and the free form surfaces processing function was developed recently. The applications to the ITER (International Thermonuclear Experimental Reactor) building model and FFHR (Force Free Helical Reactor) model have demonstrated the feasibility, effectiveness and maturity of MCAM latest version for nuclear applications with complex geometry. (author)
A user's manual for the three-dimensional Monte Carlo transport code SPARTAN
International Nuclear Information System (INIS)
Bending, R.C.; Heffer, P.J.H.
1975-09-01
SPARTAN is a general-purpose Monte Carlo particle transport code intended for neutron or gamma transport problems in reactor physics, health physics, shielding, and safety studies. The code uses a very general geometry system enabling a complex layout to be described, and allows the user to obtain physics data from a number of different types of source library. Special tracking and scoring techniques are used to improve the quality of the results obtained. To enable users to run SPARTAN, brief descriptions of the facilities available in the code are given, and full details of data input and job control language, as well as examples of complete calculations, are included. It is anticipated that changes may be made to SPARTAN from time to time, particularly in those parts of the code which deal with physics data processing. The load module is identified by a version number and implementation date, and updates of sections of this manual will be issued when significant changes are made to the code. (author)
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
International Nuclear Information System (INIS)
Chi, Y; Tian, Z; Jiang, S; Jia, X
2015-01-01
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only for voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU’s shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package on an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
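The surface-sign convention behind such a quadric module can be sketched on the CPU side as follows. The body (a capped cylinder), the coefficient ordering, and the inside-if-negative convention are illustrative assumptions, not the authors' GPU data layout or tree search:

```python
def quadric(coeff):
    """Return f(p) for a general quadric surface
    f = Ax^2 + By^2 + Cz^2 + Dxy + Eyz + Fzx + Gx + Hy + Iz + J;
    the sign of f(p) tells which side of the surface p lies on."""
    A, B, C, D, E, F, G, H, I, J = coeff
    def f(p):
        x, y, z = p
        return (A*x*x + B*y*y + C*z*z + D*x*y + E*y*z + F*z*x
                + G*x + H*y + I*z + J)
    return f

def inside(body, p):
    """A body is a list of (surface_fn, required_sign); a point is
    inside when every surface-sign condition is satisfied."""
    return all((s(p) < 0) == (sign < 0) for s, sign in body)

# Example body: a cylinder of radius 1 about the z-axis, capped by
# the planes z = 0 and z = 2 (all surfaces with 'negative = inside').
cyl  = quadric((1, 1, 0, 0, 0, 0, 0, 0, 0, -1))   # x^2 + y^2 - 1
z_lo = quadric((0, 0, 0, 0, 0, 0, 0, 0, -1, 0))   # -z  (< 0 means z > 0)
z_hi = quadric((0, 0, 0, 0, 0, 0, 0, 0, 1, -2))   # z - 2
body = [(cyl, -1), (z_lo, -1), (z_hi, -1)]
```

A particle's containing body is then found by testing candidate bodies (in the paper, pruned via the tree structure) until one body's full set of sign conditions is satisfied.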
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
Monte Carlo calculations of the depth-dose distribution in skin contaminated by hot particles
Energy Technology Data Exchange (ETDEWEB)
Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1991-01-01
Accurate computer programs were developed in order to calculate the spatial distribution of absorbed radiation doses in the skin, near high activity particles (''hot particles''). With a view to ascertaining the reliability of the codes the transport of beta particles was simulated in a complex configuration used for dosimetric measurements: spherical {sup 60}Co sources of 10-1000 {mu}m fastened to an aluminium support with a tissue-equivalent adhesive overlaid with 10 {mu}m thick aluminium foil. Behind it an infinite polystyrene medium including an extrapolation chamber was assumed. The exact energy spectrum of beta emission was sampled. Production and transport of secondary knock-on electrons were also simulated. Energy depositions in polystyrene were calculated with a high spatial resolution. Finally, depth-dose distributions were calculated for hot particles placed on the skin. The calculations will be continued for other radionuclides and for a configuration suited to TLD measurements. (author).
International Nuclear Information System (INIS)
Mairani, A; Brons, S; Parodi, K; Cerutti, F; Ferrari, A; Sommerer, F; Fasso, A; Kraemer, M; Scholz, M
2010-01-01
Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum für Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed ¹²C ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data on CHO (Chinese hamster ovary) cell survival and with the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-dose distributions in water used as basic input data in TRiP98 and the FLUKA-recalculated ones. On the other hand, taking into account the differences in the physical beam modeling, the FLUKA-based biological calculations of the CHO cell survival profiles are found to be in good agreement with the experimental data as well as with the TRiP98 predictions. The developed approach, which combines the MC transport/interaction capability with the same biological model as in the treatment planning system (TPS), will be used at HIT to support validation/improvement of both dose and RBE-weighted dose calculations performed by the analytical TPS.
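The two quantities the abstract distinguishes are easy to state in code. RBE-weighted dose is the product of absorbed dose and RBE; the RBE itself, at a given effect level, is the ratio of the photon dose to the ion dose producing the same effect. The sketch below uses the generic linear-quadratic survival model with made-up coefficients, not the LEM tables used at GSI/HIT:

```python
import math

def rbe_iso_effect(d_ion, alpha_i, beta_i, alpha_x, beta_x):
    """RBE at iso-effect in the linear-quadratic model: the photon
    dose giving the same effect E = a_i*D + b_i*D^2, divided by the
    ion dose. All coefficients here are hypothetical inputs, not
    LEM-derived values."""
    effect = alpha_i * d_ion + beta_i * d_ion**2
    # solve beta_x*D^2 + alpha_x*D - effect = 0 for the photon dose
    d_photon = (-alpha_x + math.sqrt(alpha_x**2 + 4.0 * beta_x * effect)) / (2.0 * beta_x)
    return d_photon / d_ion

def rbe_weighted_dose(d_ion, rbe):
    """RBE-weighted dose (Gy(RBE)) = absorbed dose x RBE."""
    return d_ion * rbe
```

With the hypothetical coefficients used in the check below, a 2 Gy ion dose is iso-effective to 4 Gy of photons, giving RBE = 2 and an RBE-weighted dose of 4 Gy(RBE).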
Cullen, D
2000-01-01
TART2000 is a coupled neutron-photon, three-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
International Nuclear Information System (INIS)
Homma, Y.; Moriwaki, H.; Ikeda, K.; Ohdi, S.
2013-01-01
This paper deals with the verification of the three-dimensional triangular-prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and the end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity. (authors)
International Nuclear Information System (INIS)
Bahreyni Toossi, M.T.; Hashemi, S.M.; Momen Nezhad, M.
2008-01-01
In recent decades, cancer has been one of the main and ever-increasing causes of death in developed countries. Different techniques have been used to address this problem, one of which is the Monte Carlo simulation technique; its high accuracy has been one of the main reasons for its widespread application. In this study, the MCNP-4C code was employed to simulate the electron mode of the Neptun 10 PC linac, and dosimetric quantities for conventional fields were both measured and calculated. Although the Neptun 10 PC linac is no longer licensed for installation in European and some other countries, regrettably nearly 10 of them have been installed in different centers around the country and are in operation. Under these circumstances, Monte Carlo simulation of the Neptun 10 PC was recognized as a necessity for improving the accuracy of treatment planning. Simulated and measured values of depth-dose curves and off-axis dose distributions were obtained for 6, 8 and 10 MeV electrons applied to four field sizes: 6 × 6 cm², 10 × 10 cm², 15 × 15 cm² and 20 × 20 cm². The measurements were carried out with a Welhofer-Scanditronix dose scanning system, a semiconductor detector and an ionization chamber. The results of this study reveal that the two main dosimetric quantities, depth-dose curves and off-axis dose distributions, acquired by MCNP-4C simulation are in very good agreement with the corresponding directly measured values (within 1% to 2% difference). This consistency confirms that the goal of this work has been accomplished; in other words, where measurements of some parameters are not practically achievable, MCNP-4C simulation can be implemented confidently. (author)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widely deployed and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotrons for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and compared with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of ¹⁸F, produced by the well-known ¹⁸O(p,n)¹⁸F reaction, was calculated and compared with the IAEA recommended
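The saturation-yield comparison at the end rests on the standard activation relation: during a constant-rate irradiation, the activity approaches its saturation value as 1 - exp(-λt). A minimal helper (the 109.77 min half-life used in the check is the standard ¹⁸F value; everything else is generic):

```python
import math

def activity_fraction(t_irr_min, half_life_min):
    """Fraction of the saturation activity reached after an
    irradiation of length t_irr, at constant production rate:
    1 - exp(-lambda * t), with lambda = ln(2) / T_half."""
    lam = math.log(2.0) / half_life_min
    return 1.0 - math.exp(-lam * t_irr_min)
```

Irradiating for one half-life yields 50% of saturation and two half-lives yield 75%, which is why short-lived PET nuclides are typically produced in runs of only a few half-lives.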
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
Forthcoming: European Journal of Transport and Infrastructure Research, 15-3, 64-72. The last paper examined uncertainty in the spatial composition of residence and workplace locations in the Danish National Transport Model. Despite the evidence that spatial structure influences travel behaviour … to increase the quality of the decision process and to develop robust or adaptive plans. In fact, project evaluation processes that do not take into account model uncertainty produce not fully informative and potentially misleading results, so increasing the risk inherent to the decision to be taken …
Microscopic calculation of level densities: the shell model Monte Carlo approach
International Nuclear Information System (INIS)
Alhassid, Yoram
2012-01-01
The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10²⁹. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of ¹⁶²Dy and found it to agree well with experiments
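Point (v) hinges on a bookkeeping distinction worth making explicit: a state density counts every magnetic substate (2J+1 per level), while a level density counts each level once. Given hypothetical spin-resolved densities rho(E, J) at one excitation energy:

```python
def state_density(rho_J):
    """State density from spin-resolved level densities rho(E, J):
    each level of spin J contributes 2J+1 magnetic substates."""
    return sum((2 * J + 1) * r for J, r in rho_J.items())

def level_density(rho_J):
    """Level density: count each level once, without its
    spin degeneracy."""
    return sum(rho_J.values())
```

The spin projection mentioned in the abstract supplies exactly the rho(E, J) decomposition that lets SMMC report either quantity.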
LDRD Final Review: Radiation Transport Calculations
Energy Technology Data Exchange (ETDEWEB)
Goorley, John Timothy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morgan, George Lake [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-22
Both high-fidelity and toy simulations are being used to understand measured signals and improve the Area 11 NDSE diagnostic. We continue to gain confidence in the ability of MCNP to simulate neutron and photon transport from source to radiation detector.
Standard deviation of local tallies in global Monte Carlo calculation of nuclear reactor core
International Nuclear Information System (INIS)
Ueki, Taro
2010-01-01
Time series methodology has been studied to assess the feasibility of statistical error estimation in continuous space and energy Monte Carlo calculation of the three-dimensional whole reactor core. The noise propagation was examined, and the fluctuation of track-length tallies for local fission rate and power was formally shown to be represented by the autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. Therefore, ARMA(p,p-1) fitting was applied to estimate the real standard deviation of the power of fuel assemblies at particular heights. Numerical results indicate that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before it can be incorporated into the distributed versions of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method with a batch size larger than 100 and smaller than 200 cycles for a 1,100 MWe pressurized water reactor. (author)
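The batch method that ARMA(3,2) fitting is benchmarked against can be sketched in a few lines; the AR(1) tally series, correlation coefficient, and batch size below are illustrative stand-ins for real cycle-wise tallies, not values from the paper:

```python
# Hedged sketch of the batch method: correlated cycle tallies are grouped
# into batches, and the spread of batch means yields a standard deviation
# that accounts for cycle-to-cycle autocorrelation.
import numpy as np

rng = np.random.default_rng(1)
n_cycles, batch_size = 20000, 100
phi = 0.6                                  # illustrative cycle correlation
tally = np.empty(n_cycles)
tally[0] = rng.normal()
for i in range(1, n_cycles):               # AR(1) surrogate for a fission-rate tally
    tally[i] = phi * tally[i - 1] + rng.normal()

# Naive estimate treats cycles as independent -> too small when phi > 0
naive_std = tally.std(ddof=1) / np.sqrt(n_cycles)

# Batch method: standard deviation of the batch means
batches = tally.reshape(-1, batch_size).mean(axis=1)
batch_std = batches.std(ddof=1) / np.sqrt(len(batches))

print(naive_std, batch_std)   # batch_std is larger, i.e. the honest estimate
```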
Energy Technology Data Exchange (ETDEWEB)
Serena, P. A. [Instituto de Ciencias de Materiales de Madrid, Madrid (Spain); Costa-Kraemer, J. L. [Instituto de Microelectronica de Madrid, Madrid (Spain)
2001-03-01
A Monte Carlo algorithm suitable for studying systems described by an anisotropic Heisenberg Hamiltonian is presented. This technique has been tested successfully on 3D and 2D systems, illustrating how magnetic properties depend on the dimensionality and the coordination number. We have found that the magnetic properties of constrictions differ from those appearing in bulk. In particular, spin fluctuations are considerably larger than those calculated for bulk materials. In addition, domain walls are strongly modified when a constriction is present, with a decrease of the domain-wall width. This decrease is explained in terms of previous theoretical works.
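A single-site Metropolis update for an anisotropic Heisenberg Hamiltonian of the generic form H = -J Σ S_i·S_j - D Σ (S_i^z)² might look as follows; the lattice size, couplings, and temperature are assumptions for illustration, not parameters from the paper:

```python
# Hedged sketch of Metropolis Monte Carlo for an anisotropic Heisenberg
# model on a 2D periodic lattice. All numeric parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, J, D, T = 8, 1.0, 0.2, 1.5   # lattice side, exchange, anisotropy, temperature

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

spins = random_unit_vectors(L * L).reshape(L, L, 3)

def site_energy(s, i, j):
    # nearest neighbours with periodic boundary conditions
    nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    return -J * np.dot(s, nn) - D * s[2] ** 2   # uniaxial anisotropy term

def metropolis_sweep():
    accepted = 0
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        trial = random_unit_vectors(1)[0]
        dE = site_energy(trial, i, j) - site_energy(spins[i, j], i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = trial
            accepted += 1
    return accepted / (L * L)

for _ in range(50):
    rate = metropolis_sweep()
magnetization = np.linalg.norm(spins.mean(axis=(0, 1)))
print(rate, magnetization)
```

Spin fluctuations near a constriction could be probed with the same update rule by restricting the averages to the constricted sites.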
Application of a Monte Carlo linac model in routine verifications of dose calculations
International Nuclear Information System (INIS)
Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.
2015-01-01
The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. The optimal beam parameter values (energy and FWHM) obtained previously were used as reference for the simulations. Dose calculations in water phantoms were carried out for the typical complex geometries commonly used in acceptance and quality-control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from the calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distributions calculated with EGSnrc against the dose distributions obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All the results showed good agreement with measurements, all of them falling within tolerance limits. These results support the use of the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)
Criticality Analysis of TCA Critical Lattices with MCNP-4C Monte Carlo Calculation
International Nuclear Information System (INIS)
Zuhair
2002-01-01
The use of uranium-plutonium mixed-oxide (MOX) fuel in electricity-generating light water reactors (PWR, BWR) is being planned in Japan. Therefore, accuracy evaluations of neutronic analysis codes for MOX cores have been performed by many scientists and reactor physicists. Benchmark evaluations for the TCA were done using various calculation methods. The Monte Carlo method has become the most reliable method for predicting the criticality of various reactor types. In this analysis, the MCNP-4C code was chosen because of its various advantages. All in all, the MCNP-4C calculation for the TCA core with 38 MOX critical lattice configurations gave results with high accuracy. The JENDL-3.2 library showed results significantly closer to those of ENDF/B-V. The k-eff values calculated with the ENDF/B-VI library were underestimated. The ENDF/B-V library gave the best estimation. It can be concluded that the MCNP-4C calculation, especially with the ENDF/B-V and JENDL-3.2 libraries, is the best choice for the core design of nuclear power plants utilizing MOX fuel
Monte Carlo calculation of the energy response characteristics of a RadFET radiation detector
Belicev, P.; Spasic Jokic, V.; Mayer, S.; Milosevic, M.; Ilic, R.; Pesic, M.
2010-07-01
The metal-oxide semiconductor field-effect transistor (MOSFET, RadFET) is frequently used as a sensor of ionizing radiation in nuclear medicine, diagnostic radiology, radiotherapy quality assurance, and in the nuclear and space industries. We focused our investigations on calculating the energy response of a p-type RadFET to low-energy photons in the range from 12 keV to 2 MeV and on understanding the influence of uncertainties in the composition and geometry of the device on the calculated energy response function. All results were normalized to unit air kerma incident on the RadFET at an incident photon energy of 1.1 MeV. The calculations of the energy response characteristics of the RadFET radiation detector were performed via Monte Carlo simulations using the MCNPX code; for a limited number of incident photon energies the FOTELP code was also used for the sake of comparison. The geometry of the RadFET was modeled as a simple stack of the appropriate materials. Our goal was to obtain results with statistical uncertainties better than 1% (fulfilled in the MCNPX calculations for all incident energies), which required simulations with 1-2×10^9 histories.
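The 1% statistical-uncertainty goal follows the usual 1/√N scaling of Monte Carlo tally errors, which lets one extrapolate the required number of histories from a short pilot run. A minimal sketch (the pilot-run numbers are invented):

```python
# Hedged sketch: Monte Carlo relative error scales as 1/sqrt(N), so the
# histories needed for a target precision follow from a pilot run.
def histories_needed(rel_err_now, n_now, rel_err_target):
    """Extrapolate N for a target relative error, assuming 1/sqrt(N) scaling."""
    return n_now * (rel_err_now / rel_err_target) ** 2

# e.g. a pilot run of 1e7 histories giving 4% error needs ~1.6e8 for 1%
n = histories_needed(0.04, 1e7, 0.01)
print(n)
```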
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, proposed by the authors, is a more practical method, and it has become clear that it can solve the problem with sufficiently good accuracy. As for the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis; therefore, a projection operator is used to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies in the resulting basis are evaluated, with basis states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of the {sup 50}Mn nucleus. The level structure of {sup 48}Cr, for which exact energies are known, can be calculated with an absolute energy accuracy within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell model calculations of the {sup 56}Ni nuclear structure using the interactions of nuclear models are reported. (K.I.)
Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation
International Nuclear Information System (INIS)
Razaie Rayeni Nejad, M. R.
2012-01-01
CR-39 detectors are widely used for measurement of radon and its progeny in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and its progeny in water is investigated. Assuming random positions and emission angles for the alpha particles emitted by radon and its progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable depth of chemical etching of CR-39 in air and in water were calculated. In this simulation, a range of data was obtained from the SRIM2008 software. The calibration factor of CR-39 in water is calculated as 6.6 (kBq·d/m³)/(track/cm²), which corresponds to the EPA standard level of radon concentration in water (10-11 kBq/m³). Replacing the CR-39 with skin, the volume affected by radon and its progeny was determined to be 2.51 mm³ per m² of skin area. The annual dose conversion factor for radon and its progeny was calculated to be between 8.8 and 58.8 nSv/(Bq·h/m³). Using CR-39 for radon measurement in water can be beneficial.
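Applying a calibration factor of this kind is a one-line conversion from track density and exposure time to a radon concentration; the measured track density and exposure below are hypothetical:

```python
# Hedged sketch: converting a CR-39 track density and exposure time to a
# radon concentration using a calibration factor in (kBq.d/m^3)/(track/cm^2),
# the unit quoted in the abstract. Input values are invented.
def radon_concentration_kbq_m3(track_density_cm2, exposure_days,
                               calibration=6.6):
    """Radon concentration = calibration * track density / exposure time."""
    return calibration * track_density_cm2 / exposure_days

print(radon_concentration_kbq_m3(150.0, 30.0))   # about 33 kBq/m^3
```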
International Nuclear Information System (INIS)
Jabbari, N.; Hashemi-Malayeri, B.; Farajollahi, A. R.; Kazemnejad, A.
2007-01-01
In radiotherapy with electron beams, scattered radiation from an electron applicator influences the dose distribution in the patient. The contribution of this radiation to the patient dose is significant, even in modern accelerators. In most radiotherapy treatment planning systems, this component is not explicitly included. In addition, the scattered radiation produced by applicators varies with the applicator design as well as with the field size and distance from the applicators. The aim of this study was to calculate the amount of scattered dose contributed by the applicators. We also tried to provide an extensive set of calculated data that could be used as input or benchmark data for advanced treatment planning systems that use Monte Carlo algorithms for dose distribution calculations. Electron beams produced by a NEPTUN 10PC medical linac were modeled using the BEAMnrc system. Central-axis depth-dose curves of the electron beams were measured and calculated, with and without the applicators in place, for different field sizes and energies. The scattered radiation from the applicators was determined by subtracting the central-axis depth-dose curves obtained without the applicators from those obtained with them. The results of this study indicated that the scattered radiation from the electron applicators of the NEPTUN 10PC is significant and cannot be neglected in advanced treatment planning systems. Furthermore, our results showed that the scattered radiation depends on the field size and decreases almost linearly with depth. (author)
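The subtraction that isolates the applicator scatter can be sketched directly; the two depth-dose curves below are invented numbers, not NEPTUN 10PC data:

```python
# Hedged sketch: applicator scatter = depth-dose curve measured with the
# applicator minus the curve measured without it. Curves are illustrative.
import numpy as np

depth_cm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
dose_with_applicator = np.array([92.0, 100.0, 96.0, 85.0, 60.0, 30.0])
dose_without = np.array([89.5, 97.8, 94.2, 83.6, 59.1, 29.6])

scatter = dose_with_applicator - dose_without
scatter_percent = 100.0 * scatter / dose_with_applicator
# the abstract reports scatter decreasing almost linearly with depth
print(list(zip(depth_cm, scatter_percent.round(2))))
```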
Postimplant Dosimetry Using a Monte Carlo Dose Calculation Engine: A New Clinical Standard
International Nuclear Information System (INIS)
Carrier, Jean-Francois; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, Andre-Guy; Vigneault, Eric; Beaulieu, Luc
2007-01-01
Purpose: To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry, and to compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are specifically investigated. Methods and Materials: An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. Results: For the clinical target volume (CTV) D90 parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the dose deposited in the CTV. This overestimation results mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The dose deposited in the OARs is also overestimated in the clinical calculation. Conclusions: The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future
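The D90 metric compared in this study is the minimum dose received by the hottest 90% of the target volume, i.e. the 10th percentile of the voxel-dose distribution. A minimal sketch with synthetic doses (not patient data):

```python
# Hedged sketch: D90 from per-voxel CTV doses. The voxel doses below are
# randomly generated stand-ins for a real postimplant dose grid.
import numpy as np

def d90(voxel_doses_gy):
    """Dose (Gy) covering 90% of the volume = 10th percentile of voxel doses."""
    return float(np.percentile(voxel_doses_gy, 10))

rng = np.random.default_rng(0)
ctv_doses = rng.normal(160.0, 20.0, size=50_000)   # illustrative CTV voxels
print(round(d90(ctv_doses), 1))
```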
Energy Technology Data Exchange (ETDEWEB)
Biondo, Elliott D [ORNL; Ibrahim, Ahmad M [ORNL; Mosher, Scott W [ORNL; Grove, Robert E [ORNL
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward-Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Monte Carlo calculations for reporting patient organ doses from interventional radiology
Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George
2017-09-01
This paper describes a project to generate organ dose data for the purpose of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses of IR procedures with specific beam projection, field of view (FOV) and beam quality for all parts of the body were obtained. When comparing organ doses generated by VirtualDose-IR for different beam qualities, beam projections, patient ages and patient body mass indices (BMIs), significant discrepancies were observed. For the relatively long exposure times of IR, doses depend on beam quality, beam direction and patient size. Therefore, VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR treatment. This software is suitable for clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.
Monte Carlo dose calculations for BNCT treatment of diffuse human lung tumours
International Nuclear Information System (INIS)
Altieri, S.; Bortolussi, S.; Bruschi, P.
2006-01-01
In order to test the possibility of applying BNCT to diffuse lung tumours, dose distribution calculations were made. The simulations were performed with the Monte Carlo code MCNP.4c2, using the male computational phantom Adam, version 07/94. Volumes of interest were voxelized for the tally requests, and results were obtained for tissues with and without boron. Different collimated neutron sources were tested in order to establish the proper energies, as well as single and multiple beams, to maximize neutron flux uniformity inside the target organs. Flux and dose distributions are reported. The use of two opposed collimated epithermal neutron beams ensures good dose homogeneity inside the lungs, with a substantially lower radiation dose delivered to surrounding structures. (author)
DSMC calculations for the double ellipse. [direct simulation Monte Carlo method
Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet
1990-01-01
The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, together with a semi-empirical procedure, were employed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.
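Once the geometry is tuned, calibration points like these are typically condensed into a semi-empirical efficiency curve. One common parameterization, a polynomial in ln(E) fitted to ln(efficiency), is sketched here with invented efficiency values, not the paper's data:

```python
# Hedged sketch: semi-empirical HPGe full-energy-peak efficiency curve,
# ln(eff) modeled as a cubic in ln(E). Calibration values are illustrative.
import numpy as np

energy_kev = np.array([59.5, 81.0, 122.1, 344.3, 661.7, 1173.2, 1332.5, 1408.0])
efficiency = np.array([0.020, 0.034, 0.042, 0.025, 0.015, 0.010, 0.009, 0.0085])

coeffs = np.polyfit(np.log(energy_kev), np.log(efficiency), deg=3)

def eff_model(e_kev):
    """Interpolated efficiency at an arbitrary energy within the fitted range."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(eff_model(200.0))
```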
Quantum Monte Carlo calculations of weak transitions in A =6 -10 nuclei
Pastore, S.; Baroni, A.; Carlson, J.; Gandolfi, S.; Pieper, Steven C.; Schiavilla, R.; Wiringa, R. B.
2018-02-01
Ab initio calculations of the Gamow-Teller (GT) matrix elements in the β decays of 6He and 10C and electron captures in 7Be are carried out using both variational and Green's function Monte Carlo wave functions obtained from the Argonne v18 two-nucleon and Illinois-7 three-nucleon interactions, and axial many-body currents derived from either meson-exchange phenomenology or chiral effective field theory. The agreement with experimental data is excellent for the electron captures in 7Be, while theory overestimates the 6He and 10C data by ~2% and ~10%, respectively. We show that for these systems correlations in the nuclear wave functions are crucial to explaining the data, while many-body currents increase the one-body GT contributions by ~2-3%.
Monte Carlo Calculated Effective Dose to Teenage Girls from Computed Tomography Examinations
International Nuclear Information System (INIS)
Caon, M.; Bibbo, G.; Pattison, J.
2000-01-01
Effective doses from CT to paediatric patients are not commonly reported in the literature. This article reports some effective doses to teenage girls from CT examinations. The voxel computational model ADELAIDE, representative of a 14-year-old girl, was scaled in size by ±5% to also represent 11-12-year-old and 16-year-old girls. The EGS4 Monte Carlo code was used to calculate the effective dose from chest, abdomen and whole-torso CT examinations to the three versions of ADELAIDE using a 120 kV spectrum. For the whole-torso CT examination, in order of increasing model size, the effective doses were 9.0, 8.2 and 7.8 mSv per 100 mA.s. Data are presented that allow the estimation of effective dose from CT examinations of the torso for girls between the ages of 11 and 16. (author)
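The reported whole-torso doses support the age interpolation the abstract describes; mapping the three phantom sizes to ages 11.5, 14 and 16 is an assumption made here for illustration:

```python
# Hedged sketch: linearly interpolating the abstract's whole-torso doses
# (9.0, 8.2, 7.8 mSv per 100 mA.s) over an assumed age axis to estimate
# the effective dose at another age and tube loading.
import numpy as np

age = np.array([11.5, 14.0, 16.0])          # assumed ages for the 3 phantoms
dose_per_100mas = np.array([9.0, 8.2, 7.8])

def effective_dose(age_y, mas):
    """Effective dose (mSv) for a whole-torso scan at the given mA.s."""
    return np.interp(age_y, age, dose_per_100mas) * (mas / 100.0)

print(round(effective_dose(13.0, 250.0), 2))   # ~21.3 mSv
```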
Françoise Benz
2006-01-01
2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4 The use of Monte Carlo radiation transport codes in radiation physics and dosimetry F. Salvat Gavalda, Univ. de Barcelona, A. Ferrari, CERN-AB, M. Silari, CERN-SC Lecture 1. Transport and interaction of electromagnetic radiation, F. Salvat Gavalda, Univ. de Barcelona. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...
Acceleration methods for assembly-level transport calculations
International Nuclear Information System (INIS)
Adams, Marvin L.; Ramone, Gilles
1995-01-01
A family of acceleration methods for the iterations that arise in assembly-level transport calculations is presented. A single iteration in these schemes consists of a transport sweep followed by a low-order calculation, which is itself a simplified transport problem. It is shown that a previously proposed method fitting this description is unstable in two and three dimensions. A family of methods is presented, and it is shown that some of its members are unconditionally stable. (author). 8 refs, 4 figs, 4 tabs
International Nuclear Information System (INIS)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-01-01
Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of the target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA
TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks
Energy Technology Data Exchange (ETDEWEB)
French, S; Nazareth, D [Roswell Park Cancer Institute, Buffalo, NY (United States); Bellor, M [Lockheed Martin, Manassas, VA (United States)
2016-06-15
Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code: distributed computing resources, along with optimized code compilation, allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high-performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools, which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10{sup 8} - 10{sup 9} particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10 - 15 hours with 100 parallel jobs. Due to the compiler optimization process, a further speed increase of 23% was achieved when compared with the default open-source BEAMnrc compilation. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate
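The parallel speedup rests on the independence of particle histories: each job simulates its own share with a distinct seed, tallies combine as a plain mean, and the relative error shrinks as 1/√(number of jobs). A toy sketch of that bookkeeping (the job results are simulated, not BEAMnrc output):

```python
# Hedged sketch of splitting a Monte Carlo run across independent jobs and
# combining their tallies. All numbers are illustrative.
import numpy as np

total_histories = 10**9
n_jobs = 100
per_job = total_histories // n_jobs          # each job gets its own RNG seed

rng = np.random.default_rng(42)
# pretend each job returns (mean dose per history, relative error)
job_means = rng.normal(1.0, 0.001, size=n_jobs)
job_rel_err = np.full(n_jobs, 0.01)

combined_mean = job_means.mean()             # equal history counts -> plain mean
combined_rel_err = job_rel_err[0] / np.sqrt(n_jobs)   # uncorrelated jobs
print(per_job, float(combined_mean), combined_rel_err)
```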
FMCEIR: a Monte Carlo program for solving the stationary neutron and gamma transport equation
International Nuclear Information System (INIS)
Taormina, A.
1978-05-01
FMCEIR is a three-dimensional Monte Carlo program for solving the stationary neutron and gamma transport equation. It is used to study the problem of neutron and gamma streaming in the GCFR and HHT reactor channels. (G.T.H.)
Monte Carlo particle simulation and finite-element techniques for tandem mirror transport
International Nuclear Information System (INIS)
Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.
1985-12-01
A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution to the Fokker-Planck equations by finite-element expansion. 11 refs
Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.
Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard
2015-05-08
The purpose of this study is to evaluate the accuracy of the electron Monte Carlo (eMC) dose calculation algorithm included in a commercial treatment planning system and to compare its performance against an electron pencil beam algorithm. Several tests were performed to explore the system's behavior in simple geometries and in configurations encountered in clinical practice. The first series of tests was executed in a homogeneous water phantom, where experimental measurements and eMC-calculated dose distributions were compared for various combinations of energy and applicator. More specifically, we compared beam profiles and depth-dose curves at different source-to-surface distances (SSDs) and gantry angles, by using dose difference and distance to agreement. We also compared output factors, studied the effects of algorithm input parameters (the random number generator seed as well as the calculation grid size), and performed a calculation time evaluation. Three different inhomogeneous solid phantoms were built, using high- and low-density material inserts, to simulate clinically relevant heterogeneity conditions: a small air cylinder within a homogeneous phantom, a lung phantom, and a chest wall phantom. We also used an anthropomorphic phantom to perform comparisons of eMC calculations to measurements. Finally, we proceeded with an evaluation of the eMC algorithm on a clinical case of nose cancer. In all mentioned cases, measurements, carried out by means of XV-2 films, radiographic films or EBT2 Gafchromic films, were used to compare eMC calculations with dose distributions obtained from an electron pencil beam algorithm. eMC calculations in the water phantom were accurate. Discrepancies for depth-dose curves and beam profiles were under 2.5% and 2 mm. Dose calculations with eMC for the small air cylinder and the lung phantom agreed within 2% and 4%, respectively. eMC calculations for the chest wall phantom and the anthropomorphic phantom also
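The dose-difference / distance-to-agreement (DTA) comparison used for the profiles and depth-dose curves can be sketched in 1D. The curves below are toy Gaussians normalized to 100 (so absolute and percent dose differences coincide), and the 2.5%/2 mm criteria echo the abstract:

```python
# Hedged sketch of a 1D dose-difference / DTA test: a point passes if the
# doses agree at the same depth, or if an equally dosed measured point lies
# within the DTA radius. Curves are invented, not eMC or film data.
import numpy as np

depth_mm = np.arange(0, 60, 1.0)
measured = 100 * np.exp(-((depth_mm - 20.0) / 18.0) ** 2)   # toy depth dose
calculated = 100 * np.exp(-((depth_mm - 20.5) / 18.0) ** 2)  # 0.5 mm shift

def passes(dd_pct=2.5, dta_mm=2.0):
    ok = np.zeros_like(depth_mm, dtype=bool)
    for k, d in enumerate(depth_mm):
        # dose-difference criterion at the same depth
        if abs(calculated[k] - measured[k]) <= dd_pct:
            ok[k] = True
            continue
        # DTA criterion: a measured point with matching dose within dta_mm
        near = np.abs(depth_mm - d) <= dta_mm
        ok[k] = np.min(np.abs(measured[near] - calculated[k])) <= dd_pct
    return bool(ok.all())

print(passes())
```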
X-ray dose estimation from cathode ray tube monitors by Monte Carlo calculation.
Khaledi, Navid; Arbabi, Azim; Dabaghi, Moloud
2015-04-01
Cathode Ray Tube (CRT) monitors are associated with the possible emission of bremsstrahlung radiation produced by electrons striking the monitor screen. Because of the low dose rate, accurate dosimetry is difficult. In this study, the dose equivalent (DE) and effective dose (ED) to an operator working in front of the monitor have been calculated using the Monte Carlo (MC) method by employing the MCNP code. The mean energy of photons reaching the operator was above 17 keV. The phantom ED was 454 μSv/y (348 nSv/h), which was reduced to 16 μSv/y (12 nSv/h) after adding a conventional leaded glass sheet. The ambient dose equivalent (ADE) and personal dose equivalent (PDE) for the head, neck, and thorax of the phantom were also calculated. The uncertainty of the calculated ED, ADE, and PDE ranged from 3.3% to 10.7% without the leaded glass and from 4.2% to 14.6% with it.
Fast on-site Monte Carlo tool for dose calculations in CT applications.
Chen, Wei; Kolditz, Daniel; Beister, Marcel; Bohle, Robert; Kalender, Willi A
2012-06-01
Monte Carlo (MC) simulation is an established technique for dose calculation in diagnostic radiology. Its major drawback is its high computational demand, which limits its use in real-time applications. The aim of this study was to develop fast on-site computed tomography (CT) specific MC dose calculations by using a graphics processing unit (GPU) cluster. GPUs are powerful systems which are especially suited to problems that can be expressed as data-parallel computations. In MC simulations, each photon track is independent of the others; each launched photon can be mapped to one thread on the GPU, and thousands of threads are executed in parallel in order to achieve high performance. For further acceleration, the authors considered multiple GPUs. The total computation was divided into parts that can be calculated in parallel on multiple devices. The GPU cluster is an MC calculation server which is connected to the CT scanner and computes 3D dose distributions on-site immediately after image reconstruction. To estimate the performance gain, the authors benchmarked dose calculation times on a 2.6 GHz Intel Xeon 5430 quad-core workstation equipped with two NVIDIA GeForce GTX 285 cards. The on-site calculation concept was demonstrated for clinical and preclinical datasets on CT scanners (multislice CT, flat-detector CT, and micro-CT) with varying geometry, spectra, and filtration. To validate the GPU-based MC algorithm, the authors measured dose values on a 64-slice CT system using calibrated ionization chambers and thermoluminescence dosimeters (TLDs) which were placed inside standard cylindrical polymethyl methacrylate (PMMA) phantoms. The dose values and profiles obtained by GPU-based MC simulations were in good agreement with computed tomography dose index (CTDI) measurements and reference TLD profiles, with differences being less than 5%. For 10⁹ photon histories simulated in a 256 × 256 × 12 voxel thorax dataset with voxel
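The one-photon-per-thread mapping described above relies only on the statistical independence of photon histories. A minimal CPU-side sketch of the same idea follows; the toy attenuation physics and all names are illustrative assumptions, not taken from the authors' code:

```python
import math
import random
from multiprocessing import Pool

MU = 0.2   # assumed total attenuation coefficient (1/cm)
D = 5.0    # assumed slab thickness (cm)

def photon_history(seed):
    """Track one photon through a homogeneous slab. Histories are
    independent, which is what lets each photon map to its own GPU
    thread (or CPU worker) with no communication."""
    rng = random.Random(seed)
    # distance to the first interaction, sampled from the exponential law
    path = -math.log(1.0 - rng.random()) / MU
    return 1 if path > D else 0   # 1 = crossed the slab uncollided

def transmitted_fraction(n_photons, workers=4):
    """Fan the independent histories out over worker processes and
    reduce the per-history tallies, mirroring the GPU thread model."""
    with Pool(workers) as pool:
        tallies = pool.map(photon_history, range(n_photons))
    return sum(tallies) / n_photons
```

For a homogeneous slab the uncollided fraction should approach exp(-MU * D) ≈ 0.37 as the number of histories grows.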
Use of the Apollo-II multigroup transport code for criticality calculations
International Nuclear Information System (INIS)
Coste, M.; Mathonniere, G.; Sanchez, R.; Stankovski, Z.; Van der Gucht, C.; Zmijarevic, I.
1992-01-01
APOLLO-II is a new-generation multigroup transport code for assembly calculations. The code has been designed to be used as a tool for reactor design as well as for the analysis and interpretation of small nuclear facilities. As the first step in a criticality calculation, the collision probability module of the APOLLO-II code can be used to generate cell- or assembly-homogenized, reaction-rate-preserving cross sections that account for self-shielding effects as well as for the fine-energy flux spectral variations within the cell. These cross section data can then be used either directly within the APOLLO-II code in a discrete-ordinates multigroup transport calculation of a small nuclear facility or, more generally, be formatted by a post-processing module for use by the multigroup diffusion code CRONOS-II or by the multigroup Monte Carlo code TRIMARAN
International Nuclear Information System (INIS)
Meireles, Ramiro Conceicao
2016-01-01
The shielding calculation methodology for radiotherapy services adopted in Brazil and in several other countries is that described in Report 151 of the National Council on Radiation Protection and Measurements (NCRP 151). This methodology, however, employs several approximations that can impact both the construction cost and the radiological safety of the facility. Although the methodology is well established through widespread use, some of the parameters it employs have not undergone a detailed assessment of the impact of the various approximations involved. In this work the MCNP5 Monte Carlo code was used to evaluate these approximations. TVL values were obtained for photons in conventional concrete (2.35 g/cm³) at energies of 6, 10 and 25 MeV, first considering an isotropic radiation source impinging perpendicularly on the barriers, and subsequently a shielded head emitting a beam shaped as a truncated pyramid. Primary barrier safety margins were assessed, taking into account the shielded head emitting the pyramid-shaped photon beam at energies of 6, 10, 15 and 18 MeV. A study was conducted considering the attenuation provided by the patient's body at energies of 6, 10, 15 and 18 MeV, leading to new attenuation factors. Experimental measurements were performed in a real radiotherapy room in order to map the leakage radiation emitted by the accelerator head shielding, and the results obtained were employed in the Monte Carlo simulation as well as to validate the entire study. The study results indicate that the TVL values provided by (NCRP, 2005) show discrepancies in comparison with the values obtained by simulation and that some barriers may be calculated with insufficient thickness. Furthermore, the simulation results show that the additional safety margins considered when calculating the width of the primary
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the ¹⁹²Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm³ versus 2-mm³ phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model has been developed for HDR brachytherapy treatments that use CT-compatible applicators. Tissue and nontissue heterogeneities should be taken into account in modern HDR
Energy Technology Data Exchange (ETDEWEB)
Lee, Ki Bog; Kim, Yeong Il; Kim, Kang Seok; Kim, Sang Ji; Kim, Young Gyun; Song, Hoon; Lee, Dong Uk; Lee, Byoung Oon; Jang, Jin Wook; Lim, Hyun Jin; Kim, Hak Sung
2004-05-01
In this report, the results of the KALIMER (Korea Advanced LIquid MEtal Reactor) core design calculated by the K-CORE computing system are compared with and analyzed against those of the MCDEP calculation. The effective multiplication factor, flux distribution, fission power distribution, and the number densities of the important nuclides affected by the depletion calculation are compared for the R-Z model and Hex-Z model of the KALIMER core. It is confirmed that the results of the K-CORE system agree with those of MCDEP, which is based on the Monte Carlo transport theory method, within 700 pcm for the effective multiplication factor, and within 2% in the driver fuel region and 10% in the radial blanket region for the reaction rate and the fission power density. Thus, the K-CORE system, which treats lumped fission products and the most important nuclides, can be used as a core design tool for KALIMER while keeping the necessary accuracy.
Botta, F; Mairani, A; Battistoni, G; Cremonesi, M; Di Dia, A; Fassò, A; Ferrari, A; Ferrari, M; Paganelli, G; Pedroli, G; Valente, M
2011-07-01
The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The
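The concentric-shell tally described above can be sketched in a few lines; this is a generic illustration with assumed array names and binning, not FLUKA's actual scoring:

```python
import numpy as np

def shell_dose_tally(radii, energies, r_max, n_shells):
    """Bin energy-deposition events by their distance from a point
    isotropic source into concentric spherical shells, then divide by
    each shell's volume to get a dose-point-kernel-like radial profile."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    e_dep, _ = np.histogram(radii, bins=edges, weights=energies)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return e_dep / shell_vol   # energy deposited per unit volume per shell
```

Dividing the tallied energy by shell volume (rather than shell count) is what turns the raw histogram into a dose-like radial quantity.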
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, Murillo
2014-09-01
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which the photon ignores the existence of borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by using the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and thus can be applied to clinical cases and incorporated in treatment planning systems for radiotherapy. (author)
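The Woodcock (delta-tracking) approach mentioned above can be illustrated compactly; the following is a generic 1D sketch under assumed names, not the CUBMC implementation:

```python
import math
import random

def woodcock_flight(x0, mu_of_x, mu_max, rng):
    """Fly a photon from x0 to its next *real* collision. Path lengths are
    sampled against a majorant cross section mu_max valid everywhere, so
    voxel boundaries can be ignored; a tentative collision at x is accepted
    as real with probability mu(x)/mu_max, otherwise it is a virtual
    collision and the flight simply continues."""
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max
        if rng.random() < mu_of_x(x) / mu_max:
            return x

def mean_free_path(mu_of_x, mu_max, n=20000, seed=1):
    """Average the sampled flight distances over many histories."""
    rng = random.Random(seed)
    return sum(woodcock_flight(0.0, mu_of_x, mu_max, rng) for _ in range(n)) / n
```

In a homogeneous medium every tentative collision is real and the sampled mean free path reduces to 1/mu, which makes the rejection step easy to sanity-check.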
International Nuclear Information System (INIS)
Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O.
2011-01-01
Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire
Marshall, Paul; Reed, Robert; Fodness, Bryan; Jordan, Tom; Pickel, Jim; Xapsos, Michael; Burke, Ed
2004-01-01
This slide presentation examines motivation for Monte Carlo methods, charge deposition in sensor arrays, displacement damage calculations, and future work. The discussion of charge deposition sensor arrays includes Si active pixel sensor APS arrays and LWIR HgCdTe FPAs. The discussion of displacement damage calculations includes nonionizing energy loss (NIEL), HgCdTe NIEL calculation results including variance, and implications for damage in HgCdTe detector arrays.
Aurora T: a Monte Carlo code for transportation of neutral atoms in a toroidal plasma
International Nuclear Information System (INIS)
Bignami, A.; Chiorrini, R.
1982-01-01
This paper contains a short description of the Aurora code. The code was developed at Princeton, using the Monte Carlo method, for calculating neutral gas transport in a cylindrical plasma. In this work, subroutines that take into account toroidal geometry are developed
SPHERE: a spherical-geometry multimaterial electron/photon Monte Carlo transport code
International Nuclear Information System (INIS)
Halbleib, J.A. Sr.
1977-06-01
SPHERE provides experimenters and theorists with a method for the routine solution of coupled electron/photon transport through multimaterial configurations possessing spherical symmetry. Emphasis is placed upon operational simplicity without sacrificing the rigor of the model. SPHERE combines condensed-history electron Monte Carlo with conventional single-scattering photon Monte Carlo in order to describe the transport of all generations of particles from several MeV down to 1.0 and 10.0 keV for electrons and photons, respectively. The model is more accurate at the higher energies, with a less rigorous description of the particle cascade at energies where the shell structure of the transport media becomes important. Flexibility of construction permits the user to tailor the model to specific applications and to extend the capabilities of the model to more sophisticated applications through relatively simple update procedures. 8 figs., 3 tables
Self-Consistent Scattering and Transport Calculations
Hansen, S. B.; Grabowski, P. E.
2015-11-01
An average-atom model with ion correlations provides a compact and complete description of atomic-scale physics in dense, finite-temperature plasmas. The self-consistent ionic and electronic distributions from the model enable calculation of x-ray scattering signals and conductivities for material across a wide range of temperatures and densities. We propose a definition for the bound electronic states that ensures smooth behavior of these measurable properties under pressure ionization and compare the predictions of this model with those of less consistent models for Be, C, Al, and Fe. SNL is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. DoE NNSA under contract DE-AC04-94AL85000. This work was supported by DoE OFES Early Career grant FWP-14-017426.
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for the 6 MV and 10 MV models, respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for the 6 MV and 10 MV models, respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for the 6 MV and 10 MV models, respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
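The pass-rate figures quoted above come from point-by-point dose comparisons. A minimal sketch of such a percent-difference criterion follows; the real Van Dyk test also handles distance to agreement in high-gradient regions, which is omitted here, and the function name is an assumption:

```python
def percent_diff_pass_rate(calculated, measured, tol=0.02):
    """Fraction of points (in %) whose local dose difference relative to
    the measured value is within +/- tol (e.g. tol=0.02 for a 2% test)."""
    passed = sum(
        1 for c, m in zip(calculated, measured)
        if m != 0 and abs(c - m) / abs(m) <= tol
    )
    return 100.0 * passed / len(calculated)
```

Normalization conventions vary (local dose vs. maximum dose); the sketch uses the local measured value for simplicity.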
The Calculation Of Titanium Buildup Factor Based On Monte Carlo Method
International Nuclear Information System (INIS)
Has, Hengky Istianto; Achmad, Balza; Harto, Andang Widi
2001-01-01
The objective of a radioactive-waste container is to reduce radiation emission to the environment. For that purpose, we need a material that can shield radiation and last for 10,000 years. Titanium is one of the materials that can be used to make containers. Unfortunately, its buildup factor, which is an important parameter in designing radiation shielding, has not been calculated. Therefore, calculation of the titanium buildup factor as a function of other parameters is needed. The buildup factor can be determined either experimentally or by simulation. The purpose of this study is to determine the titanium buildup factor using a simulation program based on the Monte Carlo method. Monte Carlo is a stochastic method and is therefore well suited to calculating nuclear radiation, which is random by nature. A simulation program can also give results when experiments cannot be performed because of their limitations. The simulation shows that increasing the titanium thickness increases both the number and dose buildup factors, whereas higher photon energy lowers them. The photon energy used in the simulation ranged from 0.2 MeV to 2.0 MeV with a 0.2 MeV step size, while the thickness ranged from 0.2 cm to 3.0 cm with a step size of 0.2 cm. The highest number buildup factor is β = 1.4540 ± 0.047229 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm. The lowest is β = 1.0123 ± 0.000650 at 2.0 MeV photon energy with 0.2 cm of titanium. For the dose buildup factor, the highest value is β_D = 1.3991 ± 0.013999 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm, and the lowest is β_D = 1.0042 ± 0.000597 at 2.0 MeV with a titanium thickness of 0.2 cm. For the photon energy and titanium thickness used in the simulation, the number buildup factor as a function of photon energy E (MeV) and titanium thickness T (cm) can be fitted as β = 1.1264 e^(−0.0855E) e^(0.0584T)
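The fitted expression at the end of the abstract can be evaluated directly. The helper below is illustrative; note the fit only approximates the tabulated extremes quoted above:

```python
import math

def titanium_buildup(E_mev, t_cm):
    """Number buildup factor fit reported in the abstract:
    beta = 1.1264 * exp(-0.0855 E) * exp(0.0584 T),
    with photon energy E in MeV and titanium thickness T in cm."""
    return 1.1264 * math.exp(-0.0855 * E_mev) * math.exp(0.0584 * t_cm)
```

The fit reproduces the expected trends: beta rises with thickness and falls with photon energy.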
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.
International Nuclear Information System (INIS)
Gomes B, W. O.
2016-10-01
This study aimed to develop an irradiation geometry applicable to the PCXMC software and, from it, to calculate the effective dose in cone beam computed tomography (CBCT) applications. We evaluated three different CBCT units for dental applications: the Carestream CS 9000 3D tomograph, the Classical i-CAT, and the GENDEX GXCB-500. We initially characterized each protocol by measuring the entrance surface kerma and the air kerma-area product, P_KA, with RADCAL solid-state detectors and a PTW transmission chamber. We then entered the technical parameters of each preset protocol and the geometric conditions into the PCXMC software to obtain the effective dose values. The calculated effective dose is within the range of 9.0 to 15.7 μSv for the CS 9000 3D tomograph, within the range of 44.5 to 89 μSv for the GXCB-500 unit, and in the range of 62-111 μSv for the Classical i-CAT unit. These values were compared with results obtained by dosimetry using TLDs implanted in an anthropomorphic phantom and are considered consistent. The effective dose results are very sensitive to the irradiation geometry (the beam position in the mathematical phantom), which is a fragility of the software; nevertheless, it is very useful for obtaining quick answers regarding the optimization of protocols. We conclude that the PCXMC Monte Carlo software is a useful tool for assessing protocols in dental CBCT examinations. (Author)
Monte Carlo code Serpent calculation of the parameters of the stationary nuclear fission wave
Directory of Open Access Journals (Sweden)
V. M. Khotyayintsev
2017-12-01
In this work, propagation of the stationary nuclear fission wave was simulated for a series of fixed power values using the Monte Carlo code Serpent. The wave moved in the axial direction in a 5 m long cylindrical core of a fast reactor with pure 238U raw fuel. The stationary wave mode arises some time after the wave ignition and lasts sufficiently long to determine k_eff with high enough accuracy. The velocity characteristic of the reactor was determined as the dependence of the wave velocity on the neutron multiplication factor. As we have recently shown within a one-group diffusion description, the velocity characteristic is two-valued due to the effect of concentration mechanisms, while thermal feedback affects it only quantitatively. The shape and parameters of the velocity characteristic critically affect the feasibility of the reactor design, since stationary wave solutions of the lower branch are unstable and do not correspond to any real waves in a self-regulated reactor like CANDLE. In this work, calculations were performed without taking thermal feedback into account. They confirm that the theoretical dependence correctly describes the shape of the velocity characteristic calculated from the results of the Serpent modeling.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least-squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting present in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
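The "no inner scenarios" point can be made concrete with a small sketch: instead of re-simulating the instrument inside each outer path, realized discounted exposures are regressed on the current state, and the fitted values serve as the conditional expected exposure. The function names, the polynomial basis, and the toy numbers below are all illustrative assumptions:

```python
import numpy as np

def expected_exposure_lsmc(state, realized_exposure, degree=2):
    """Least-squares Monte Carlo step: regress realized (discounted)
    exposures at a future date on the current state variable (e.g. the
    Hull-White short rate), giving the conditional expected exposure on
    every path without any nested simulation."""
    coeffs = np.polyfit(state, realized_exposure, degree)
    return np.polyval(coeffs, state)

def unilateral_cva(expected_exposure, default_prob_increments, recovery=0.4):
    """CVA ~ (1 - R) * sum_i EE(t_i) * PD(t_{i-1}, t_i); discounting is
    assumed to be folded into the exposures in this sketch."""
    ee = np.maximum(np.asarray(expected_exposure, dtype=float), 0.0)
    return (1.0 - recovery) * float(np.dot(ee, default_prob_increments))
```

In a full implementation the regression is repeated at every exposure date and the expected-exposure profile is then weighted by the default probabilities implied by the intensity model.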
DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1990-01-01
Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.
Nonlinear acceleration of Sn transport calculations
International Nuclear Information System (INIS)
Fichtl, Erin D.; Warsa, James S.; Calef, Matthew T.
2011-01-01
The use of nonlinear iterative methods, Jacobian-Free Newton-Krylov (JFNK) in particular, for solving eigenvalue problems in transport applications has recently become an active subject of research. While JFNK has been shown to be effective for k-eigenvalue problems, there are a number of input parameters that impact computational efficiency, making it difficult to implement efficiently in a production code using a single set of default parameters. We show that different selections for the forcing parameter in particular can lead to large variations in the amount of computational work for a given problem. In contrast, we employ a nonlinear subspace method that sits outside and effectively accelerates nonlinear iterations of a given form and requires only a single input parameter, the subspace size. It is shown to consistently and significantly reduce the amount of computational work when applied to fixed-point iteration, and this combination of methods is shown to be more efficient than JFNK for our application. (author)
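A subspace scheme of the kind described, wrapping an arbitrary fixed-point iteration and exposing only the subspace depth as a parameter, can be sketched as an Anderson-type acceleration; this is a generic illustration, not the authors' exact algorithm:

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=50):
    """Anderson-type subspace acceleration of the fixed-point iteration
    x <- g(x). The only tuning parameter is the subspace depth m, echoing
    the single-parameter subspace method described in the abstract."""
    x = np.asarray(x0, dtype=float)
    xs, fs = [], []                       # histories of iterates and residuals
    for _ in range(maxit):
        f = g(x) - x                      # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]
        if len(fs) > 1:
            # least-squares combination of recent residual differences
            dF = np.array([fs[i + 1] - fs[i] for i in range(len(fs) - 1)]).T
            dX = np.array([xs[i + 1] - xs[i] for i in range(len(xs) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma   # subspace-corrected update
        else:
            x = x + f                       # plain fixed-point step
    return x
```

On a smooth contraction such as x = cos x, this converges in far fewer iterations than the unaccelerated fixed-point loop.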
Bouchard, Hugo; Bielajew, Alex
2015-07-07
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are restated in terms of this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory of electron multiple scattering and its moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is derived. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework for developing such algorithms.
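The generalized equation itself is not reproduced in the abstract; a plausible schematic form (our notation, not necessarily the authors') adds a Lorentz-force streaming term in momentum space alongside the usual spatial streaming and collision operators of the time-independent transport equation:

```latex
% Time-independent transport equation with an external-field operator
% (schematic; \psi is the angular fluence, q the particle charge,
% \mathbf{p} the momentum, \mathbf{v} the velocity)
\mathbf{\Omega}\cdot\nabla\psi
  \;+\; q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{p}}\psi
  \;+\; \sigma_t\,\psi
  \;=\; \iint \sigma_s\!\left(E'\!\to\!E,\,\mathbf{\Omega}'\!\cdot\!\mathbf{\Omega}\right)
        \psi(\mathbf{r},E',\mathbf{\Omega}')\,\mathrm{d}E'\,\mathrm{d}\Omega'
  \;+\; S
```

The field term plays the role the abstract describes: an additional operator, similar in structure to the interaction term, that accounts for the variation of the particle distribution under the Lorentz force.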
International Nuclear Information System (INIS)
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-01-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX’s MCTAL for simulation results have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. (paper)
International Nuclear Information System (INIS)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M.
2010-01-01
Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
International Nuclear Information System (INIS)
Petrizzi, L.; Batistoni, P.; Migliori, S.; Chen, Y.; Fischer, U.; Pereslavtsev, P.; Loughlin, M.; Secco, A.
2003-01-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas neutrons are produced causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut down dose rates. This requires a suitable system of codes which is capable of simulating both the neutron induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step method (D1S). Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed within the framework of the Fusion Technology Programme of the JET machine. The aim is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
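The constant-execution-time case can be explored with a toy calculation. The sketch below is our own illustrative model, not the paper's code: each history needs a geometrically distributed number of events, and each event-iteration packs the surviving histories into vector-width-wide operations, so partially filled vector lanes in the draining tail of the bank limit the speedup.

```python
import math
import random

def vector_speedup(bank_size, vector_width, mean_events=20.0, seed=1):
    """Toy estimate of the vector speedup of an event-based transport
    loop when every event costs one (vector) operation.

    Hypothetical model: event counts per history are geometric with the
    given mean; the scalar cost is one operation per event, the vector
    cost is ceil(alive / vector_width) operations per event-iteration.
    """
    rng = random.Random(seed)
    p = 1.0 / mean_events
    # Geometric number of events per history (support >= 1)
    remaining = [1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
                 for _ in range(bank_size)]
    total_events = sum(remaining)          # scalar cost
    vector_ops = 0
    alive = bank_size
    while alive:
        vector_ops += math.ceil(alive / vector_width)
        remaining = [r - 1 for r in remaining if r > 1]  # finished histories leave
        alive = len(remaining)
    return total_events / vector_ops       # ideal maximum: vector_width
```

Under these assumptions a bank much larger than the vector width keeps the lanes full for most iterations, while a bank comparable to the vector width wastes most of each vector operation in the tail, which is qualitatively the bank-size dependence the abstract reports.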
Energy Technology Data Exchange (ETDEWEB)
Shulenburger, Luke [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattsson, Thomas Kjell Rene [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Desjarlais, Michael Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Motivated by the disagreement between recent diffusion Monte Carlo calculations of the phase transition pressure between the ambient and beta-Sn phases of silicon and experiments, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation and after removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
Head-and-neck IMRT treatments assessed with a Monte Carlo dose calculation engine
International Nuclear Information System (INIS)
Seco, J; Adams, E; Bidmead, M; Partridge, M; Verhaegen, F
2005-01-01
IMRT is frequently used in the head-and-neck region, which contains materials of widely differing densities (soft tissue, bone, air-cavities). Conventional methods of dose computation for these complex, inhomogeneous IMRT cases involve significant approximations. In the present work, a methodology for the development, commissioning and implementation of a Monte Carlo (MC) dose calculation engine for intensity modulated radiotherapy (MC-IMRT) is proposed which can be used by radiotherapy centres interested in developing MC-IMRT capabilities for research or clinical evaluations. The method proposes three levels for developing, commissioning and maintaining a MC-IMRT dose calculation engine: (a) development of a MC model of the linear accelerator, (b) validation of MC model for IMRT and (c) periodic quality assurance (QA) of the MC-IMRT system. The first step, level (a), in developing an MC-IMRT system is to build a model of the linac that correctly predicts standard open field measurements for percentage depth-dose and off-axis ratios. Validation of MC-IMRT, level (b), can be performed in a Rando phantom and in a homogeneous water equivalent phantom. Ultimately, periodic quality assurance of the MC-IMRT system is needed to verify the MC-IMRT dose calculation system, level (c). Once the MC-IMRT dose calculation system is commissioned it can be applied to more complex clinical IMRT treatments. The MC-IMRT system implemented at the Royal Marsden Hospital was used for IMRT calculations for a patient undergoing treatment for primary disease with nodal involvement in the head-and-neck region (primary treated to 65 Gy and nodes to 54 Gy), while sparing the spinal cord, brain stem and parotid glands. Preliminary MC results predict a decrease of approximately 1-2 Gy in the median dose of both the primary tumour and nodal volumes (compared with both pencil beam and collapsed cone). This is possibly due to the large air-cavity (the larynx of the patient) situated in the centre
Kramer, R; Khoury, H J; Vieira, J W; Loureiro, E C M; Lima, V J M; Lima, F R A; Hoff, G
2004-12-07
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations, which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true-to-nature representations of the human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), have been segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation the volumes of the organs and tissues have been adjusted to agree with the organ and tissue masses recommended by ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images has eventually been connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons, and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
Energy Technology Data Exchange (ETDEWEB)
Kramer, R [Departamento de Energia Nuclear, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire 1000, Cidade Universitaria, CEP 50740-540, Recife, PE (Brazil); Khoury, H J [Departamento de Energia Nuclear, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire 1000, Cidade Universitaria, CEP 50740-540, Recife, PE (Brazil); Vieira, J W [Centro Federal de Educacao Tecnologica de Pernambuco, Recife, PE (Brazil); Loureiro, E C M [Escola Politecnica, UPE, Recife, PE (Brazil); Lima, V J M [Departamento de Anatomia, Universidade Federal de Pernambuco, Prof. Moraes Rego, 1235 Cidade Universitaria CEP 50670-420 Recife, PE (Brazil); Lima, F R A [Centro Regional de Ciencias Nucleares, R. Conego Barata 999, Recife, PE (Brazil); Hoff, G [Faculdade de FIsica, PUCRS, Porto Alegre, RS (Brazil)
2004-12-07
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations, which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true-to-nature representations of the human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), have been segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation the volumes of the organs and tissues have been adjusted to agree with the organ and tissue masses recommended by ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images has eventually been connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons, and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
International Nuclear Information System (INIS)
Kramer, R; Khoury, H J; Vieira, J W; Loureiro, E C M; Lima, V J M; Lima, F R A; Hoff, G
2004-01-01
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations, which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true-to-nature representations of the human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), have been segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation the volumes of the organs and tissues have been adjusted to agree with the organ and tissue masses recommended by ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images has eventually been connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons, and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, the efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
Žukauskaitė, A.; Plukienė, R.; Ridikas, D.
2007-01-01
Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials, usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of Research Center of Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intensive spallation neutron source facility of the Rutherford Appleton laboratory, UK) – high energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.
Generalized diffusion theory for calculating the neutron transport scalar flux
International Nuclear Information System (INIS)
Alcouffe, R.E.
1975-01-01
A generalization of the neutron diffusion equation is introduced, the solution of which is an accurate approximation to the transport scalar flux. In this generalization the auxiliary transport calculations of the system of interest are utilized to compute an accurate, pointwise diffusion coefficient. A procedure is specified to generate and improve this auxiliary information in a systematic way, leading to improvement in the calculated diffusion scalar flux. This improvement is shown to be contingent upon satisfying the condition of positive calculated-diffusion coefficients, and an algorithm that ensures this positivity is presented. The generalized diffusion theory is also shown to be compatible with conventional diffusion theory in the sense that the same methods and codes can be used to calculate a solution for both. The accuracy of the method compared to reference S_N transport calculations is demonstrated for a wide variety of examples. (U.S.)
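In schematic form (our notation; the abstract does not reproduce the equations), the idea is to retain the diffusion operator but derive the coefficient pointwise from an auxiliary transport solution, for instance by applying Fick's law to the transport current and scalar flux:

```latex
% Generalized diffusion equation with a transport-derived coefficient
% (schematic, 1-D slab notation; J_T and \phi_T denote the current and
% scalar flux from an auxiliary S_N transport calculation)
-\,\frac{\mathrm{d}}{\mathrm{d}x}\!\left[D(x)\,\frac{\mathrm{d}\phi}{\mathrm{d}x}\right]
  \;+\; \Sigma_a(x)\,\phi(x) \;=\; S(x),
\qquad
D(x) \;=\; -\,\frac{J_T(x)}{\mathrm{d}\phi_T/\mathrm{d}x}
```

With such a pointwise coefficient the diffusion solve reproduces the transport scalar flux wherever the auxiliary information is accurate, which is why the positivity of the calculated D(x) becomes the limiting condition noted above.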
Optimal calculational schemes for solving multigroup photon transport problem
International Nuclear Information System (INIS)
Dubinin, A.A.; Kurachenko, Yu.A.
1987-01-01
A scheme of a complex algorithm for solving the multigroup equation of radiation transport is suggested. The algorithm is based on the method of successive collisions, the method of forward scattering and the spherical harmonics method, and is realized in the FORAP program (FORTRAN, BESM-6 computer). As an example the results of calculating reactor photon transport in water are presented. With suitable modification, the algorithm may also be used for solving neutron transport problems.
Energy Technology Data Exchange (ETDEWEB)
Barcellos, Luiz Felipe F.C.; Bodmann, Bardo E.J.; Vilhena, Marco T.M.B., E-mail: luizfelipe.fcb@gmail.com, E-mail: bardo.bodmann@ufrgs.br, E-mail: mtmbvilhena@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Grupo de Estudos Nucleares; Leite, Sergio Q. Bogado, E-mail: sbogado@ibest.com.br [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)
2017-07-01
In this work a continuous-energy Monte Carlo simulator is used. This simulator distinguishes itself by using the sum of three probability distributions to represent the neutron spectrum. Two distributions have known shapes but time-varying neutron populations: the fission neutron spectrum (for high energy neutrons) and the Maxwell-Boltzmann distribution (for thermal neutrons). The third distribution has an a priori unknown and possibly time-varying shape and is determined from parametrizations of the Monte Carlo simulation. It is common practice in neutron transport calculations, e.g. multi-group transport, to assume that neutrons only lose energy with each scattering reaction and then to use a thermal group with a Maxwellian distribution. Such an approximation is valid because up-scattering is negligible for fast neutrons, becoming appreciable only at low energies, i.e. in the thermal region, where the spectrum can be regarded as a Maxwell-Boltzmann distribution in thermal equilibrium. In this work the possible neutron-matter interactions are simulated with the exception of up-scattering. In order to preserve the thermal spectrum, neutrons are stochastically selected as part of the thermal population and are assigned an energy drawn from a Maxwellian distribution. It is then shown how this procedure can emulate the up-scattering effect through the increase in the kinetic energy of the neutron population. Since the simulator uses tags to identify the reactions, it is possible to plot the distributions not only by neutron energy, but also by the type of interaction with matter and with the identification of the target nuclei involved in the process. This work contains some preliminary results obtained from a Monte Carlo simulator for neutron transport that is being developed at Federal University of Rio Grande do Sul. (author)
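The re-thermalization step described above can be sketched as follows (a hypothetical illustration, not the simulator's source): a history tagged as thermal is simply assigned a fresh energy from the Maxwell-Boltzmann energy spectrum p(E) ∝ √E·exp(−E/kT), which can raise as well as lower its energy and so emulates up-scattering on average.

```python
import math
import random

def sample_maxwellian_energy(kT, rng=random):
    """Sample a kinetic energy E from p(E) ~ sqrt(E) * exp(-E / kT).

    This density is a Gamma(3/2, kT) distribution, so a sample is the
    sum of a unit-exponential variate and half a squared standard
    normal variate, scaled by kT.
    """
    return kT * (-math.log(1.0 - rng.random())
                 + 0.5 * rng.gauss(0.0, 1.0) ** 2)
```

The mean of the sampled energies is (3/2)kT, the standard Maxwellian result, so tallying the kinetic energy of the re-thermalized population directly shows the energy gain that a pure down-scattering simulation would miss.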
Tojo, Masashi
2007-01-01
In this study, a BWR core calculation method is developed. A continuous-energy Monte Carlo burn-up calculation code is newly applied to production-level BWR assembly calculations. The applicability of the present new calculation method is verified through tracking calculations of a commercial BWR. The mechanisms and quantitative effects of error propagation, spatial discretization and the temperature distribution in the fuel pellet on the Monte Carlo burn-up calculations are clari...
Monte Carlo benchmark calculations of energy deposition by electron/photon showers up to 1 GeV
International Nuclear Information System (INIS)
Mehlhorn, T.A.; Halbleib, J.A.
1983-01-01
Over the past several years the TIGER series of coupled electron/photon Monte Carlo transport codes has been applied to a variety of problems involving nuclear and space radiations, electron accelerators, and radioactive sources. In particular, they have been used at Sandia to simulate the interaction of electron beams, generated by pulsed-power accelerators, with various target materials for weapons effect simulation, and electron beam fusion. These codes are based on the ETRAN system which was developed for an energy range from about 10 keV up to a few tens of MeV. In this paper we will discuss the modifications that were made to the TIGER series of codes in order to extend their applicability to energies of interest to the high energy physics community (up to 1 GeV). We report the results of a series of benchmark calculations of the energy deposition by high energy electron beams in various materials using the modified codes. These results are then compared with the published results of various experimental measurements and other computational models
International Nuclear Information System (INIS)
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.
2009-01-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within ∼5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.
2009-03-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within ˜5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
DEFF Research Database (Denmark)
Mangiarotti, Alessio; Sona, Pietro; Ballestrero, Sergio
2012-01-01
Approximate analytical calculations of multi-photon effects in the spectrum of total radiated energy by high-energy electrons crossing thin targets are compared to the results of Monte Carlo type simulations. The limits of validity of the analytical expressions found in the literature are established...
Jacimovic, R; Maucec, M; Trkov, A
2003-01-01
An experimental verification of Monte Carlo neutron flux calculations in typical irradiation channels in the TRIGA Mark II reactor at the Jozef Stefan Institute is presented. It was found that the flux, as well as its spectral characteristics, depends rather strongly on the position of the
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.
Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel
2005-01-01
Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.
Monte Carlo calculations with dynamical fermions by a local stochastic process
International Nuclear Information System (INIS)
Rossi, P.; Zwanziger, D.
1984-01-01
We develop and test numerically a Monte Carlo method for fermions on a lattice which accounts for the effect of the fermionic determinant to arbitrary accuracy. It is tested numerically in a 4-dimensional model with SU(2) color group and scalar fermionic quarks interacting with gluons. Computer time grows linearly with the volume of the lattice and the updating of gluons is not restricted to small jumps. The method is based on random location updating, instead of an ordered sweep, in which quarks are updated, on the average, R times more frequently than gluons. It is proven that the error in R is only of order 1/R instead of 1/R^(1/2) as one might naively expect. Quarks are represented by pseudofermionic variables in M pseudoflavors (which requires M times more memory for each physical fermionic degree of freedom) with an error in M of order 1/M. The method is tested by calculating the self-energy of an external quark, a quantity which would be infinite in the absence of dynamical or sea quarks. For the quantities measured, the dependence on R⁻¹ is linear for R ≥ 8, and, within our statistical uncertainty, M = 2 is already asymptotic. (orig.)
Hanada, Masanori; Miwa, Akitsugu; Nishimura, Jun; Takeuchi, Shingo
2009-05-08
In the string-gauge duality it is important to understand how the space-time geometry is encoded in gauge theory observables. We address this issue in the case of the D0-brane system at finite temperature T. Based on the duality, the temporal Wilson loop W in gauge theory is expected to contain the information of the Schwarzschild radius R_Sch of the dual black hole geometry as log(W) = R_Sch/(2πα′T). This translates to the power-law behavior log(W) = 1.89 (T/λ^(1/3))^(-3/5), where λ is the 't Hooft coupling constant. We calculate the Wilson loop on the gauge theory side in the strongly coupled regime by performing Monte Carlo simulations of supersymmetric matrix quantum mechanics with 16 supercharges. The results reproduce the expected power-law behavior up to a constant shift, which is explainable as α′ corrections on the gravity side. Our conclusion also demonstrates manifestly the fuzzball picture of black holes.
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002; J. Stat. Mech. (2010) P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
A power spectrum approach to tally convergence in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Ueki, Taro
2017-01-01
In Monte Carlo criticality calculation, confidence interval estimation is based on the central limit theorem (CLT) for a series of tallies from generations in equilibrium. A fundamental assertion resulting from CLT is the convergence in distribution (CID) of the interpolated standardized time series (ISTS) of tallies. In this work, the spectral analysis of ISTS has been conducted in order to assess the convergence of tallies in terms of CID. Numerical results obtained indicate that the power spectrum of ISTS is equal to the theoretically predicted power spectrum of Brownian motion for tallies of the effective neutron multiplication factor; on the other hand, the power spectrum of ISTS of a strongly correlated series of tallies from local powers fluctuates wildly while maintaining the spectral form of fractional Brownian motion. The latter result is evidence of a case where a series of tallies is away from CID, while the spectral form supports the normality assumption on the sample mean. It is also demonstrated that an unbiased estimate of the standard deviation of the sample mean can be obtained well before CID occurs. (author)
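A minimal numerical illustration of the spectral check described in this abstract: the cumulative series of uncorrelated tallies behaves like discrete Brownian motion, whose periodogram falls off as 1/f², i.e. with a log-log slope near -2. This sketch is not the author's code; the series length, number of replicas, and frequency range are assumptions.

```python
import cmath
import math
import random

def periodogram(series, ks):
    """Naive DFT power of `series` at the harmonic indices in `ks`."""
    n = len(series)
    powers = []
    for k in ks:
        z = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(series))
        powers.append(abs(z) ** 2 / n)
    return powers

rng = random.Random(0)
n, walks, ks = 256, 150, list(range(1, 9))
avg = [0.0] * len(ks)
for _ in range(walks):
    # cumulative sum of i.i.d. tallies -> a discrete Brownian path
    path, s = [], 0.0
    for _ in range(n):
        s += rng.gauss(0.0, 1.0)
        path.append(s)
    for i, p in enumerate(periodogram(path, ks)):
        avg[i] += p / walks

# least-squares slope of log P(k) vs log k; Brownian motion gives ~ -2
lx = [math.log(k) for k in ks]
ly = [math.log(p) for p in avg]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
print(round(slope, 1))  # should be near -2
```

A strongly autocorrelated tally series would deviate from this slope (toward the fractional-Brownian-motion form mentioned above), which is the diagnostic idea behind the power-spectrum approach.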
Chain segmentation for the Monte Carlo solution of particle transport problems
International Nuclear Information System (INIS)
Ragheb, M.M.H.
1984-01-01
A Monte Carlo approach is proposed where the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used in conjunction with the first-event source density on the segmented chains to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to the exact analytical and to the Monte Carlo nonabsorption weighting method results for two representative slowing down and deep penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep penetration particle transport problems.
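The idea of summing a Neumann series by scoring on random-walk chains can be shown in the simplest possible setting: with per-collision survival probability c, scoring one unit at the source event and at every subsequent collision gives a sample mean that estimates the series sum over c^n, which equals 1/(1 - c). This toy sketch uses hypothetical parameters and is not the chain-segmentation estimator of the paper itself.

```python
import random

def neumann_sum_mc(c, chains, rng):
    """Score one unit at the source event and at every collision of a chain
    that survives each collision with probability c; the sample mean
    estimates the Neumann series 1 + c + c^2 + ... = 1/(1 - c)."""
    total = 0
    for _ in range(chains):
        score = 1                    # zeroth-order (source) term
        while rng.random() < c:
            score += 1               # one more term of the series
        total += score
    return total / chains

rng = random.Random(3)
c = 0.6
est = neumann_sum_mc(c, 200_000, rng)
print(round(est, 2), round(1.0 / (1.0 - c), 2))  # both near 2.5
```

Each collision of a chain contributes one estimate of one term of the series, which is the same bookkeeping principle the segmented-chain estimators exploit at each collision point.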
Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport
International Nuclear Information System (INIS)
Lynch, J.E.
1985-01-01
Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete-ordinates S_N results, both for spatially integrated radiation-material energies versus time and for new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs
Neutron transport calculations of some fast critical assemblies
Energy Technology Data Exchange (ETDEWEB)
Martinez-Val Penalosa, J. A.
1976-07-01
To analyse the influence of the input variables of the transport codes upon the neutronic results (eigenvalues, generation times, etc.), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way and to gain the experience needed to carry out calculations that achieve the required accuracy in practical computing times. (Author) 29 refs.
The use of Monte Carlo radiation transport codes in radiation physics and dosimetry
CERN. Geneva; Ferrari, Alfredo; Silari, Marco
2006-01-01
Transport and interaction of electromagnetic radiation. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated by using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is only limited by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...
Monte Carlo simulation of nonlinear reactive contaminant transport in unsaturated porous media
International Nuclear Information System (INIS)
Giacobbo, F.; Patelli, E.
2007-01-01
In the current proposed solutions of radioactive waste repositories, the protective function against the radionuclide water-driven transport back to the biosphere is to be provided by an integrated system of engineered and natural geologic barriers. The occurrence of several nonlinear interactions during the radionuclide migration process may render burdensome the classical analytical-numerical approaches. Moreover, the heterogeneity of the barriers' media forces approximations to the classical analytical-numerical models, thus reducing their fidelity to reality. In an attempt to overcome these difficulties, in the present paper we adopt a Monte Carlo simulation approach, previously developed on the basis of the Kolmogorov-Dmitriev theory of branching stochastic processes. The approach is here extended for describing transport through unsaturated porous media under transient flow conditions and in presence of nonlinear interchange phenomena between the liquid and solid phases. This generalization entails the determination of the functional dependence of the parameters of the proposed transport model from the water content and from the contaminant concentration, which change in space and time during the water infiltration process. The corresponding Monte Carlo simulation approach is verified with respect to a case of nonreactive transport under transient unsaturated flow and to a case of nonlinear reactive transport under stationary saturated flow. Numerical applications regarding linear and nonlinear reactive transport under transient unsaturated flow are reported
3D calculation of absorbed dose for 131I-targeted radiotherapy: A Monte Carlo study
International Nuclear Information System (INIS)
Saeedzadeh, E.; Sarkar, S.; Abbaspour Tehrani-Fard, A.; Ay, M. R.; Khosravi, H. R.; Loudos, G.
2008-01-01
Various methods, such as those developed by the Medical Internal Radiation Dosimetry (MIRD) Committee of the Society of Nuclear Medicine or employing dose point kernels, have been applied to the radiation dosimetry of 131I radionuclide therapy. However, studies have not shown a strong relationship between tumour absorbed dose and its overall therapeutic response, probably due in part to inaccuracies in activity and dose estimation. In the current study, the GATE Monte Carlo computer code was used to facilitate voxel-level radiation dosimetry for organ activities measured in a 131I-treated thyroid cancer patient. This approach allows incorporation of the size, shape and composition of organs (in the current study, in the Zubal anthropomorphic phantom) and intra-organ and intra-tumour inhomogeneities in the activity distributions. The total activities of the tumours and their heterogeneous distributions were measured from the SPECT images to calculate the dose maps. For investigating the effect of activity distribution on dose distribution, a hypothetical homogeneous distribution of the same total activity was considered in the tumours. It was observed that the tumour mean absorbed dose rates per unit cumulated activity were 0.65×10⁻⁵ and 0.61×10⁻⁵ mGy MBq⁻¹ s⁻¹ for the uniform and non-uniform distributions in the tumour, respectively, which do not differ considerably. However, the dose-volume histograms (DVH) show that the tumour non-uniform activity distribution decreases the absorbed dose to portions of the tumour volume. In such a case, it can be misleading to quote the mean or maximum absorbed dose, because overall response is likely limited by the tumour volume that receives low (i.e. non-cytocidal) doses. Three-dimensional radiation dosimetry, and calculation of tumour DVHs, may lead to the derivation of clinically reliable dose-response relationships and therefore may ultimately improve treatment planning as well as response assessment for radionuclide
Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
International Nuclear Information System (INIS)
Moskowitz, B.S.
2000-01-01
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems
Monte Carlo model of light transport in scintillating fibers and large scintillators
International Nuclear Information System (INIS)
Chakarova, R.
1995-01-01
A Monte Carlo model is developed which simulates the light transport in a scintillator surrounded by a transparent layer with different surface properties. The model is applied to analyse the light collection properties of scintillating fibers and a large scintillator wrapped in aluminium foil. The influence of the fiber interface characteristics on the light yield is investigated in detail. Light output results as well as time distributions are obtained for the large scintillator case. 15 refs, 16 figs
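A toy version of such a light-collection model: photons emitted at position x in a rod either travel directly to a photodetector at one end or reach the mirrored far end and return, with exponential bulk attenuation along the path. This 1D sketch ignores angles and surface properties; the rod length, attenuation length, and mirror reflectivity are assumed values, not those of the cited study.

```python
import math
import random

def collection_mc(x, length, att, refl, n, rng):
    """MC estimate of the fraction of photons emitted at position x that
    reach the photodetector at the x = 0 end of the rod (1D toy model)."""
    hits = 0
    for _ in range(n):
        if rng.random() < 0.5:                 # emitted toward the detector
            survive = math.exp(-x / att)
        else:                                   # toward the mirrored far end
            survive = refl * math.exp(-(2.0 * length - x) / att)
        if rng.random() < survive:
            hits += 1
    return hits / n

def collection_analytic(x, length, att, refl):
    """Closed form of the same two-path model, for cross-checking."""
    return (0.5 * math.exp(-x / att)
            + 0.5 * refl * math.exp(-(2.0 * length - x) / att))

rng = random.Random(7)
L_rod, att_len, mirror = 20.0, 100.0, 0.9       # cm; assumed toy values
results = {}
for x in (1.0, 10.0, 19.0):
    mc = collection_mc(x, L_rod, att_len, mirror, 100_000, rng)
    results[x] = (mc, collection_analytic(x, L_rod, att_len, mirror))
    print(x, round(mc, 3), round(results[x][1], 3))
```

The position dependence of the collection efficiency that the model reproduces is exactly the effect the abstract says must be folded into the scintillator response.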
Sn transport calculations on vector and parallel processors
International Nuclear Information System (INIS)
Rhoades, W.A.; Childs, R.L.
1987-01-01
The transport of radiation from the source to the location of people or equipment gives rise to some of the most challenging of calculations. A problem may involve as many as a billion unknowns, each evaluated several times to resolve interdependence. Such calculations run many hours on a Cray computer, and a typical study involves many such calculations. This paper will discuss the steps taken to vectorize the DOT code, which solves transport problems in two space dimensions (2-D); the extension of this code to 3-D; and the plans for extension to parallel processors
Transport appraisal and Monte Carlo simulation by use of the CBA-DK model
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2011-01-01
calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge. These two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty...... (epistemic uncertainty). After a short introduction to deterministic calculation resulting in some evaluation criteria a more comprehensive evaluation of the stochastic calculation is made. Especially, the risk analysis part of CBA-DK, with considerations about which probability distributions should be used...
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...... approximations. Also, some exact, analytical results for the partition coefficients are given, which are valid in the case of (very) small pores or at low density, respectively. The Journal of Chemical Physics is copyrighted by The American Institute of Physics....
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity completely self-contained Monte Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology in applied MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, H. B. Robinson benchmark and OECD/NEA Phase IVB are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established
International Nuclear Information System (INIS)
Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin
2015-01-01
Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects of dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily-established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a conception of quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes
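The recursive cost-driven subdivision described above can be sketched in one dimension: a node is bisected only when a simple cost estimate (a traversal step plus the expected intersection tests on each side) predicts cheaper particle tracking than keeping the node as a flat leaf. The cost constants and geometry below are illustrative assumptions, not SuperMC's actual quality factor.

```python
def build(lo, hi, objs, depth=0, max_depth=8, c_trav=1.0, c_isect=2.0):
    """Recursively bisect [lo, hi]; keep a split only when the cost-based
    'quality factor' predicts cheaper tracking than a flat leaf."""
    leaf_cost = c_isect * len(objs)
    if depth == max_depth or len(objs) <= 1:
        return objs                      # leaf: plain list of object extents
    mid = 0.5 * (lo + hi)
    left = [o for o in objs if o[1] > lo and o[0] < mid]
    right = [o for o in objs if o[1] > mid and o[0] < hi]
    # expected cost: one traversal step plus ~half the tests on each side
    split_cost = c_trav + 0.5 * c_isect * (len(left) + len(right))
    if split_cost >= leaf_cost:
        return objs
    return (mid,
            build(lo, mid, left, depth + 1, max_depth, c_trav, c_isect),
            build(mid, hi, right, depth + 1, max_depth, c_trav, c_isect))

def locate(tree, x):
    """Descend to the leaf containing x; return its candidate objects."""
    while isinstance(tree, tuple):
        mid, lt, rt = tree
        tree = lt if x < mid else rt
    return tree

cells = [(i * 1.0, i * 1.0 + 1.2) for i in range(16)]  # overlapping 1D 'cells'
tree = build(0.0, 17.2, cells)
print(len(locate(tree, 5.5)))  # far fewer candidates than the full 16 cells
```

A point query now touches only the few cells overlapping one small grid node instead of testing every CSG cell, which is the source of the navigation speedup the abstract reports.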
International Nuclear Information System (INIS)
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-01-01
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Attenuation properties of cement composites: Experimental measurements and Monte Carlo calculations
Florez Meza, Raul Fernando
Developing new cement based materials with excellent mechanical and attenuation properties is critically important for both medical and nuclear power industries. Concrete continues to be the primary choice material for the shielding of gamma and neutron radiation in facilities such as nuclear reactors, nuclear waste repositories, spent nuclear fuel pools, heavy particle radiotherapy rooms, particles accelerators, among others. The purpose of this research was to manufacture cement pastes modified with magnetite and samarium oxide and evaluate the feasibility of utilizing them for shielding of gamma and neutron radiation. Two different experiments were conducted to accomplish these goals. In the first one, Portland cement pastes modified with different loading of fine magnetite were fabricated and investigated for application in gamma radiation shielding. The experimental results were verified theoretically through XCOM and the Monte Carlo N-Particle (MCNP) transport code. Scanning electron microscopy and x-ray diffraction tests were used to investigate the microstructure of the samples. Mechanical characterization was also performed by compression testing. The results suggest that fine magnetite is a suitable aggregate for increasing the compressive and flexural strength of white Portland cement pastes; however, there is no improvement of the attenuation at intermediate energy (662 keV). For the second experiment, cement pastes with different concentrations of samarium oxide were fabricated and tested for shielding against thermal neutrons. MCNP simulations were used to validate the experimental work. The results show that samarium oxide increases the effective thermal cross section of Portland cement and has the potential to replace boron bearing compounds currently used in neutron shielding.
International Nuclear Information System (INIS)
Umiastowski, K.; Buniak, M.; Gyurcsak, J.; Maloszewski, P.
1976-01-01
Monte-Carlo calculations and experiments performed in order to verify the analytical formulae developed in the first part of the present paper are discussed. The grain size effect parameter R is determined from the measurements and Monte-Carlo calculations of the absorption coefficient of the granular samples. The dependence of the parameter R on the dimensionless grain size Y obtained in this way is compared with that calculated from the formulae of Part 1. Good agreement is obtained for all results over a broad range of radiation energies (from 17 keV to 1.33 MeV) and grain sizes (from 40 μm to 4.0 cm). (author)
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 μm wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
International Nuclear Information System (INIS)
Pantazi, D.; Mateescu, S.; Stanciu, M.; Mete, M.
2001-01-01
The modular code system SCALE is used to perform a standardized shielding analysis for any facility containing spent fuel: handling devices, transport casks, and intermediate and final storage facilities. The neutron and gamma sources as well as the dose rates can be obtained using either discrete-ordinates or Monte Carlo methods. The shielding analysis control modules (SAS1, SAS2H and SAS4) provide a general procedure for cross-section preparation, fuel depletion/decay calculation, and general one-dimensional or multi-dimensional shielding analysis. The module SAS4, used in the analysis presented in this paper, is a three-dimensional Monte Carlo shielding analysis module that uses an automated biasing procedure specialized for a nuclear fuel transport or storage container. The Spent Fuel Interim Storage Facility in our country is projected to be a parallelepiped concrete monolithic module, consisting of an external reinforced concrete structure with vertical storage cylinders (pits) arranged in a rectangular array. A pit is filled with sealed cylindrical baskets of stainless steel arranged in a stack, with each basket containing spent fuel bundles in vertical position. The pit is closed with a concrete plug. A cylindrical geometry model is used in the shielding evaluation for a spent fuel storage structure (pit), and only the active parts of the superposed bundles are considered. The dose rates have been calculated in both the axial and radial directions using SAS4. (author)
Exploring the Birthday Paradox Using a Monte Carlo Simulation and Graphing Calculators.
Whitney, Matthew C.
2001-01-01
Describes an activity designed to demonstrate the birthday paradox and introduce students to real-world applications of Monte Carlo-type simulation techniques. Includes a sample TI-83 program and graphical analysis of the birthday problem function. (KHR)
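The birthday paradox is a natural first Monte Carlo exercise. A minimal Python sketch of the kind of simulation the activity describes (the TI-83 program itself is not reproduced here; function and parameter names are illustrative):

```python
import random

def birthday_collision_prob(n_people, n_trials=10_000, seed=1):
    """Estimate the probability that at least two of n_people share a birthday."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        birthdays = [rng.randrange(365) for _ in range(n_people)]
        if len(set(birthdays)) < n_people:  # a duplicate birthday occurred
            hits += 1
    return hits / n_trials

# With only 23 people the collision probability already exceeds one half.
print(birthday_collision_prob(23))  # ~0.51
```

The exact answer for 23 people is about 0.507, so the estimate above should land within a percent or two of it.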
International Nuclear Information System (INIS)
Mitrica, B.; Brancus, I.M.; Toma, G.; Bercuci, A.; Aiftimiei, C.; Wentz, J.; Rebel, H.
2004-01-01
Atmospheric muons are produced in the interactions of primary cosmic-ray particles with Earth's atmosphere, mainly by the decay of pions and kaons generated in hadronic interactions. These decay further into electrons, positrons, and electron and muon neutrinos. Being the penetrating component of cosmic rays, muons manage to pass entirely through the atmosphere and can traverse even larger absorbers before they interact with material at the Earth's surface; due to the cosmogenic production of isotopes by atmospheric muons, information of astrophysical, environmental and materials-research interest can be obtained. Up to now, mainly semi-analytical approximations have been used to calculate the muon flux for estimating the cosmogenic isotope production necessary for different applications. Our estimation of the atmospheric muon flux is based on the Monte Carlo simulation program CORSIKA, in which we simulate the development of extensive air showers in the atmosphere, using different models for the description of the hadronic interactions.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed
Monte Carlo calculation of the maximum therapeutic gain of tumor antivascular alpha therapy
Energy Technology Data Exchange (ETDEWEB)
Huang, Chen-Yu; Oborn, Bradley M.; Guatelli, Susanna; Allen, Barry J. [Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia); Illawarra Cancer Care Centre, Wollongong, New South Wales 2522, Australia and Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia)
2012-03-15
Purpose: Metastatic melanoma lesions experienced marked regression after systemic targeted alpha therapy in a phase 1 clinical trial. This unexpected response was ascribed to tumor antivascular alpha therapy (TAVAT), in which effective tumor regression is achieved by killing endothelial cells (ECs) in tumor capillaries and, thus, depriving cancer cells of nutrition and oxygen. The purpose of this paper is to quantitatively analyze the therapeutic efficacy and safety of TAVAT by constructing and testing Monte Carlo microdosimetric models. Methods: Geant4 was adapted to simulate the spatially nonuniform distribution of the alpha emitter ²¹³Bi. The intraluminal model was designed to simulate the background dose to normal tissue capillary ECs from the nontargeted activity in the blood. The perivascular model calculates the EC dose from the activity bound to the perivascular cancer cells. The key parameters are the probability of an alpha particle traversing an EC nucleus, the energy deposition, the lineal energy transfer, and the specific energy. These results were then applied to interpret the clinical trial. Cell survival rate and therapeutic gain were determined. Results: The specific energy for an alpha particle hitting an EC nucleus in the intraluminal and perivascular models is 0.35 and 0.37 Gy, respectively. As the average probability of traversal in these models is 2.7% and 1.1%, the mean specific energy per decay drops to 1.0 cGy and 0.4 cGy, which demonstrates that the source distribution has a significant impact on the dose. Using the melanoma clinical trial activity of 25 mCi, the dose to a tumor EC nucleus is found to be 3.2 Gy and to a normal capillary EC nucleus to be 1.8 cGy. These data give a maximum therapeutic gain of about 180 and validate the TAVAT concept. Conclusions: TAVAT can deliver a cytotoxic dose to tumor capillaries without being toxic to normal tissue capillaries.
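The numbers quoted in the Results are internally consistent, as a quick arithmetic check shows (assuming, as the abstract implies, that the mean specific energy per decay is the per-traversal specific energy scaled by the traversal probability; variable names are illustrative):

```python
# Per-traversal specific energy (Gy) and traversal probability, from the abstract.
z_hit = {"intraluminal": 0.35, "perivascular": 0.37}
p_traversal = {"intraluminal": 0.027, "perivascular": 0.011}

for model in z_hit:
    z_per_decay = z_hit[model] * p_traversal[model] * 100.0  # Gy -> cGy
    print(model, z_per_decay)  # roughly 1.0 and 0.4 cGy, as reported

# Maximum therapeutic gain: tumor EC nucleus dose (3.2 Gy) over the
# normal capillary EC nucleus dose (1.8 cGy).
gain = 3.2 / 0.018
print(round(gain))  # 178, i.e. "about 180"
```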
An energy dependent spatial approximation for transport depletion calculations
International Nuclear Information System (INIS)
Stankovski, Z.; Sanchez, R.; Roy, R.
1989-01-01
A model for transport depletion calculations based on an energy-dependent spatial representation of the fluxes has been developed. In the case of thermal absorbers, this model allows for regions in the fast range to be less discretized than in the thermal range. When depletion calculations are done to obtain the variation of the isotopic concentration vs. the burnup, the media where several spatial flux representations are used become heterogeneous. In the fast range, prehomogenization of the physical properties is done prior to each transport step. Even when taking into account this prehomogenization step, the computational cost of transport depletion calculations has been cut down significantly, while preserving the overall accuracy. Numerical results are given for a slab core and for a PWR poisoned assembly
Application of a numerical transport correction in diffusion calculations
International Nuclear Information System (INIS)
Tomatis, Daniele; Dall'Osso, Aldo
2011-01-01
Full core calculations by ordinary transport methods can demand considerable computational time, hardly acceptable in an industrial setting. However, the trend of next-generation nuclear cores goes toward more heterogeneous systems, where neutron transport phenomena become very important. On the other hand, diffusion solvers are more practical and allow faster calculations, but a specific formulation of the diffusion coefficient is required to reproduce the scalar flux with reliable physical accuracy. In this paper, the Ronen method is used to evaluate numerically the diffusion coefficient in a slab reactor. The new diffusion solution is driven toward the solution of the integral neutron transport equation by nonlinear iterations. Better estimates of currents are computed and diffusion coefficients are corrected at node interfaces, still assuming Fick's law. This method enables a common multigroup diffusion solver to obtain results closer to the transport solution. (author)
MCNP: a general Monte Carlo code for neutron and photon transport
Energy Technology Data Exchange (ETDEWEB)
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron photon transport code system with approximately 250 person years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its data base the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N Particle where N particle varieties will be transported.
International Nuclear Information System (INIS)
Jinaphanh, A.
2012-01-01
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high-burnup profiles, complete reactor cores, ...) may induce biased estimations of k_eff or reaction rates. In order to improve the robustness of iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has also been developed. It locates and suppresses the transient due to initialization in an output series, applied here to k_eff and the Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.
2004-01-01
Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes - such as FLUKA - yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as a clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy⁻¹ Da⁻¹) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm². Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for
High-speed evaluation of track-structure Monte Carlo electron transport simulations.
Pasciak, A S; Ford, J R
2008-10-07
There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy.
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
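The remapping idea can be illustrated in miniature (plain Python standing in for GPU kernels; the names and encodings are illustrative, not WARP's actual data structures): sorting a vector of particle indices by sampled reaction type leaves each reaction's particles in one contiguous block, so a per-reaction kernel touches consecutive memory.

```python
# Each particle has sampled its next reaction; codes encode these as small ints.
SCATTER, FISSION, CAPTURE = 0, 1, 2
next_reaction = [SCATTER, FISSION, CAPTURE, SCATTER, CAPTURE, SCATTER]

# The remapping vector: particle indices sorted by reaction type.
# (A GPU implementation would use a parallel radix sort on this key.)
remap = sorted(range(len(next_reaction)), key=lambda i: next_reaction[i])

# A "kernel" for one reaction now processes a contiguous run of the vector.
by_type = [next_reaction[i] for i in remap]
print(remap)    # [0, 3, 5, 1, 2, 4]: scatters first, then the fission, then captures
print(by_type)  # [0, 0, 0, 1, 2, 2]
```

Only the small index vector is moved during the sort; the particles' bulky state stays in place, which is the point of the remapping approach.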
First-passage kinetic Monte Carlo on lattices: Hydrogen transport in lattices with traps
von Toussaint, U.; Schwarz-Selinger, T.; Schmid, K.
2015-08-01
A new algorithm for diffusion in 2D and 3D discrete simple cubic lattices, based on a recently proposed technique, Green's-function or first-passage kinetic Monte Carlo, has been developed. It is based on the solutions of appropriately chosen Green's functions, which propagate the diffusing atoms over long distances in one step (superhops). The speed-up of the new approach over standard kinetic Monte Carlo techniques can be orders of magnitude, depending on the problem. Using this new algorithm we simulated recent hydrogen isotope exchange experiments in recrystallized tungsten at 320 K, initially loaded with deuterium. It was found that the observed depth profiles can only be explained with 'active' traps, i.e. traps capable of exchanging atoms with activation energies significantly lower than the actual trap energy. Such a mechanism has so far not been considered in the modeling of hydrogen transport.
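The speed-up mechanism is easy to demonstrate in a hypothetical 1D sketch (not the authors' 3D implementation): for a walker centered in an empty symmetric interval, first-passage theory gives the exit distribution exactly (probability 1/2 per side), so one "superhop" replaces the many individual hops a standard kinetic Monte Carlo run would take.

```python
import random

def exit_side_standard(half_width, rng):
    """Standard KMC: unit hops until the walker leaves (-half_width, half_width).
    Returns (+1 or -1, number of hops taken)."""
    x, hops = 0, 0
    while abs(x) < half_width:
        x += rng.choice((-1, 1))
        hops += 1
    return (1 if x > 0 else -1), hops

def exit_side_superhop(rng):
    """First-passage KMC: a symmetric empty interval is exited to either side
    with probability 1/2, so the outcome is sampled in a single event."""
    return rng.choice((-1, 1)), 1

rng = random.Random(2)
trials = [exit_side_standard(10, rng) for _ in range(2000)]
mean_hops = sum(h for _, h in trials) / len(trials)
frac_right = sum(s == 1 for s, _ in trials) / len(trials)
print(round(mean_hops), round(frac_right, 2))  # ~100 hops on average vs 1 superhop
```

For this walk the mean first-passage time from the center is half_width² = 100 hops, which is the factor the superhop saves; in the lattices with traps studied above the savings depend on the trap density.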
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
A simplified spherical harmonic method for coupled electron-photon transport calculations
International Nuclear Information System (INIS)
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. We have performed a first-ever Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. Our theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. Previous analyses have indicated that the P_1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S_N equations, yet is very effective for the 1-D S_N equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP_N method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP_N method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations
Approximate models for neutral particle transport calculations in ducts
International Nuclear Information System (INIS)
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)
International Nuclear Information System (INIS)
Li, Zeguang; Wang, Kan; Zhang, Xisi
2011-01-01
In the traditional Monte Carlo method, the material properties in a given cell are assumed to be constant, but this is no longer applicable for continuously varying materials, where the material's nuclear cross-sections vary over the particle's flight path. Three Monte Carlo methods, namely the substepping method, the delta-tracking method, and the direct sampling method, are therefore discussed in this paper for solving problems with continuously varying materials. After verification and comparison of these methods in 1-D models, their basic characteristics are discussed, and the delta-tracking method is chosen as the main method for problems with continuously varying materials, especially 3-D problems. To overcome the drawbacks of the original delta-tracking method, an improved delta-tracking method is proposed in this paper that is more efficient for problems where the material's cross-sections vary sharply over the particle's flight path. For use in practical calculations, the improved delta-tracking method was implemented in the 3-D Monte Carlo code RMC developed by the Department of Engineering Physics, Tsinghua University. Two problems based on the Godiva system were constructed and calculated using both the improved delta-tracking method and the substepping method, and the results demonstrated the effectiveness of the improved delta-tracking method. (author)
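The core of delta-tracking (Woodcock tracking) fits in a few lines. A hedged Python sketch of the basic method, not RMC's implementation: flight lengths are sampled from a constant majorant cross-section, and each tentative collision is accepted as real with probability sigma_t(x)/sigma_maj, otherwise treated as virtual.

```python
import math
import random

def delta_track(sigma_t, sigma_maj, x_max, rng):
    """Sample the next real-collision site along x in a medium whose total
    cross-section sigma_t(x) varies continuously, using a constant majorant
    sigma_maj >= max sigma_t. Returns the collision position, or None on escape."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_maj     # flight to a tentative collision
        if x >= x_max:
            return None                              # particle left the region
        if rng.random() < sigma_t(x) / sigma_maj:    # accept as a real collision...
            return x                                 # ...else it was virtual: keep flying

# Sanity check: for a constant sigma_t the method must reproduce an exponential
# free-path distribution with mean 1/sigma_t, whatever majorant is used.
rng = random.Random(0)
samples = [delta_track(lambda x: 1.0, 2.5, 1e9, rng) for _ in range(20_000)]
print(sum(samples) / len(samples))  # ~1.0 mean free path
```

The efficiency loss the abstract's improvement targets is visible here: the larger the gap between sigma_maj and the local sigma_t, the more virtual collisions are rejected per real one.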
Application of Monte Carlo methods for dead time calculations for counting measurements
International Nuclear Information System (INIS)
Henniger, Juergen; Jakobi, Christoph
2015-01-01
From a mathematical point of view, Monte Carlo methods are the numerical solution of certain integrals and integral equations by means of a random experiment. They have several advantages over classical stepwise integration. For multi-dimensional problems, the computing time increases only moderately with the dimension. The only requirements on the integral kernel are that it be integrable over the considered integration area and that it admit an algorithmic representation. These are the important properties of Monte Carlo methods that allow their application in every scientific area. Besides that, Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
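As a concrete illustration (a generic non-paralyzable dead-time model, not necessarily the authors' specific correction), a short Monte Carlo experiment recovers the classical count-rate relation m = n/(1 + n·τ):

```python
import random

def recorded_rate(true_rate, dead_time, t_total, rng):
    """Simulate a Poisson event stream and count events with a non-paralyzable
    dead time: an event within dead_time of the last *recorded* event is lost."""
    t, last, recorded = 0.0, -1e30, 0
    while True:
        t += rng.expovariate(true_rate)      # waiting time to the next true event
        if t > t_total:
            return recorded / t_total
        if t - last >= dead_time:
            recorded += 1
            last = t

rng = random.Random(3)
n, tau = 1000.0, 1.0e-4                      # 1000 events/s, 100 us dead time
m = recorded_rate(n, tau, 200.0, rng)
print(round(m, 1), round(n / (1.0 + n * tau), 1))  # simulated vs analytic (~909/s)
```

The random experiment and the closed-form correction agree to well under a percent here, which is the intuitive appeal of Monte Carlo the abstract points to.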
Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André
2012-01-01
This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport has been used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of population: newborns; children aged 1 yr, 5 yr, 10 yr, 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass and statistical uncertainty of Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to be from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. The positioning errors of the detector during measurements deviate mainly in one direction from the estimated calibration factors. Deviations of the device position from the proper geometry of measurements were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroidal content and, consequently, thyroid dose estimates that are derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289
ITS - The integrated TIGER series of coupled electron/photon Monte Carlo transport codes
International Nuclear Information System (INIS)
Halbleib, J.A.; Mehlhorn, T.A.
1985-01-01
The TIGER series of time-independent coupled electron/photon Monte Carlo transport codes is a group of multimaterial, multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron/photon cascade. The codes follow both electrons and photons from 1.0 GeV down to 1.0 keV, and the user has the option of combining the collisional transport with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence. Source particles can be either electrons or photons. The most important output data are (a) charge and energy deposition profiles, (b) integral and differential escape coefficients for both electrons and photons, (c) differential electron and photon flux, and (d) pulse-height distributions for selected regions of the problem geometry. The base codes of the series differ from one another primarily in their dimensionality and geometric modeling. They include (a) a one-dimensional multilayer code, (b) a code that describes the transport in two-dimensional axisymmetric cylindrical material geometries with a fully three-dimensional description of particle trajectories, and (c) a general three-dimensional transport code which employs a combinatorial geometry scheme. These base codes were designed primarily for describing radiation transport for those situations in which the detailed atomic structure of the transport medium is not important. For some applications, it is desirable to have a more detailed model of the low energy transport. The system includes three additional codes that contain a more elaborate ionization/relaxation model than the base codes. Finally, the system includes two codes that combine the collisional transport of the multidimensional base codes with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence
A kinetic Monte Carlo approach to study fluid transport in pore networks
Apostolopoulou, M.; Day, R.; Hull, R.; Stamatakis, M.; Striolo, A.
2017-10-01
The mechanism of fluid migration in porous networks continues to attract great interest. Darcy's law (a phenomenological continuum theory), often used to describe macroscopic fluid flow through a porous material, is thought to fail in nano-channels. Transport through heterogeneous and anisotropic systems, characterized by a broad distribution of pores, occurs via a combination of different transport mechanisms, all of which need to be accounted for. The situation is likely more complicated when immiscible fluid mixtures are present. To generalize the study of fluid transport through a porous network, we developed a stochastic kinetic Monte Carlo (KMC) model. In our lattice model, the pore network is represented as a set of connected finite volumes (voxels), and transport is simulated as a random walk of molecules, which "hop" from voxel to voxel. We simulated fluid transport along an effectively 1D pore and compared the results to those obtained by solving the diffusion equation analytically. The KMC model was then implemented to quantify the transport of methane through hydrated micropores, in which case atomistic molecular dynamics simulation results were reproduced. The model was then used to study flow through pore networks, where it was able to quantify the effect of the pore length and of the network's connectivity. The results are consistent with experiments but also provide additional physical insights. Extension of the model will be useful to better understand fluid transport in shale rocks.
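The 1D consistency check described above is easy to reproduce in outline (an illustrative sketch, not the authors' code): molecules hop between voxels at random, and the mean-squared displacement should match the diffusion-equation prediction, which in unit-hop units reduces to <x²> = (number of hops).

```python
import random

def hop_walk(n_hops, rng):
    """One molecule's random walk on a 1D chain of voxels: each KMC event
    moves it to the left or right neighbour with equal probability."""
    x = 0
    for _ in range(n_hops):
        x += rng.choice((-1, 1))
    return x

rng = random.Random(4)
n_hops, n_molecules = 100, 5000
msd = sum(hop_walk(n_hops, rng) ** 2 for _ in range(n_molecules)) / n_molecules
print(msd)  # ~100: matches the diffusive prediction <x^2> = n_hops
```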
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. We consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008), while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated using the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper. Res. 56(3):607-617; Giles 2008b). In multilevel Monte Carlo methods, more accurate
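The multilevel Monte Carlo idea of telescoping many cheap coarse-level samples with a few expensive fine-level corrections can be sketched on a toy problem (a geometric Brownian motion rather than multiphase flow; all function names and parameter values are illustrative, not from the paper):

```python
import math
import random

def gbm_endpoint(dWs, S0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Euler-Maruyama endpoint of a geometric Brownian motion driven by
    the given Brownian increments."""
    dt = T / len(dWs)
    S = S0
    for dW in dWs:
        S += mu * S * dt + sigma * S * dW
    return S

def mlmc_level(rng, level, n_samples, T=1.0):
    """Estimate E[P_level - P_{level-1}]; fine and coarse paths are coupled
    by sharing Brownian increments, which is what makes the correction
    terms cheap to estimate accurately."""
    nf = 2 ** level
    dt = T / nf
    acc = 0.0
    for _ in range(n_samples):
        dWs = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
        fine = gbm_endpoint(dWs)
        if level == 0:
            acc += fine
        else:
            # coarse path reuses the fine increments, summed pairwise
            coarse = gbm_endpoint([dWs[2 * i] + dWs[2 * i + 1]
                                   for i in range(nf // 2)])
            acc += fine - coarse
    return acc / n_samples

rng = random.Random(42)
# Telescoping sum: many cheap coarse samples, few fine corrections.
estimate = sum(mlmc_level(rng, lvl, n) for lvl, n in enumerate([8000, 2000, 500]))
# Exact E[S_T] = exp(mu*T), about 1.0513
```

In the paper's setting, the role of the levels is played by multiscale solvers of different accuracy (e.g. different numbers of snapshot realizations), but the variance-reduction mechanism is the same.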
LTRACK: Beam-transport calculation including wakefield effects
International Nuclear Information System (INIS)
Chan, K.C.D.; Cooper, R.K.
1988-01-01
LTRACK is a first-order beam-transport code that includes wakefield effects up to quadrupole modes. This paper introduces readers to the code by describing its history and calculation methods and by briefly summarizing its input/output information. Future plans for the code are also described
Density functional theory calculations of charge transport properties ...
Indian Academy of Sciences (India)
ZIRAN CHEN
2017-08-04
Density functional theory calculations of charge transport properties of 'plate-like' coronene topological structures. ZIRAN CHEN, ZHANRONG HE, YOUHUI XU and WENHAO YU. Department of Architecture and Environment Engineering, Sichuan Vocational and Technical College, Suining, ...
High beta tokamaks. [MHD equilibrium, stability, and transport calculations]
Energy Technology Data Exchange (ETDEWEB)
Dory, R.A.; Berger, D.P.; Charlton, L.A.; Hogan, J.T.; Munro, J.K.; Nelson, D.B.; Peng, Y.K.M.; Sigmar, D.J.; Strickler, D.J.
1978-01-01
MHD equilibrium, stability, and transport calculations are made to study the accessibility and behavior of 'high beta' tokamak plasmas in the range β ≈ 5 to 15 percent. For next-generation devices, beta values of at least 8 percent appear to be accessible and stable if there is a conducting surface nearby.
Impact of thermoplastic mask on X-ray surface dose calculated with Monte Carlo code
International Nuclear Information System (INIS)
Zhao Yanqun; Li Jie; Wu Liping; Wang Pei; Lang Jinyi; Wu Dake; Xiao Mingyong
2010-01-01
Objective: To calculate the effects of a thermoplastic mask on X-ray surface dose. Methods: The BEAMnrc Monte Carlo code system, designed especially for computer simulation of radiation sources, was used to evaluate the effects of a thermoplastic mask on X-ray surface dose. The thermoplastic mask came from our center, with a material density of 1.12 g/cm³. Masks without holes, with holes of 0.1 cm x 0.1 cm, and with holes of 0.1 cm x 0.2 cm, and masks of different thickness (0.12 cm and 0.24 cm) were evaluated separately. For the masks with holes, the material width between adjacent holes was 0.1 cm. Virtual masks with a material density of 1.38 g/cm³ and without holes were also evaluated at the two thicknesses. Results: The thermoplastic mask affected the X-ray surface dose. For a 0.24 cm thick mask without holes, the surface dose was 74.9% and 57.0% at densities of 1.38 g/cm³ and 1.12 g/cm³, respectively. For the masks with a density of 1.12 g/cm³, the surface dose was 41.2% at 0.12 cm thickness without holes; 57.0% at 0.24 cm thickness without holes; 44.5% at 0.24 cm thickness with 0.1 cm x 0.2 cm holes; and 54.1% at 0.24 cm thickness with 0.1 cm x 0.1 cm holes. Conclusions: Using a thermoplastic mask during irradiation increases the patient's surface dose. The magnitude of the increase depends on the hole size and the thickness of the thermoplastic mask. This surface dose change should be considered in radiation planning to avoid severe skin reactions. (authors)
Kinetic Monte Carlo simulation of single-electron multiple-trapping transport in disordered media
Javadi, Mohammad; Abdi, Yaser
2017-12-01
The conventional single-particle Monte Carlo simulation of charge transport in disordered media is based on the truncated density of localized states (DOLS), which benefits from very short execution times. Although this model successfully clarifies the properties of electron transport in moderately disordered media, it overestimates the electron diffusion coefficient for strongly disordered media. The origin of this deviation is discussed in terms of the zero-temperature approximation in the truncated DOLS and the neglect of the spatial occupation of localized states. Here, based on the multiple-trapping regime, we introduce a modified single-particle kinetic Monte Carlo model that can be used to investigate electron transport in any disordered medium, independent of the value of the disorder parameter. In the proposed model, instead of using a truncated DOLS we employ the raw DOLS. In addition, we introduce an occupation index for localized states to account for the spatial occupation of trap sites. The proposed model is validated on a simple cubic lattice of trap sites for a broad interval of disorder parameters, Fermi levels, and temperatures.
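The core ingredients of the multiple-trapping picture (an exponential density of localized states and thermally activated release) can be sketched as follows; this is a hedged toy, with all parameter values illustrative rather than taken from the paper:

```python
import math
import random

def sample_release_time(rng, E0=0.1, kT=0.025, nu0=1e12):
    """Multiple-trapping step: draw a trap depth E (eV) from an exponential
    DOLS g(E) ~ exp(-E/E0), then an exponential waiting time with the
    thermally activated release rate nu0*exp(-E/kT). E0, kT and nu0 are
    illustrative values."""
    E = rng.expovariate(1.0 / E0)        # trap depth, mean E0
    nu = nu0 * math.exp(-E / kT)         # release attempt rate, 1/s
    return rng.expovariate(nu)           # waiting time before release

# For E0 > kT (disorder parameter E0/kT > 1) the waiting-time distribution
# is heavy-tailed and its mean diverges: transport becomes dispersive,
# which is the regime where truncated-DOLS models start to fail.
```

A full KMC transport simulation would iterate this release step together with hops between spatially resolved trap sites, which is where the occupation index mentioned in the abstract enters.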
ACCEPT: three-dimensional electron/photon Monte Carlo transport code using combinatorial geometry
Energy Technology Data Exchange (ETDEWEB)
Halbleib, J.A. Sr.
1979-05-01
The ACCEPT code provides experimenters and theorists with a method for the routine solution of coupled electron/photon transport through three-dimensional multimaterial geometries described by the combinatorial method. Emphasis is placed upon operational simplicity without sacrificing the rigor of the model. ACCEPT combines condensed-history electron Monte Carlo with conventional single-scattering photon Monte Carlo in order to describe the transport of all generations of particles from several MeV down to 1.0 and 10.0 keV for electrons and photons, respectively. The model is more accurate at the higher energies with a less rigorous description of the particle cascade at energies where the shell structure of the transport media becomes important. Flexibility of construction permits the user to tailor the model to specific applications and to extend the capabilities of the model to more sophisticated applications through relatively simple update procedures. The ACCEPT code is currently running on the CDC-7600 (66000) where the bulk of the cross-section data and the statistical variables are stored in Large Core Memory (Extended Core Storage).
Foucart, Francois
2018-04-01
General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
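For context, one widely used analytical closure of the kind the Monte Carlo evolution in this work is designed to improve upon is the Levermore M1 closure, which fixes the Eddington factor from the reduced flux alone (this is a standard formula offered here as background; the abstract does not state which analytical closure the authors compare against):

```python
import math

def eddington_factor_m1(f):
    """Levermore M1 closure: Eddington factor chi(f) for reduced flux
    f = |F|/(c*E), interpolating between the optically thick limit
    (f=0, chi=1/3) and free streaming (f=1, chi=1)."""
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))
```

A two-moment scheme closed this way needs no extra information; the approach described in the abstract instead lets a coarse Monte Carlo evolution supply any or all of the missing higher moments and spectral information.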
Efficient calculation of dissipative quantum transport properties in semiconductor nanostructures
Energy Technology Data Exchange (ETDEWEB)
Greck, Peter
2012-11-26
We present a novel quantum transport method that follows the non-equilibrium Green's function (NEGF) framework but sidesteps any self-consistent calculation of lesser self-energies by replacing them with a quasi-equilibrium expression. We term this method the multi-scattering Buettiker-probe (MSB) method. It generalizes the so-called Buettiker-probe model but takes into account all relevant individual scattering mechanisms. It is orders of magnitude more efficient than a fully self-consistent non-equilibrium Green's function calculation for realistic devices, yet accurately reproduces the results of the latter method as well as experimental data. The method is fairly easy to implement and opens the path towards realistic three-dimensional quantum transport calculations. In this work, we review the fundamentals of the non-equilibrium Green's function formalism for quantum transport calculations. We then introduce our novel MSB method after briefly reviewing the original Buettiker-probe model. Finally, we compare the results of the MSB method to NEGF calculations as well as to experimental data. In particular, we calculate quantum transport properties of quantum cascade lasers in the terahertz (THz) and mid-infrared (MIR) spectral domains. With a device optimization algorithm based upon the MSB method, we propose a novel THz quantum cascade laser design. It uses a two-well period with alternating barrier heights and complete carrier thermalization for the majority of the carriers within each period. We predict THz laser operation at temperatures up to 250 K, implying a new temperature record.
Energy Technology Data Exchange (ETDEWEB)
Fomin, B.A. [CPTEC/INPE, Rod. Presidente Dutra, km.40, Cachoeira Paulsta, Sao Paulo, 12630-000 (Brazil)]. E-mail: fomin@cptec.inpe.br
2006-03-15
An algorithm for calculations of the longwave radiation in cloudy and aerosol slab atmospheres is described. It is based on the line-by-line and Monte Carlo methods and is suitable for accurate treatment of both gaseous absorption and particulate multiple scattering in any spectral region; other published algorithms of comparable accuracy can only make calculations in narrow spectral regions. The algorithm is therefore well suited for radiation code validation, as well as for theoretical investigations of radiative transfer in clouds and aerosols and for satellite signal simulations.
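The Monte Carlo half of such an algorithm rests on sampling photon free paths from the Beer-Lambert law. A minimal illustration (a purely absorbing slab, so the result can be checked against exp(-tau); this is a generic sketch, not the algorithm of this paper):

```python
import math
import random

def slab_transmission(tau, n=100000, seed=7):
    """Fraction of photons that cross a purely absorbing slab of optical
    depth tau, sampling optical free paths s = -ln(xi) from the
    Beer-Lambert law."""
    rng = random.Random(seed)
    crossed = sum(1 for _ in range(n) if -math.log(rng.random()) > tau)
    return crossed / n

# For pure absorption the Monte Carlo estimate reproduces exp(-tau).
```

Scattering and line-by-line gaseous absorption enter by splitting the extinction into absorption and scattering parts at each sampled collision, with spectrally resolved coefficients.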
Three-dimensional transport calculations for a PWR core; Calcul de coeur R.E.P. en transport 3D
Energy Technology Data Exchange (ETDEWEB)
Richebois, E
2000-07-01
The objective of this work is to define improved 3-D core calculation methods based on transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interfaces and boundaries and in regions of high heterogeneity (bundles with absorber rods). In order to apply transport theory, a new method for calculating reflector constants has been developed, since traditional methods were only suited to 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's Saint-Laurent B1 power reactor with MOX loading. The advantages of a 3-D core transport calculation scheme over diffusion methods have been highlighted; there are a considerable number of significant effects and potential advantages to be gained in rod worth calculations, for instance. These preliminary results, obtained with one particular cycle, will have to be confirmed by more systematic analysis. Accidents like MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated, as they constitute challenging situations where anisotropy is high and/or flux gradients are steep. The method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)
International Nuclear Information System (INIS)
Damiani, Daniela D.; Cruz, Carlos M.; Pinnera, Ibrahin; Abreu, Yamiel; Leyva, Antonio
2015-01-01
New developments and simulations regarding the interactions of incident gamma radiation with solid materials using the MCSAD (Monte Carlo Simulation of Atom Displacement) code are presented. In this code, Monte Carlo algorithms are applied to sample all electron and gamma interaction processes occurring during transport through a solid target, especially those connected with atom displacement events. In particular, the limiting angle for elastic scattering of electrons is calculated with a new approach, which allows the correct splitting of single electron scattering processes at higher scattering angles. In this way, the probability of single electron scattering processes transferring recoil energies high enough to produce atom displacements is calculated and sampled in the MCSAD code. In addition, some other new theoretical aspects improving on previous versions are considered, such as the selection of the threshold displacement energy at a given atom site as a function of the atom recoil direction. (Author)
Directory of Open Access Journals (Sweden)
P Shokrani
2009-10-01
Introduction & Objective: Brachytherapy using I-125 radioactive seeds in removable episcleral plaques (EP) is often used in the treatment of ocular malignant melanoma. The radioactive seeds are fixed in a gold bowl-shaped plaque, which is sutured to the sclera surface corresponding to the base of the intraocular tumor, allowing localized radiation dose delivery to the tumor. Minimum target doses as high as 85 Gy are directed at the malignant tumor. The aim of this study was to develop a Monte Carlo simulation of an ocular plaque in order to calculate the resulting isodose distributions. Materials & Methods: The MCNP-4C Monte Carlo code was used to simulate an episcleral plaque treatment plan. A 20-mm Collaborative Ocular Melanoma Study (COMS) plaque with 3 I-125 seeds of model 6711 was used. The resulting dose distributions, including central-axis dose and off-axis dose profiles, were calculated in a water phantom with a 12 mm radius. The calculated dose distributions were compared to the corresponding doses measured by Knuten et al., 2001. Results: The central-axis dose calculations show a rapid dose fall-off, which is an important factor in the selection of an appropriate eye plaque for the management of tumors of known dimension. The calculated off-axis dose profiles show decreased dose uniformity at distances close to the plaque; dose uniformity increased with distance from the plaque. Conclusion: Monte Carlo simulation of eye plaques can be a useful tool in the design, development and treatment planning of ocular radioactive plaques.
Verification of the VEF photon beam model for dose calculations by the voxel-Monte-Carlo-algorithm
International Nuclear Information System (INIS)
Kriesen, S.; Fippel, M.
2005-01-01
The VEF linac head model (VEF, virtual energy fluence) was developed at the University of Tuebingen to determine the primary fluence for calculations of dose distributions in patients by the Voxel-Monte-Carlo-Algorithm (XVMC). This analytical model can be fitted to any therapy accelerator head by measuring only a few basic dose data; therefore, time-consuming Monte-Carlo simulations of the linac head become unnecessary. The aim of the present study was the verification of the VEF model by means of water-phantom measurements, as well as the comparison of this system with a common analytical linac head model of a commercial planning system (TMS, formerly HELAX or MDS Nordion, respectively). The results show that both the VEF and the TMS models can very well simulate the primary fluence. However, the VEF model proved superior in the simulations of scattered radiation and in the calculations of strongly irregular MLC fields. Thus, an accurate and clinically practicable tool for the determination of the primary fluence for Monte-Carlo-Simulations with photons was established, especially for the use in IMRT planning. (orig.)
International Nuclear Information System (INIS)
Fraass, Benedick A.; Smathers, James; Deye, James
2003-01-01
Due to the significant interest in Monte Carlo dose calculations for external beam megavoltage radiation therapy from both the research and commercial communities, a workshop was held in October 2001 to assess the status of this computational method with regard to use for clinical treatment planning. The Radiation Research Program of the National Cancer Institute, in conjunction with the Nuclear Data and Analysis Group at the Oak Ridge National Laboratory, gathered a group of experts in clinical radiation therapy treatment planning and Monte Carlo dose calculations, and examined issues involved in clinical implementation of Monte Carlo dose calculation methods in clinical radiotherapy. The workshop examined the current status of Monte Carlo algorithms, the rationale for using Monte Carlo, algorithmic concerns, clinical issues, and verification methodologies. Based on these discussions, the workshop developed recommendations for future NCI-funded research and development efforts. This paper briefly summarizes the issues presented at the workshop and the recommendations developed by the group
Design of a transport calculation system for logging sondes simulation
International Nuclear Information System (INIS)
Marquez Damian, Jose Ignacio
2005-01-01
Analysis of the resources available in the Earth's crust is performed by different techniques, one of which is neutron logging. The design of the sondes used for such logging is supported by laboratory experiments as well as by numerical calculations. This work presents several calculation schemes designed to simplify the task of those who must plan such experiments or optimize the parameters of this kind of sonde. These schemes use transport calculation codes, especially DaRT, TORT and MCNP, and cross-section processing modules from the SCALE system. Additionally, a system for postprocessing DaRT and TORT data using OpenDX is presented. It allows analysis of the spatial distribution of the scalar flux, as well as cross-section condensation and reaction-rate calculation
International Nuclear Information System (INIS)
Santos, W.S.; Carvalho Jr, A.B.; Hunt, J.G.; Maia, A.F.
2014-01-01
The objective of this study was to estimate the doses to the physician and the nurse assistant at different positions during interventional radiology procedures. The effective doses obtained for the physician and at the points occupied by other workers were normalised by the air kerma-area product (KAP), yielding conversion coefficients (CCs). The simulations were performed for two X-ray spectra (70 kVp and 87 kVp) using the radiation transport code MCNPX (version 2.7.0) and a pair of anthropomorphic voxel phantoms (MASH/FASH) representing the patient and the medical professional at positions from 7 cm to 47 cm from the patient. The X-ray tube was represented by a point source positioned in the anterior-posterior (AP) and posterior-anterior (PA) projections. The CCs can be used to calculate effective doses, which in turn are related to stochastic effects. Knowing the CC values and the KAP measured on an X-ray unit for a similar exposure, medical professionals will be able to estimate their own effective dose. - Highlights: ► This study presents a series of simulations to determine scatter dose in IR. ► Irradiation of the worker is non-uniform and a part of his body is shielded. ► With the CCs it is possible to estimate the occupational doses in the CA examination. ► Protection of medical personnel in IR is an important issue of radiological protection
Directory of Open Access Journals (Sweden)
Kępisty Grzegorz
2015-09-01
In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed for a simplified high-temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
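The difference between a staircase step (reaction rates frozen at the beginning of each step) and a slope/predictor-corrector step can be seen on a toy depletion problem; this is a one-nuclide caricature with a density-dependent rate standing in for flux renormalization, not the MCB5 implementation:

```python
import math

def deplete(n0, k0, dt, n_steps, scheme="staircase"):
    """Toy burnup problem dN/dt = -(k0*N)*N, where the rate coefficient
    k0*N plays the role of a flux that changes as the material depletes.
    'staircase' freezes the rate at the beginning of each step; 'slope' is
    a predictor-corrector that re-evaluates it at the step midpoint."""
    N = n0
    for _ in range(n_steps):
        rate = k0 * N
        if scheme == "staircase":
            N *= math.exp(-rate * dt)
        else:
            Np = N * math.exp(-rate * dt)      # predictor (staircase) step
            mid_rate = k0 * 0.5 * (N + Np)     # corrected, midpoint rate
            N *= math.exp(-mid_rate * dt)
    return N

# Exact solution for n0=1, k0=1 at t=1 is N = 1/(1 + t) = 0.5; with four
# steps the predictor-corrector error is far below the staircase error.
```

In a real Monte Carlo burnup code the "rate" is a full transport solution per step, which is why the choice of stepping scheme and of source normalization matters so much for spatial stability.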
International Nuclear Information System (INIS)
Gu, J.; George Xu, X.; Caracappa, P. F.; Liu, B.
2013-01-01
To investigate the radiation dose to the fetus, retrospective tube current modulation (TCM) data were selected from archived clinical records. This paper describes the calculation of fetal doses using these retrospective TCM data and Monte Carlo (MC) simulations. Three TCM schemes were adopted for use with three pregnant patient phantoms. MC simulations were used to model the CT scanners, the TCM schemes and the pregnant patients. Comparisons between organ doses from the TCM schemes and those from non-TCM schemes show that the three TCM schemes reduced fetal doses by 14, 18 and 25%, respectively. These organ doses were also compared with those from the ImPACT calculation. It is found that the difference between the calculated fetal dose and the ImPACT-reported dose is as high as 46%. This work demonstrates methods to study organ doses from various TCM protocols and potential ways to improve the accuracy of CT dose calculation for pregnant patients. (authors)
International Nuclear Information System (INIS)
Balos, Y.; Timurtuerkan, E. B.; Yorulmaz, N.; Bozkurt, A.
2009-01-01
In determining the radiation background of a region, it is important to carry out environmental radioactivity measurements in soil, water and air to determine their contributions to the dose rate in air. This study aims to determine the dose conversion coefficients (in (nGy/h)/(Bq/kg)) that are used to convert radionuclide activity concentrations in soil (in Bq/kg) to dose rates in air (in nGy/h) using the Monte Carlo method. An isotropic source emitting monoenergetic photons is assumed to be uniformly distributed in soil. The doses deposited by photons in the organs and tissues of a mathematical phantom are determined with the Monte Carlo package MCNP. The organ doses are then used, together with radiation weighting factors and organ weighting factors, to obtain effective doses for the energy range 100 keV-3 MeV, which in turn are used to determine the dose rates in air per unit specific activity.
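Once such coefficients have been computed, applying them is a one-line conversion. The coefficient value below is only an illustrative order of magnitude (similar to published soil-to-air coefficients), not a result of this study:

```python
def air_dose_rate(activity_bq_per_kg, coeff):
    """Dose rate in air (nGy/h) from a soil activity concentration (Bq/kg)
    and a Monte Carlo derived conversion coefficient in (nGy/h)/(Bq/kg)."""
    return activity_bq_per_kg * coeff

# Illustrative numbers only: 400 Bq/kg of a soil radionuclide with a
# coefficient of 0.042 (nGy/h)/(Bq/kg) gives about 16.8 nGy/h in air.
rate = air_dose_rate(400.0, 0.042)
```

In practice one sums such terms over the measured radionuclides, each with its own coefficient.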
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
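The correlated-sampling idea (tallying the difference between histories that share the same random numbers) can be demonstrated on a toy tally; the two "geometries" here are arbitrary functions, not PTRAN physics, and all values are illustrative:

```python
import random

def stats(xs):
    """Sample mean and unbiased sample variance."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

def dose_difference(correlated, n=20000, seed=3):
    """Tally the difference between a 'heterogeneous' and a 'homogeneous'
    toy dose response. With correlated sampling, both tallies reuse the
    same random history; otherwise they use independent histories."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        x = rng.random()
        y = x if correlated else rng.random()
        d_hom = x ** 2                       # homogeneous-geometry tally
        d_het = 0.95 * y ** 2 + 0.02 * y     # slightly perturbed geometry
        diffs.append(d_het - d_hom)
    return stats(diffs)

# The expected difference is the same either way, but the variance of the
# correlated estimator is orders of magnitude smaller, which is the source
# of the efficiency gains reported in the abstract.
```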
AlfaMC: A fast alpha particle transport Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Peralta, Luis, E-mail: luis@lip.pt [Faculdade de Ciências da Universidade de Lisboa (Portugal); Laboratório de Instrumentação e Física Experimental de Partículas (Portugal); Louro, Alina [Laboratório de Instrumentação e Física Experimental de Partículas (Portugal)
2014-02-11
AlfaMC is a Monte Carlo simulation code for the transport of alpha particles. The code is based on the Continuous Slowing Down Approximation and uses the NIST/ASTAR stopping-power database. It uses a powerful geometrical package, which allows the coding of complex geometries, and a flexible histogramming package, which greatly eases the scoring of results. The code is tailored for microdosimetric applications in which speed is a key factor. A comparison with the SRIM code is made for the deposited energy in thin layers and the range in air, mylar, aluminum and gold. The general agreement between the two codes is good for beam energies between 1 and 12 MeV. -- Highlights: • AlfaMC is a Monte Carlo program for fast alpha particle transport in matter. • The model is accurate within a few percent in the energy range of 1–12 MeV. • AlfaMC uses a combinatorial geometry package allowing the modeling of complex bodies.
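The Continuous Slowing Down Approximation underlying AlfaMC reduces range calculation to an integral of the reciprocal stopping power. A sketch with a toy power-law stopping power, chosen so the result can be checked analytically (the real code would use the NIST/ASTAR tables instead):

```python
def csda_range(E0, stopping_power, dE=0.001):
    """Continuous Slowing Down Approximation: pathlength
    R = integral from 0 to E0 of dE / S(E), evaluated with a simple
    Riemann sum. stopping_power(E) would normally come from a table such
    as ASTAR; here any callable works."""
    R, E = 0.0, E0
    while E > dE:
        R += dE / stopping_power(E)
        E -= dE
    return R

# A toy power-law stopping power S(E) = E**-0.7 has the analytic range
# R = E0**1.7 / 1.7, which the sum reproduces.
```

In a CSDA transport code this same integral, evaluated piecewise along the track, also gives the energy deposited in each geometric region the particle crosses.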
Creating and using a type of free-form geometry in Monte Carlo particle transport
International Nuclear Information System (INIS)
Wessol, D.E.; Wheeler, F.J.
1993-01-01
While reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects, ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicists used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientists relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientists to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists only allowed efficient searches through analytical surfaces or objects. Over time, there appears to have been some merging of the variance reduction strategies between the two disciplines. This is an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model, permitting efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and with a model employing an implicit (analytical) geometric representation
Premar-2: a Monte Carlo code for radiative transport simulation in atmospheric environments
International Nuclear Information System (INIS)
Cupini, E.
1999-01-01
The features of the PREMAR-2 code, which performs Monte Carlo simulation of radiation transport in atmospheric environments in the infrared-ultraviolet frequency range, are described. With respect to the previously developed PREMAR code, the new code supports, besides plane multilayers, spherical multilayers and finite sequences of vertical layers, each with its own atmospheric behaviour, together with the refraction phenomenon, so that long-range, highly slanted paths can now be modelled more faithfully. A zenithal angular dependence of the albedo coefficient has moreover been introduced. Lidar systems, with spatially independent source and telescope, can again be simulated, and this latest version of the code also supports sensitivity analyses: the consequences for radiation transport of small perturbations in physical components of the atmospheric environment may be analyzed and the related effects on the results of interest estimated. The code requires a library of physical data (reaction coefficients, phase functions and refraction indexes) providing the essential features of the environment of interest needed for the Monte Carlo simulation. Variance reduction techniques have been enhanced in the PREMAR-2 code, for instance by introducing a local forced-collision technique especially suited to lidar system simulations. Encouraging comparisons between code and experimental results carried out at the Brasimone Centre of ENEA have so far been obtained, even if further checks of the code remain to be performed.
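The local forced-collision idea mentioned above can be sketched as follows: on a chosen segment the particle is split into an uncollided part and a part forced to collide inside the segment, with the collision distance sampled from the exponential distribution truncated to the segment. All names and the scalar-coefficient setting are generic illustrations, not PREMAR-2's actual variables:

```python
import math
import random

def forced_collision(weight, sigma_t, d, rng=random.random):
    """Split a particle of given weight on a segment of length d.
    sigma_t: total interaction coefficient (1/length).
    Returns (s, w_collided, w_uncollided): the forced collision site
    s in [0, d], the weight forced to collide there, and the weight
    that traverses the segment without colliding."""
    p_col = 1.0 - math.exp(-sigma_t * d)          # P(collide within [0, d])
    s = -math.log(1.0 - rng() * p_col) / sigma_t  # truncated-exponential sample
    return s, weight * p_col, weight * (1.0 - p_col)
```

In a lidar geometry this guarantees a scoring event in every beam segment, which is exactly the variance reduction role the abstract describes.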
Penelope-2006: a code system for Monte Carlo simulation of electron and photon transport
International Nuclear Information System (INIS)
2006-01-01
The computer code system PENELOPE (version 2006) performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials for a wide energy range, from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the PENELOPE code system, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. These proceedings contain the corresponding manual and teaching notes of the PENELOPE-2006 workshop and training course, held on 4-7 July 2006 in Barcelona, Spain. (author)
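The quadric bodies handled by PENGEOM can be illustrated with the generic implicit form f(r) = rᵀAr + b·r + c, where f < 0 is conventionally "inside". The sketch below is a generic illustration of that surface test, not PENGEOM's actual interface:

```python
import numpy as np

def quadric(A, b, c, p):
    """Evaluate the quadric f(p) = p.A.p + b.p + c at point p.
    f < 0 means inside the body, f > 0 outside, f = 0 on the surface."""
    p = np.asarray(p, dtype=float)
    return float(p @ A @ p + b @ p + c)

# A unit sphere as a quadric: A = I, b = 0, c = -1
sphere = (np.eye(3), np.zeros(3), -1.0)
```

Planes, spheres and cylinders all fit this one form, which is why a single surface test suffices for tracking through homogeneous bodies bounded by quadrics.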
Calculation of ion stopping in dense plasma by the Monte-Carlo method
Kodanova, S. K.; Ramazanov, T. S.; Issanova, M. K.; Bastykova, N. Kh; Golyatina, R. I.; Maiorov, S. A.
2018-01-01
In this paper, the Monte-Carlo method was used to simulate ion trajectories in the dense plasma of inertial confinement fusion. The results of the computer simulation are numerical data on dynamic characteristics such as energy loss, penetration depth, effective particle range, stopping and straggling. Based on these results, a program for three-dimensional visualization of ion trajectories in the dense plasma of inertial confinement fusion was developed.
Supersonic flow with shock waves. Monte-Carlo calculations for low density plasma. I
International Nuclear Information System (INIS)
Almenara, E.; Hidalgo, M.; Saviron, J. M.
1980-01-01
This report gives preliminary information about a Monte Carlo procedure to simulate supersonic flow of a low-density plasma past a body in the transition regime. A computer program has been written for a UNIVAC 1108 machine to account for a plasma composed of neutral molecules and positive and negative ions. Different and rather general body geometries can be analyzed. Special attention is paid to the growth of detached shock waves in front of the body. (Author) 30 refs
Supersonic flow with shock waves. Monte-Carlo calculations for low density plasma. Part. 1
International Nuclear Information System (INIS)
Almenara, E.; Hidalgo, M.; Saviron, J.M.
1980-01-01
Preliminary information is given about a Monte Carlo procedure to simulate supersonic flow of a low-density plasma past a body in the transition regime. A computer program has been written for a Univac 1108 machine to account for a plasma composed of neutral molecules and positive and negative ions. Different and rather general body geometries can be analyzed. Special attention is paid to the growth of detached shock waves in front of the body. (author)
Use of implicit Monte Carlo radiation transport with hydrodynamics and compton scattering
International Nuclear Information System (INIS)
Fleck, J.A. Jr.
1971-03-01
It is shown that the combination of implicit radiation transport with hydrodynamics, Compton scattering, and any other energy transport can be simply carried out by a 'splitting' procedure. Contributions to material energy exchange can be reckoned separately for hydrodynamics, radiation transport without scattering, Compton scattering, plus any other possible energy exchange mechanism. The radiation transport phase of the calculation would be implicit, but the hydrodynamics and Compton portions would not, leading to possible time step controls. The time step restrictions which occur in radiation transfer due to large Planck mean absorption cross-sections would not occur.
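The splitting idea can be sketched on a single material temperature T: each mechanism updates the material energy in turn, with the radiation exchange taken implicitly so that a large coupling coefficient does not restrict the time step. The linear coupling and the coefficients below are illustrative assumptions, not Fleck's actual discretization:

```python
def split_step(T, T_rad, dt, c_rad, c_other):
    """One split time step for a material temperature T.
    1) Implicit radiation exchange: solve T_new = T - dt*c_rad*(T_new - T_rad),
       which is unconditionally stable even for very large dt*c_rad.
    2) Explicit stand-in for the remaining mechanisms (hydro work, Compton),
       which would carry their own time step controls."""
    T = (T + dt * c_rad * T_rad) / (1.0 + dt * c_rad)  # implicit solve
    T = T - dt * c_other * T                           # explicit update
    return T
```

The point of the split is visible in the first line: no matter how stiff the radiation coupling, the implicit update only relaxes T toward T_rad rather than overshooting.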
CALCULATION OF POLLUTION DYNAMICS NEAR RAILWAY TERRITORY DURING COAL TRANSPORTATION
Directory of Open Access Journals (Sweden)
M. M. Biliaiev
2017-02-01
Purpose. The article aims to develop a 3D numerical model for the prediction of atmospheric pollution during transportation of bulk cargo in railway cars. Methodology. To solve this problem, a three-dimensional numerical model was developed, based on the equation governing transport of dust pollution in the air by wind and atmospheric turbulent diffusion. For the numerical integration of the equation simulating dust transport, an implicit difference scheme was used. In constructing the difference scheme, the original transport equation was first split into a sequence of three equations: the first accounts for dust transport along the paths, the second for dust transport under the influence of atmospheric turbulent diffusion, and the third for the change of dust concentration in the air due to emission from the cars. The unknown pollutant concentration at every splitting step is determined by an explicit scheme (the method of running count), which provides a simple numerical implementation of the split equations. The developed numerical model is the basis of a specialized computer program. On the basis of the constructed numerical model, a computational experiment was carried out to assess the level of air pollution at a railway station during the motion of a train carrying coal. Findings. The authors developed a 3D numerical model belonging to the class of «screening models». The model takes into account the main physical factors affecting the dispersion of dust pollution in the atmosphere during coal transportation. The proposed numerical model requires little computer time and can be implemented on small and medium-power computers. It can be used for rapid calculations of the dynamics of air pollution when transporting coal by rail. Calculations to determine the pollutant concentration and formation of the
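The three-way splitting described above can be sketched in one dimension: advection, turbulent diffusion, and car emission are applied in sequence each time step. Explicit updates and periodic boundaries are simplifications for illustration; the article's scheme is implicit and three-dimensional:

```python
import numpy as np

def split_step(c, u, D, q, dx, dt):
    """One split step for dust concentration c on a periodic 1-D grid.
    1) wind advection (explicit upwind), 2) turbulent diffusion,
    3) emission source q (per unit time) from the cars."""
    c = c - u * dt / dx * (c - np.roll(c, 1))                           # advection
    c = c + D * dt / dx**2 * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) # diffusion
    c = c + dt * q                                                      # emission
    return c
```

For this explicit sketch, stability requires u*dt/dx ≤ 1 and D*dt/dx² ≤ 1/2; the implicit scheme of the article removes such restrictions. With periodic boundaries, the advection and diffusion steps conserve total mass, so each step adds exactly dt times the total emission.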
International Nuclear Information System (INIS)
Petoussi, N.; Zankl, M.; Williams, G.; Veit, R.; Drexler, G.
1987-01-01
There has been some evidence that cervical cancer patients who were treated by radiotherapy had an increased incidence of second primary cancers noticeable 15 years or more after the radiotherapy. The data suggested that high-dose pelvic irradiation was associated with an increase in cancers of the bladder, kidneys, rectum, ovaries, corpus uteri, and non-Hodgkin's lymphoma, but not leukemia (Kleinerman et al., 1982; Morton, 1973). The aim of the present work is to estimate the absorbed dose to various organs and tissues in the body due to radiotherapy treatment for cervical cancer. Monte Carlo calculations were performed to calculate the organ absorbed doses resulting from intracavitary sources such as ovoids and applicators filled or loaded with radium, Co-60 and Cs-137. For that purpose a routine which simulates an internal source was constructed and added to the existing Monte Carlo code (GSF-Bericht S-885, Kramer et al.). Calculations were also made for external beam therapy. Various anterior, posterior and lateral fields were applied, resulting from megavoltage, Co-60 and Cs-137 therapy machines. The calculated organ doses are tabulated in three different ways: as organ dose per air kerma in the reference field, according to the recommendations of the International Commission on Radiation Units and Measurements (ICRU Report No 38, 1985); as organ dose per surface dose; and as organ dose per tissue dose at Point B. (orig.)
Axial SPN and radial MOC coupled whole core transport calculation
International Nuclear Information System (INIS)
Cho, Jin-Young; Kim, Kang-Seog; Lee, Chung-Chan; Zee, Sung-Quun; Joo, Han-Gyu
2007-01-01
The simplified PN (SPN) method is applied to the axial solution of a whole-core transport calculation based on the two-dimensional (2-D) method of characteristics (MOC). A sub-plane scheme and the nodal expansion method (NEM) are employed for the solution of the one-dimensional (1-D) SPN equations involving a radial transverse leakage. The SPN solver replaces the axial diffusion solver of the DeCART direct whole-core transport code to provide more accurate, transport-theory-based axial solutions. In the sub-plane scheme, the radial equivalent homogenization parameters generated by the local MOC for a thick plane are assigned to multiple finer planes in the subsequent global three-dimensional (3-D) coarse-mesh finite difference (CMFD) calculation, in which the NEM is employed for the axial solution. The sub-plane scheme induces a much smaller nodal error while having little impact on the axial leakage representation of the radial MOC calculation. The performance of the sub-plane scheme and the SPN nodal transport solver is examined by solving a set of demonstrative problems and the C5G7MOX 3-D extension benchmark problems. It is shown in the demonstrative problems that the nodal error, reaching up to 1,400 pcm in a rodded case, is reduced to 10 pcm by introducing 10 sub-planes per MOC plane, and the transport error is reduced from about 150 pcm to 10 pcm by using SP3. Also, in the C5G7MOX rodded configuration B problem, the eigenvalue and pin power errors of 180 pcm and 2.2% for the 10-sub-plane diffusion case are reduced to 40 pcm and 1.4%, respectively, for SP3, with only about a 15% increase in computing time. The SP5 case gives very similar results to the SP3 case. (author)
Energy Technology Data Exchange (ETDEWEB)
Verde Velasco, J. M.; Garcia Repiso, S.; Martin rincon, C.; Ramos Pacho, J. A.; Delgado Aparicio, J. M.; Perez alvarez, M. E.; Saez Beltran, M.; Gomez Gonzalez, N.; Cons Perez, N.; Sena Espinel, E.
2013-07-01
The implementation of a Monte Carlo algorithm requires not only a careful series of steps but also the adjustment of various calculation parameters, which influence both the accuracy of the dose calculation and the time it requires; a compromise must be reached that achieves acceptable calculation accuracy within an acceptable calculation time. In this paper we present our experience with this tuning. (Author)
Considerations of beta and electron transport in internal dose calculations. Progress report
Energy Technology Data Exchange (ETDEWEB)
Bolch, W.E.
1994-11-01
The goal of this particular task is to consider, for the first time, the explicit transport of beta particles and photon-generated electrons in the series of six phantoms developed by Cristy and Eckerman (1987) at the Oak Ridge National Laboratory. In their report, ORNL/TM-8381, specific absorbed fractions of energy are reported for phantoms representing the newborn (3.4 kg), the one-year-old (9.8 kg), the five-year-old (19 kg), the ten-year-old (32 kg), the fifteen-year-old/adult female (55-58 kg), and the adult male (70 kg). Radiation transport calculations were performed with the Monte Carlo code ALGAMP which allows photon transport only. In subsequent calculations of radionuclide S values as is done in the MIRDOSE2 computer program, electron absorbed fractions are thus considered to be either unity or zero depending upon whether the source region does or does not equal the target region, respectively.
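The photon-only assumption described above reduces the electron absorbed fraction to a trivial rule, which is exactly what explicit beta/electron transport replaces with intermediate values. A sketch of the classical rule (names are illustrative, not MIRDOSE2's API):

```python
def electron_absorbed_fraction(source_region, target_region):
    """Classical non-transport assumption for beta and electron emissions:
    all electron energy is absorbed where it is emitted, so the absorbed
    fraction is 1 for self-irradiation and 0 otherwise. Explicit electron
    transport in the phantoms yields values between these extremes,
    especially for small organs and walled organs."""
    return 1.0 if source_region == target_region else 0.0
```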
ASOP, Shield Calculation, 1-D, Discrete Ordinates Transport
International Nuclear Information System (INIS)
1993-01-01
1 - Nature of physical problem solved: ASOP is a shield optimization calculational system based on the one-dimensional discrete ordinates transport program ANISN. It has been used to design optimum shields for space applications of SNAP zirconium-hydride-uranium-fueled reactors and uranium-oxide-fueled thermionic reactors and to design beam stops for the ORELA facility. 2 - Method of solution: ASOP generates coefficients of linear equations describing the logarithm of the dose and dose-weight derivatives as functions of position from data obtained in an automated sequence of ANISN calculations. With the dose constrained to a design value and all dose-weight derivatives required to be equal, the linear equations may be solved for a new set of shield dimensions. Since changes in the shield dimensions may cause the linear functions to change, the entire procedure is repeated until convergence is obtained. The detailed calculations of the radiation transport through shield configurations for every step in the procedure distinguish ASOP from other shield optimization computer code systems which rely on multiple component sources and attenuation coefficients to describe the transport. 3 - Restrictions on the complexity of the problem: Problem size is limited only by machine size
Uniform Gauss-Weight Quadratures for Discrete Ordinate Transport Calculations
International Nuclear Information System (INIS)
Carew, John F.; Hu, Kai; Zamonsky, Gabriel
2000-01-01
Recently, a uniform equal-weight quadrature set, UEn, and a uniform Gauss-weight quadrature set, UGn, have been derived. These quadratures have the advantage over the standard level-symmetric LQn quadrature sets in that the weights are positive for all orders, and the transport solution may be systematically converged by increasing the order of the quadrature set. As the order of the quadrature is increased, the points approach a uniform continuous distribution on the unit sphere, and the quadrature is invariant with respect to spatial rotations. The numerical integrals converge for continuous functions as the order of the quadrature is increased. The numerical characteristics of the UEn quadrature set have been investigated previously. In this paper, numerical calculations are performed to evaluate the application of the UGn quadrature set in typical transport analyses. A series of DORT transport calculations of the >1-MeV neutron flux have been performed for a set of pressure-vessel fluence benchmark problems. These calculations employed the UGn (n = 8, 12, 16, 24, and 32) quadratures and indicate that the UGn solutions have converged to within ∼0.25%. The converged UGn solutions are found to be comparable to the UEn results and are more accurate than the level-symmetric S16 predictions
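A product quadrature in the spirit of a Gauss-weight set can be sketched by pairing Gauss-Legendre polar cosines with equally spaced azimuths; the weights are positive at every order and the numerical integrals converge as the order grows. The construction below is a generic illustration of such a quadrature, not the published UGn set:

```python
import numpy as np

def sphere_quadrature(n):
    """Gauss-weight product quadrature on the unit sphere: n Gauss-Legendre
    polar cosines times n uniform azimuths. Returns (directions, weights)
    with weights summing to 4*pi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n    # uniform azimuths
    s = np.sqrt(1.0 - mu**2)
    omx = np.outer(s, np.cos(phi)).ravel()
    omy = np.outer(s, np.sin(phi)).ravel()
    omz = np.repeat(mu, n)
    w = np.repeat(w_mu, n) * (2.0 * np.pi / n)
    return np.stack([omx, omy, omz], axis=1), w

def integrate(f, n=16):
    """Integrate f(directions) over the unit sphere with the quadrature."""
    om, w = sphere_quadrature(n)
    return float(np.sum(w * f(om)))
```

Checking a known angular moment (the integral of μ² over the sphere is 4π/3) is the standard sanity test for any discrete-ordinates set.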
Accounting for chemical kinetics in field scale transport calculations
International Nuclear Information System (INIS)
Bryan, N.D.
2005-01-01
The modelling of column experiments has shown that the humic acid mediated transport of metal ions is dominated by the non-exchangeable fraction. Metal ions enter this fraction via the exchangeable fraction, and may transfer back again. However, in both directions these chemical reactions are slow. Whether or not a kinetic description of these processes is required during transport calculations, or an assumption of local equilibrium will suffice, will depend upon the ratio of the reaction half-time to the residence time of species within the groundwater column. If the flow rate is sufficiently slow or the reaction sufficiently fast then the assumption of local equilibrium is acceptable. Alternatively, if the reaction is sufficiently slow (or the flow rate fast), then the reaction may be 'decoupled', i.e. removed from the calculation. These distinctions are important, because calculations involving chemical kinetics are computationally very expensive, and should be avoided wherever possible. In addition, column experiments have shown that the sorption of humic substances and metal-humate complexes may be significant, and that these reactions may also be slow. In this work, a set of rules is presented that dictate when the local equilibrium and decoupled assumptions may be used. In addition, it is shown that in all cases to a first approximation, the behaviour of a kinetically controlled species, and in particular its final distribution against distance at the end of a calculation, depends only upon the ratio of the reaction first order rate to the residence time, and hence, even in the region where the simplifications may not be used, the behaviour is predictable. In this way, it is possible to obtain an estimate of the migration of these species, without the need for a complex transport calculation. (orig.)
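The rule set described above amounts to comparing the first-order rate with the residence time through their dimensionless product (a Damköhler-type number). A sketch with illustrative thresholds; the cutoff values are assumptions for illustration, not the paper's calibrated rules:

```python
def coupling_regime(k, tau, eq_cut=100.0, dec_cut=0.01):
    """Classify a first-order reaction (rate k, 1/s) against the column
    residence time tau (s). k*tau >> 1: local equilibrium is acceptable;
    k*tau << 1: the reaction can be decoupled (removed from the
    calculation); otherwise a full kinetic description is needed."""
    kt = k * tau
    if kt >= eq_cut:
        return "local equilibrium"
    if kt <= dec_cut:
        return "decoupled"
    return "kinetic"
```

Only the middle regime forces the computationally expensive kinetic transport calculation, which is why classifying each reaction this way before a field-scale run pays off.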
International Nuclear Information System (INIS)
Zhang, Dingkang; Rahnema, Farzad; Ougouag, Abderrfi M.
2011-01-01
A response-based local transport method has been developed in 2-D (r, θ) geometry for coupling to any coarse-mesh (nodal) diffusion method/code. The Monte Carlo method is first used to generate a pre-computed response-function library for each unique coarse mesh in the transport domain (e.g., the outer reflector region of the Pebble Bed Reactor). The scalar flux and net current at the diffusion/transport interface provided by the diffusion method are used as an incoming surface source to the transport domain. A deterministic iterative sweeping method together with the response-function library is utilized to compute the local transport solution within all transport coarse meshes. After the partial angular currents crossing the coarse-mesh surfaces are converged, albedo coefficients are computed as boundary conditions for the diffusion method. The iteration on the albedo boundary condition (for the diffusion method via transport) and the incoming angular flux boundary condition (for the transport via diffusion) is continued until convergence is achieved. The method was tested in a simplified 2-D (r, θ) pebble bed reactor problem consisting of an inner reflector, an annular fuel region and a controlled outer reflector. The comparisons have shown that the results of the response-function-based transport method agree very well with a direct MCNP whole-core solution. The agreement in coarse-mesh averaged flux was found to be excellent: a relative difference of about 0.18% and a maximum difference of about 0.55%. Note that the MCNP uncertainty was less than 0.1%. (author)
Caon, Martin
2013-09-01
The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the included tissues. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5%, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6%), CT abdomen (by 9.5%), CT chest + abdomen + pelvis (by 6%), CT chest + abdomen (by 9.6%), CT chest (by 10.1%) and cardiac CT (by 11.5%). These values, along with published values of effective dose from CT that were calculated for both sets of tissue weighting factors, were used to determine single values of the ratio of ICRP103- to ICRP60-calculated effective dose for seven CT examinations. The following values of ICRP103:ICRP60 are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, Th.; Nimal, J.C.; Chiron, M
2001-07-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or having a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs using the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)
Present status of transport code development based on Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki
1985-01-01
The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)
International Nuclear Information System (INIS)
Daures, J.; Gouriou, J.; Bordy, J.M.
2010-01-01
The authors report calculations performed using the MCNP and PENELOPE codes to determine the Hp(3)/Kair conversion coefficient, which allows the Hp(3) dose equivalent to be determined from the measured value of the air kerma. They report the definition of the phantom, a cylinder 20 cm in diameter and 20 cm high which is considered representative of a head. Calculations are performed for an energy range corresponding to interventional radiology or cardiology (20 keV-110 keV). Results obtained with both codes are compared
The electron transport problem sampling by Monte Carlo individual collision technique
International Nuclear Information System (INIS)
Androsenko, P.A.; Belousov, V.I.
2005-01-01
The problem of electron transport is of interest in many fields of modern science. To solve this problem, Monte Carlo sampling must be used. Electron transport is characterized by a very large number of individual interactions. To simulate electron transport, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual-collision technique. In comparison with the condensed-history technique, it offers clear advantages: for example, one does not need to specify the parameters required by the condensed-history technique, such as the upper limit for electron energy, the resolution, the number of sub-steps, etc. The condensed-history technique may also lose some very important electron tracks because of the limits imposed by the step parameters of particle movement and because of weaknesses in its algorithms, for example the energy-indexing algorithm. The individual-collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files were not processed but were taken directly from the electron information source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. In this report some sampling results are presented in comparison with analogues, for example the endovascular radiotherapy problem (P2) of QUADOS2002, compared with other techniques in common use. (authors)
Zheng, Na; Xu, Hai-Bo
2015-10-01
An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of
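Under the Gaussian small-angle treatment used in the empirical model, transmission through a thickness with an angular cut factorizes into nuclear attenuation times the fraction of the multiple-Coulomb-scattering angular distribution inside the cut. A sketch using the standard Highland parameterization for the scattering width; the factorization and constants below are textbook forms assumed for illustration, not necessarily the paper's exact expressions:

```python
import math

def highland_theta0(p_gev, beta, x_over_x0):
    """Highland multiple-scattering width (radians) for a particle of
    momentum p (GeV/c) and velocity beta traversing x/X0 radiation lengths."""
    return (0.0136 / (beta * p_gev)) * math.sqrt(x_over_x0) \
        * (1.0 + 0.038 * math.log(x_over_x0))

def transmission(x_over_lambda, theta_cut, theta0):
    """Nuclear attenuation exp(-x/lambda) times the fraction of a 2-D
    Gaussian angular distribution of width theta0 that lies inside the
    angular acceptance theta_cut of the imaging lens."""
    return math.exp(-x_over_lambda) * (1.0 - math.exp(-theta_cut**2 / (2.0 * theta0**2)))
```

Widening the angular cut raises the transmitted fraction toward the pure nuclear-attenuation limit, which is the qualitative behaviour the different angular cuts in the French Test Object calculations probe.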