International Nuclear Information System (INIS)
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate agreement between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis pass rates >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis pass rates for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.
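The 3%/3-mm gamma analysis quoted above can be sketched in one dimension. This is a minimal global-gamma toy (exhaustive search over evaluated points, dose tolerance relative to the reference maximum), not the film-analysis software used in the study:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma analysis: percent of reference points with gamma <= 1.

    dose_tol is relative to the reference maximum (3%), dist_tol in mm (3 mm).
    """
    d_max = dose_ref.max()
    gammas = []
    for x_r, d_r in zip(positions, dose_ref):
        # gamma at this point: minimum over all evaluated points of the
        # combined dose-difference / distance-to-agreement metric
        g2 = ((positions - x_r) / dist_tol) ** 2 + \
             ((dose_eval - d_r) / (dose_tol * d_max)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

For identical distributions every point has gamma = 0, so the pass rate is 100%; a 1-mm spatial shift still passes under a 3-mm distance-to-agreement criterion.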
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
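The "direct" log-derivative approach mentioned above can be illustrated with a toy population history; the growth constant, initial population, and Poisson counting noise are assumed values for illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 50.0                      # 1/s, supercritical growth constant (assumed)
t = np.linspace(0.0, 0.05, 20)         # snapshot times, s
expected = 1000.0 * np.exp(alpha_true * t)
counts = rng.poisson(expected)         # noisy neutron population snapshots

# direct estimate: logarithmic derivative of N(t), via least squares on log N
slope, intercept = np.polyfit(t, np.log(counts), 1)
print(f"alpha estimate: {slope:.1f} 1/s")
```

With strong growth the fit is stable, but near critical (alpha close to 0) the statistical noise swamps the slope, which is exactly the difficulty the abstract describes.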
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment.
International Nuclear Information System (INIS)
Weinhorst, Bastian; Fischer, Ulrich; Lu, Lei; Qiu, Yuefeng; Wilson, Paul
2015-01-01
Highlights: • Comparison of different approaches for the use of CAD geometry for Monte Carlo transport calculations. • Comparison with regard to user-friendliness and computation performance. • Three approaches, namely conversion with McCad, unstructured mesh feature of MCNP6 and DAGMC. • Installation most complex for DAGMC, model preparation worst for McCad, computation performance worst for MCNP6. • Installation easiest for McCad, model preparation best for MCNP6, computation speed fastest for McCad. - Abstract: Computer aided design (CAD) is an important industrial way to produce high quality designs. Therefore, CAD geometries are in general used for engineering and the design of complex facilities like the ITER tokamak. Although Monte Carlo codes like MCNP are well suited to handle the complex 3D geometry of ITER for transport calculations, they rely on their own geometry description and are in general not able to directly use the CAD geometry. In this paper, three different approaches for the use of CAD geometries with MCNP calculations are investigated and assessed with regard to calculation performance and user-friendliness. The first method is the conversion of the CAD geometry into MCNP geometry employing the conversion software McCad developed by KIT. The second approach utilizes the MCNP6 mesh geometry feature for the particle tracking and relies on the conversion of the CAD geometry into a mesh model. The third method employs DAGMC, developed by the University of Wisconsin-Madison, for the direct particle tracking on the CAD geometry using a patched version of MCNP. The obtained results show that each method has its advantages depending on the complexity and size of the model, the calculation problem considered, and the expertise of the user.
Energy Technology Data Exchange (ETDEWEB)
Weinhorst, Bastian, E-mail: bastian.weinhorst@kit.edu [Karlsruhe Institute of Technology (KIT), Institute for Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Fischer, Ulrich; Lu, Lei; Qiu, Yuefeng [Karlsruhe Institute of Technology (KIT), Institute for Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Wilson, Paul [University of Wisconsin-Madison, Computational Nuclear Engineering Research Group, Madison, WI (United States)
2015-10-15
Directory of Open Access Journals (Sweden)
Banafsheh Zeinali Rafsanjani
2011-06-01
Introduction: Among the different kinds of oral cavity cancers, tongue cancer occurs most frequently. Brachytherapy is the most common method used to treat tongue cancers, and long sources are used in its different techniques. The objective of this study is to assess the dose distribution around long sources, comparing different radioisotopes as brachytherapy sources, measuring the homogeneity of the dose delivered to the treatment volume, and comparing the mandible dose and the dose to the tongue in regions near the mandible with and without a shield. Material and Method: The Monte Carlo code MCNP4C was used for simulation. The accuracy of the simulation was verified by comparing the results with experimental data. Sources of Ir-192, Cs-137, Ra-226, Au-198, In-111 and Ba-131 were simulated, and the source positions were determined by the Paris system. Results: The percentages of mandible dose reduction with use of a 2 mm Pb shield for the sources mentioned above were 35.4%, 20.1%, 86.6%, 32.24%, 75.6%, and 36.8%, respectively. The tongue dose near the mandible did not change significantly with use of the shield. Dose homogeneity, from greatest to least, was obtained with these sources: Cs-137, Au-198, Ir-192, Ba-131, In-111 and Ra-226. Discussion and Conclusion: Ir-192 and Cs-137 were the best sources for tongue brachytherapy treatment, whereas In-111 and Ra-226 were not suitable choices. Au-198 and Ba-131 performed much the same as Ir-192.
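The shield effect reported above follows the familiar exponential attenuation law. A minimal narrow-beam sketch, with rough assumed linear attenuation coefficients of lead near each source's mean gamma energy; buildup and scatter, which the MCNP4C simulation models, are deliberately ignored here:

```python
import math

# Narrow-beam attenuation sketch: fractional dose reduction behind a Pb slab.
# mu values (1/cm) are rough illustrative assumptions for lead near each
# source's mean gamma energy, not data from the study.
mu_pb = {"Ir-192": 2.8, "Cs-137": 1.4}   # assumed coefficients
t_cm = 0.2                               # 2 mm shield thickness

for src, mu in mu_pb.items():
    reduction = 100.0 * (1.0 - math.exp(-mu * t_cm))
    print(f"{src}: ~{reduction:.0f}% narrow-beam dose reduction")
```

Even this crude estimate lands near the reported 35.4% (Ir-192) and 20.1% (Cs-137) reductions; the remaining difference comes from scatter buildup and geometry, which require full transport.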
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13,000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given for the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
Improvements for Monte Carlo burnup calculation
Energy Technology Data Exchange (ETDEWEB)
Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)
2015-07-01
Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains before routine engineering application. Based on the Monte Carlo burnup code MOI, this paper presents non-fuel burnup calculation methods and suggestions for critical search. For non-fuel burnup, a mixed burnup mode improves both the accuracy and the efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
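Independent of the ABN/ABA specifics (which the abstract does not detail), a critical search reduces to root-finding on k_eff versus rod position. A generic bisection sketch over a made-up linear rod-worth response; a real code would re-run the Monte Carlo transport at each trial position:

```python
# Toy critical search: find the rod insertion z where k_eff(z) = 1 by
# bisection. The response below is an invented monotonic stand-in for the
# expensive Monte Carlo evaluation.
def k_eff(z):
    return 1.05 - 0.1 * z          # z = fraction inserted, in [0, 1]

lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if k_eff(mid) > 1.0:           # still supercritical: insert further
        lo = mid
    else:                          # subcritical: withdraw
        hi = mid

z_crit = 0.5 * (lo + hi)
print(f"critical insertion fraction ~ {z_crit:.4f}")
```

Because each "function evaluation" is a full stochastic transport run in practice, methods like ABN aim to cut the number of such evaluations and to cope with statistical noise in k_eff.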
The Monte Carlo applied for calculation dose
International Nuclear Information System (INIS)
Peixoto, J.E.
1988-01-01
The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of the photon is traced by simulating successive interactions between the photon and the material composing the human-body phantom. The energy deposited per photon in each interaction with the phantom organ or tissue is also calculated. (C.G.C.)
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced.
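The collision-by-collision, batch-averaged estimate of k_eff described above can be caricatured in a few lines. The fate probabilities and mean fission multiplicity below are invented for illustration, not real cross-section data:

```python
import random

random.seed(1)

# Toy analog criticality sketch: each source neutron either causes fission
# (mean multiplicity NU) or is removed by capture/leakage. The per-batch
# ratio of daughters to starters estimates k_eff.
P_FISSION, NU = 0.34, 2.5

def generation_ratio(n_start):
    daughters = 0
    for _ in range(n_start):
        if random.random() < P_FISSION:
            # sample 2 or 3 fission neutrons so the mean is NU = 2.5
            daughters += 2 + (1 if random.random() < NU - 2 else 0)
    return daughters / n_start

k_batches = [generation_ratio(20000) for _ in range(20)]
k_eff = sum(k_batches) / len(k_batches)
print(f"k_eff ~ {k_eff:.3f}")          # analytic expectation: 0.34 * 2.5 = 0.85
```

Averaging over batches also yields the spread of the batch estimates, which is the basis of the error estimates the abstract cautions about.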
Reflections on early Monte Carlo calculations
International Nuclear Information System (INIS)
Spanier, J.
1992-01-01
Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances
Burnup calculations using Monte Carlo method
International Nuclear Information System (INIS)
Ghosh, Biplab; Degweker, S.B.
2009-01-01
In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method is also better suited to Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous-energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. In general, McBurn can perform burnup of any geometrical system that can be handled by the underlying Monte Carlo transport code.
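The transport/depletion coupling at the heart of such a code system alternates a flux solve with a nuclide-update step. A minimal single-nuclide sketch of that loop, with the "transport" result replaced by assumed one-group values (the real coupling uses the Monte Carlo flux spectrum and full decay/transmutation chains):

```python
import math

# Minimal burnup-coupling sketch: a pretend transport solve supplies a
# one-group flux and cross section; the depletion step then updates the
# nuclide density analytically. All numbers are illustrative assumptions.
sigma_a = 100e-24    # cm^2, one-group absorption cross section (assumed)
phi = 1e14           # n/cm^2/s, flux from the "transport" step (assumed)
dt = 30 * 86400.0    # 30-day burnup step, s
N = 1e21             # atoms/cm^3

for step in range(3):
    # depletion step: dN/dt = -sigma_a * phi * N  =>  exponential decay,
    # assuming the flux is held constant over the step
    N *= math.exp(-sigma_a * phi * dt)
    print(f"step {step + 1}: N = {N:.3e} atoms/cm^3")
```

In a full code the flux and spectrum change as the composition evolves, so the transport solve is repeated every step; handling many coupled nuclides replaces the single exponential with a matrix (Bateman) solution.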
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g., MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
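The second definition, production rate over (absorption + leakage), can be tallied directly over histories. A toy sketch with invented event probabilities, not physical data:

```python
import random

random.seed(2)

# Rate-ratio sketch: k_eff = production / (absorption + leakage), tallied
# over simulated histories. Each history ends in capture, leakage, or
# fission; probabilities and NU are illustrative assumptions.
P_CAPTURE, P_LEAK, P_FISSION, NU = 0.40, 0.26, 0.34, 2.5

production = removals = 0.0
for _ in range(200000):
    r = random.random()
    if r < P_CAPTURE:
        removals += 1                # absorbed (capture)
    elif r < P_CAPTURE + P_LEAK:
        removals += 1                # leaked
    else:
        removals += 1                # absorbed (fission)
        production += NU             # expected fission neutrons produced

k_eff = production / removals
print(f"k_eff ~ {k_eff:.3f}")        # analytic expectation: 0.34 * 2.5 = 0.85
```

Both definitions give the same toy answer here; they differ in real problems precisely through reactions like (n, 2n) that produce neutrons without fission, which the abstract notes must be excluded from the rate ratio.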
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
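The eigenvalue bias arises because k_eff is estimated as a ratio of correlated, noisy tallies, and the expectation of a ratio is not the ratio of expectations. A toy demonstration, with uniform random "tallies" chosen so the true ratio is exactly 1:

```python
import random

random.seed(3)

# E[X/Y] != E[X]/E[Y] for finite samples: ratios of small-batch means are
# biased high even though X and Y share the same expectation.
def batch_means(n):
    xs = [random.uniform(1.0, 2.0) for _ in range(n)]   # E[X] = 1.5
    ys = [random.uniform(1.0, 2.0) for _ in range(n)]   # E[Y] = 1.5
    return sum(xs) / n, sum(ys) / n

ratios = [x / y for x, y in (batch_means(5) for _ in range(200000))]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean of small-batch ratios: {mean_ratio:.4f}")  # > 1: biased high
```

The bias shrinks roughly as 1/(batch size), which is why eigenvalue codes favor large batches per cycle; the mechanism in real k_eff calculations is analogous but involves correlated fission-source tallies.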
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
Quantum Monte Carlo calculations of light nuclei
International Nuclear Information System (INIS)
Pandharipande, V. R.
1999-01-01
Quantum Monte Carlo methods provide an essentially exact way to calculate various properties of nuclear bound and low-energy continuum states from realistic models of nuclear interactions and currents. After a brief description of the methods and modern models of nuclear forces, we review the results obtained for all the bound, and some continuum, states of up to eight nucleons. Various other applications of the methods are reviewed along with future prospects.
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Gubbins, M.E.
1965-09-01
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
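The correlation technique avoids differencing two independently noisy quantities by running the perturbed and unperturbed calculations on the same random number stream. A toy sketch with an invented one-parameter "tally" (absorption probability p versus p + dp), not the multigroup scheme of the report:

```python
import random

random.seed(4)

def tally(p, stream):
    # fraction of histories "absorbed" when the absorption probability is p
    return sum(1 for u in stream if u < p) / len(stream)

stream = [random.random() for _ in range(100000)]
p, dp = 0.30, 0.01

# correlated: SAME stream for both runs -> the noise largely cancels
worth_corr = tally(p + dp, stream) - tally(p, stream)

# uncorrelated: independent streams -> the difference is dominated by noise
stream2 = [random.random() for _ in range(100000)]
worth_uncorr = tally(p + dp, stream2) - tally(p, stream)

print(f"correlated: {worth_corr:.5f}, uncorrelated: {worth_uncorr:.5f}")
```

With correlated sampling only histories whose random numbers fall in the perturbed band contribute to the difference, so a 1% effect is resolved with far fewer histories than by differencing two independent runs.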
Computation cluster for Monte Carlo calculations
International Nuclear Information System (INIS)
Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.
2010-01-01
Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as memory size, network speed, CPU speed, number of processors per computation, and number of processors per computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of VVER-440. (authors)
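Monte Carlo histories are independent, which is why such calculations parallelize almost perfectly across cluster nodes. The real work described above used MPI and MCNP; this toy splits a trivial Monte Carlo job (estimating pi) across local cores purely to illustrate the decomposition:

```python
import random
from multiprocessing import Pool

def mc_hits(args):
    # each worker runs its own seeded stream of independent histories
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

if __name__ == "__main__":
    n_workers, n_per = 4, 250000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(mc_hits, [(seed, n_per) for seed in range(n_workers)]))
    print(f"pi estimate: {4.0 * hits / (n_workers * n_per):.4f}")
```

Giving each worker a distinct seed keeps the partial streams statistically independent, the same requirement an MPI-parallel MCNP run imposes on its random number strides.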
Computation cluster for Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)
2010-07-01
Monte Carlo methods to calculate impact probabilities
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
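The Hill-sphere "super-sizing" idea can be caricatured in one dimension: inflate the target, tally hits at a usable rate, then rescale the probability. This toy (projectiles crossing a ring at random positions, with linear rescaling and no gravitational focusing) only illustrates the variance-reduction principle, not the authors' 3D method:

```python
import random

random.seed(5)

# Direct hits on a km-scale target along a ~1e6 km orbit are too rare to
# sample; inflating the target by F and rescaling recovers the probability
# with far better statistics. All numbers are illustrative assumptions.
L = 1.0e6        # km, circumference of the target's orbit (assumed)
r_true = 1.0     # km, true target radius
F = 1000.0       # super-sizing factor

n = 1000000
hits = sum(1 for _ in range(n) if random.uniform(0.0, L) < 2 * F * r_true)
p_supersized = hits / n
p_true = p_supersized / F     # 1D crossing: hit probability is linear in F
print(f"p_true ~ {p_true:.2e}")    # analytic value: 2 * r_true / L = 2e-6
```

In 3D the rescaling is quadratic in F (target cross-section area), and the method breaks down if F is so large that the inflated sphere no longer samples the local flux uniformly.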
Monte Carlo perturbation theory in neutron transport calculations
International Nuclear Information System (INIS)
Hall, M.C.G.
1980-01-01
The need to obtain sensitivities in complicated geometrical configurations has resulted in the development of Monte Carlo sensitivity estimation. A new method has been developed to calculate energy-dependent sensitivities of any number of responses in a single Monte Carlo calculation with a very small time penalty. This estimation typically increases the tracking time per source particle by about 30%. The method of estimation is explained. Sensitivities obtained are compared with those calculated by discrete ordinates methods. Further theoretical developments, such as second-order perturbation theory and application to k_eff calculations, are discussed. The application of the method to uncertainty analysis and to the analysis of benchmark experiments is illustrated. 5 figures
Monte Carlo calculations of channeling radiation
International Nuclear Information System (INIS)
Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.
1981-01-01
Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of γ^1.5, γ^1.7, and γ^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on γ do not yet exist
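Power-law dependences like the γ^1.5 centroid scaling quoted above are conventionally extracted by a straight-line fit in log-log coordinates. A sketch on synthetic data with an assumed exponent of 1.5 (the Lorentz factors and yields are invented, not the paper's data):

```python
import math

# log-log least-squares fit of y = A * gamma^b; exact synthetic data with
# b = 1.5 should return the exponent to machine precision.
gammas = [60.0, 80.0, 120.0, 160.0, 200.0]   # illustrative Lorentz factors
values = [g ** 1.5 for g in gammas]

lx = [math.log(g) for g in gammas]
ly = [math.log(v) for v in values]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
print(f"fitted exponent: {slope:.2f}")
```

On real Monte Carlo or experimental points the fitted exponent carries statistical scatter, which is why the abstract calls the power-law dependence only approximate except for the centroids.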
Pseudopotentials for quantum Monte Carlo calculations
International Nuclear Information System (INIS)
Burkatzki, Mark Thomas
2008-01-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give higher accuracy than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the 1st and 2nd rows; with n=D,T for the 3rd to 5th rows; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
The Monte Carlo calculation of gamma family
International Nuclear Information System (INIS)
Shibata, Makio
1980-01-01
A Monte Carlo method for calculating gamma families was investigated, and the effects of varying parameter values on observed quantities were studied. The terms taken for the standard calculation are the scaling law for the model, a simple proton spectrum for the primary cosmic rays, a constant interaction cross section, zero probability of neutral pion production, and the bending of the curve of the primary energy spectrum. This is called the S model. Calculations were made by changing one of the above-mentioned parameters at a time. The chamber size, the mixing of gammas and hadrons, and the family size were fitted to the practical ECC data. When the model was changed from the scaling law to the CKP model, the energy spectrum of the family was described better by the CKP model than by the scaling law, while the scaling law was better for the symmetry around the family center. The hypothesis that the primary cosmic rays consist mostly of heavy particles was ruled out. An increase of the interaction cross section was found necessary in view of the frequency of the families. (Kato, T.)
Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio
2006-01-01
As a result of improvements in computer technology, the continuous-energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we proposed an equation that can predict the variance of nuclide number densities after burn-up calculations, and we verified this equation using a large number of Monte Carlo burn-up calculations that differ only in their initial random numbers. We also verified the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors together. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t
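The idea of predicting propagated variance can be sketched for a single nuclide and a single burn-up step, with the Monte Carlo noise reduced to a Gaussian error on the flux. This toy model and all its parameters are assumptions for illustration, not the equation from the paper:

```python
import random
import math

def deplete(n0, sigma, flux, dt):
    """One burn-up step: exponential depletion of a nuclide density."""
    return n0 * math.exp(-sigma * flux * dt)

def replicated_variance(n0, sigma, flux_mean, flux_sd, dt, n_rep, seed=1):
    """Variance of the end-of-step density when the Monte Carlo flux
    carries statistical noise, estimated by brute-force replication
    (the analogue of repeating the burn-up run with different seeds)."""
    rng = random.Random(seed)
    samples = [deplete(n0, sigma, rng.gauss(flux_mean, flux_sd), dt)
               for _ in range(n_rep)]
    mean = sum(samples) / n_rep
    return sum((s - mean) ** 2 for s in samples) / (n_rep - 1)

# First-order prediction: Var[N'] ~ (dN'/dflux)^2 * Var[flux].
n0, sigma, flux, sd, dt = 1.0, 2.0, 1.0, 0.01, 0.5
predicted = (n0 * sigma * dt * math.exp(-sigma * flux * dt)) ** 2 * sd ** 2
empirical = replicated_variance(n0, sigma, flux, sd, dt, n_rep=20_000)
```

For small flux noise the depletion is nearly linear in the flux, so the first-order propagated variance and the replicated one agree to within sampling error; this is the kind of check the paper performs at full scale.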
Monte Carlo neutral density calculations for ELMO Bumpy Torus
International Nuclear Information System (INIS)
Davis, W.A.; Colchin, R.J.
1986-11-01
The steady-state nature of the ELMO Bumpy Torus (EBT) plasma implies that the neutral density at any point inside the plasma volume will determine the local particle confinement time. This paper describes a Monte Carlo calculation of three-dimensional atomic and molecular neutral density profiles in EBT. The calculation has been done using various models for neutral source points, for launching schemes, for plasma profiles, and for plasma densities and temperatures. Calculated results are compared with experimental observations - principally spectroscopic measurements - both for guidance in normalization and for overall consistency checks. Implications of the predicted neutral profiles for the fast-ion-decay measurement of neutral densities are also addressed
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Optimization of linear Monte Carlo calculations
International Nuclear Information System (INIS)
Troubetzkoy, E.S.
1991-01-01
The variance of the calculation is minimized on the basis of parameters generated by a learning technique. The optimum is obtained if sampling is biased proportionally to the expected root-mean-square score. In this paper, the method is compared with existing methods, which bias proportionally to the expected score
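The variance advantage of biasing on the expected root-mean-square score over biasing on the expected score can be checked directly in a two-bin toy problem. The one-sample estimator and the toy numbers below are illustrative assumptions, not from the paper:

```python
import math

def estimator_variance(second_moments, means, q):
    """Variance of the one-sample estimator that picks bin i with
    probability q[i] and scores X_i / q[i]:
    Var = sum_i E[X_i^2]/q[i] - (sum_i E[X_i])^2."""
    total_mean = sum(means)
    return sum(m2 / qi for m2, qi in zip(second_moments, q)) - total_mean ** 2

def normalize(weights):
    s = sum(weights)
    return [w / s for w in weights]

# Two bins with known (toy) score statistics.
means = [1.0, 0.1]            # E[X_i] per bin
second_moments = [2.0, 0.5]   # E[X_i^2] per bin

q_mean = normalize(means)                                    # bias ~ expected score
q_rms = normalize([math.sqrt(m2) for m2 in second_moments])  # bias ~ RMS score

var_mean = estimator_variance(second_moments, means, q_mean)
var_rms = estimator_variance(second_moments, means, q_rms)
```

By the Cauchy-Schwarz inequality, choosing q_i proportional to the RMS score minimizes the sum of E[X_i^2]/q_i, so RMS biasing can never do worse than expected-score biasing in this setting; here it roughly halves the variance (about 3.29 versus 6.49).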
Performance of quantum Monte Carlo for calculating molecular bond lengths
Energy Technology Data Exchange (ETDEWEB)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au [CSIRO Virtual Nanoscience Laboratory, 343 Royal Parade, Parkville, Victoria 3052 (Australia)
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
A Monte Carlo dose calculation tool for radiotherapy treatment planning
International Nuclear Information System (INIS)
Ma, C.-M.; Li, J.S.; Pawlicki, T.; Jiang, S.B.; Deng, J.; Lee, M.C.; Koumrian, T.; Luxton, M.; Brain, S.
2002-01-01
A Monte Carlo user code, MCDOSE, has been developed for radiotherapy treatment planning (RTP) dose calculations. MCDOSE is designed as a dose calculation module suitable for adaptation to host RTP systems. MCDOSE can be used for both conventional photon/electron beam calculation and intensity modulated radiotherapy (IMRT) treatment planning. MCDOSE uses a multiple-source model to reconstruct the treatment beam phase space. Based on Monte Carlo simulated or measured beam data acquired during commissioning, source-model parameters are adjusted through an automated procedure. Beam modifiers such as jaws, physical and dynamic wedges, compensators, blocks, electron cut-outs and bolus are simulated by MCDOSE together with a 3D rectilinear patient geometry model built from CT data. Dose distributions calculated using MCDOSE agreed well with those calculated by the EGS4/DOSXYZ code using different beam set-ups and beam modifiers. Heterogeneity correction factors for layered-lung or layered-bone phantoms as calculated by both codes were consistent with measured data to within 1%. The effect of energy cut-offs for particle transport was investigated. Variance reduction techniques were implemented in MCDOSE to achieve a speedup factor of 10-30 compared to DOSXYZ. (author)
Calculation of toroidal fusion reactor blankets by Monte Carlo
International Nuclear Information System (INIS)
Macdonald, J.L.; Cashwell, E.D.; Everett, C.J.
1977-01-01
A brief description of the calculational method is given. The code calculates energy deposition in toroidal geometry; it is a continuous-energy Monte Carlo code, treating the reaction cross sections as well as the angular scattering distributions in great detail
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k-eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
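Wielandt's method replaces power iteration on the fission operator by iteration on a shifted inverse, which shrinks the effective dominance ratio. A deterministic sketch on a toy 2x2 "fission operator" (the matrix and shift value are illustrative assumptions, not from MCNP5):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve2(A, b):
    """Direct solve of a 2x2 linear system (enough for this sketch)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def rayleigh(A, v):
    """Rayleigh quotient: eigenvalue estimate for the current iterate."""
    Av = mat_vec(A, v)
    return sum(x * y for x, y in zip(v, Av)) / sum(x * x for x in v)

def power_iteration(A, iters, shift=None):
    """Plain power iteration, or, when a shift k_w is supplied, Wielandt
    (shifted inverse) iteration: v <- (A - k_w*I)^-1 v, which damps the
    higher modes much faster than the raw dominance ratio allows."""
    v = [1.0, 1.0]
    for _ in range(iters):
        if shift is None:
            w = mat_vec(A, v)
        else:
            B = [[A[0][0] - shift, A[0][1]],
                 [A[1][0], A[1][1] - shift]]
            w = solve2(B, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return rayleigh(A, v)

# Toy symmetric operator with eigenvalues 1.0 and 0.95: dominance ratio
# 0.95, i.e. slow unaccelerated source convergence.
A = [[0.99, 0.02], [0.02, 0.96]]
k_plain = power_iteration(A, iters=10)
k_wielandt = power_iteration(A, iters=10, shift=1.05)
```

With the shift at 1.05 the effective dominance ratio drops from 0.95 to |1.0-1.05| / |0.95-1.05| = 0.5, so ten Wielandt iterations reach the fundamental eigenvalue far more closely than ten plain ones.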
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights:
• Time step length largely affects efficiency of MC burnup calculations.
• Efficiency of MC burnup calculations improves with decreasing time step length.
• Results were obtained from SIE-based Monte Carlo burnup calculations.
Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
Probability Density Estimation Using Neural Networks in Monte Carlo Calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo
2008-01-01
Monte Carlo neutronics analysis requires the capability to estimate tally distributions, such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as the estimation of a probability density function from an observation set. We apply a neural network based density estimation method to an observation and sampling weight set produced by Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)
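The comparison can be sketched on a weighted observation set, using a Gaussian kernel estimator as a simple smooth-density stand-in for the paper's neural network (the samples, weights, and bandwidth below are illustrative assumptions):

```python
import random
import math

def histogram_density(samples, weights, bins, lo, hi):
    """Weighted histogram estimate of a tally distribution."""
    width = (hi - lo) / bins
    counts = [0.0] * bins
    total = sum(weights)
    for x, w in zip(samples, weights):
        i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp to range
        counts[i] += w
    return [c / (total * width) for c in counts]

def kernel_density(samples, weights, points, bandwidth):
    """Smooth (Gaussian kernel) estimate at the given points; it stands
    in here for the neural-network estimator of the paper."""
    total = sum(weights)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * bandwidth * total)
    return [norm * sum(w * math.exp(-0.5 * ((p - x) / bandwidth) ** 2)
                       for x, w in zip(samples, weights))
            for p in points]

# Toy observation set: unit-weight samples from a standard normal.
rng = random.Random(2)
xs = [rng.gauss(0.0, 1.0) for _ in range(5000)]
ws = [1.0] * len(xs)
hist = histogram_density(xs, ws, bins=40, lo=-4.0, hi=4.0)
smooth = kernel_density(xs, ws, points=[0.0], bandwidth=0.2)
```

The histogram integrates exactly to one by construction, while the smooth estimator gives a continuous value at any point, which is the property a trained density network would share.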
Statistics of Monte Carlo methods used in radiation transport calculation
International Nuclear Information System (INIS)
Datta, D.
2009-01-01
Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification in the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) presents the statistical uncertainty of Monte Carlo estimates, and Section (4) briefly describes the importance of variance reduction techniques in sampling particles such as photons or neutrons in the process of radiation transport
Linear filtering applied to Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Morrison, G.W.; Pike, D.H.; Petrie, L.M.
1975-01-01
A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been achieved by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material, and the filter estimate was compared with the Monte Carlo realization. The Kalman filter converged in five iterations to 0.9977; after 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the calculational effort for multiplying systems. Other examples and results are discussed
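For a nearly constant k-eff observed through noisy generation estimates, the Kalman filter reduces to a simple recursive update. A minimal scalar sketch follows; the true value, noise level, and number of generations are illustrative assumptions, not the KENO case from the paper:

```python
import random

def kalman_keff(observations, obs_var, init_est=1.0, init_var=1.0):
    """Scalar Kalman filter for a constant state (k-eff) observed
    through noisy Monte Carlo generation estimates."""
    est, var = init_est, init_var
    for z in observations:
        gain = var / (var + obs_var)   # weight given to the new observation
        est = est + gain * (z - est)   # update estimate
        var = (1.0 - gain) * var       # posterior variance shrinks
    return est, var

# Noisy per-generation k estimates around an assumed true value.
rng = random.Random(7)
true_k = 0.998
obs = [rng.gauss(true_k, 0.005) for _ in range(100)]
est, var = kalman_keff(obs, obs_var=0.005 ** 2)
```

With a constant state this filter is equivalent to a precision-weighted running mean, and the posterior variance falls roughly as obs_var divided by the number of generations, which is how the filter converges in far fewer iterations than a raw average stabilizes.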
Monte Carlo Calculation of the Radiation Field at Aircraft Altitudes
Energy Technology Data Exchange (ETDEWEB)
Roesler, Stefan
2001-08-24
Energy spectra of secondary cosmic rays are calculated for aircraft altitudes and a discrete set of solar modulation parameters and rigidity cutoff values covering all possible conditions. The calculations are based on the Monte Carlo code FLUKA and on the most recent information on the interstellar cosmic ray flux, including a detailed model of solar modulation. Results are compared to a large variety of experimental data obtained on the ground and aboard aircraft and balloons, such as neutron, proton, and muon spectra and yields of charged particles. Furthermore, particle fluence is converted into ambient dose equivalent and effective dose, and the dependence of these quantities on height above sea level, solar modulation, and geographic location is studied. Finally, the calculated dose equivalent is compared to results of comprehensive measurements performed aboard aircraft.
Hypothesis testing of scientific Monte Carlo calculations
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
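The core idea, testing the Monte Carlo mean against a reference using the estimator's own error bar, can be sketched as a two-sided z-test. The pi-estimation example below is an illustrative stand-in; the paper's tests are more general:

```python
import random
import math

def z_test(samples, reference, critical=2.576):
    """Two-sided z-test at the 1% level: does the Monte Carlo mean agree
    with a reference value, given its own estimated statistical error?"""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    z = (mean - reference) / math.sqrt(var / n)
    return abs(z) < critical, z

# Monte Carlo estimate of pi by dart-throwing, tested against math.pi.
rng = random.Random(5)
samples = [4.0 * (rng.random() ** 2 + rng.random() ** 2 < 1.0)
           for _ in range(100_000)]
ok, z = z_test(samples, math.pi)            # correct code should (almost always) pass
bad, _ = z_test(samples, math.pi + 0.1)     # a biased implementation fails loudly
```

Because the test compares the deviation to the estimator's own standard error, it automatically tightens as the simulation grows, which is what makes it suitable for automatic regression testing of stochastic codes.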
Active neutron multiplicity analysis and Monte Carlo calculations
International Nuclear Information System (INIS)
Krick, M.S.; Ensslin, N.; Langner, D.G.; Miller, M.C.; Siebelist, R.; Stewart, J.E.; Ceo, R.N.; May, P.K.; Collins, L.L. Jr
1994-01-01
Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
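The fission matrix idea in the first method can be sketched deterministically: power-iterate a small region-to-region fission matrix to obtain the fundamental source shape, rather than waiting for the Monte Carlo source iteration itself to converge. The 3-region matrix below is an assumed toy, not data from the thesis:

```python
def fission_source_from_matrix(F, iters=200):
    """Power-iterate a region-to-region fission matrix (F[i][j] = expected
    fission neutrons born in region i per source neutron in region j).
    Its fundamental eigenvector is the converged fission source shape."""
    n = len(F)
    s = [1.0 / n] * n
    k = 0.0
    for _ in range(iters):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)                 # eigenvalue estimate (source normalized to 1)
        s = [x / k for x in s_new]
    return k, s

# Toy symmetric 3-region fission matrix.
F = [[0.60, 0.20, 0.05],
     [0.20, 0.55, 0.20],
     [0.05, 0.20, 0.60]]
k, s = fission_source_from_matrix(F)
```

The matrix is cheap to tally during the random walk, and solving its tiny eigenproblem deterministically filters out the statistical noise that slows the Monte Carlo power iteration for high-dominance-ratio systems.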
Monte Carlo calculations of electron diffusion in materials
International Nuclear Information System (INIS)
Schroeder, U.G.
1976-01-01
By means of simulated experiments, various transport problems for 10 MeV electrons are investigated. For this purpose, a special Monte Carlo programme is developed, and with this programme calculations are made for several material arrangements. (orig./LN)
Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
Ueki, Kohtaro
1985-01-01
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique was still too large for detectors located outside the radiation streaming. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)
Cluster Monte Carlo method for nuclear criticality safety calculation
International Nuclear Information System (INIS)
Pei Lucheng
1984-01-01
One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to calculating nuclear criticality safety. In this class of problems the source iteration cost can be reduced as much as possible, or no source iteration is needed at all; such problems all belong to the fair source game problems, among which the optimal source game requires no source iteration. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further
A functional method for estimating DPA tallies in Monte Carlo calculations of Light Water Reactors
International Nuclear Information System (INIS)
Read, Edward A.; Oliveira, Cassiano R.E. de
2011-01-01
There has been a growing need in recent years for the development of methodology to calculate radiation damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss the development and implementation of a dpa method using the Monte Carlo method for transport calculations. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and fuel depletion, are assessed for radiation damage calculations, and its capability is demonstrated and compared to that of the Monte Carlo code MCNP for a typical LWR configuration. (author)
Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations
International Nuclear Information System (INIS)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.
2015-01-01
Burn-up calculation of VVER cores with a Monte Carlo code is a complex process with large computational costs, which complicates the use of Monte Carlo codes for design and operational calculations. We propose using previously prepared isotopic compositions for Monte Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and reference isotopic compositions computed by engineering codes (TVS-M, PERMAK-A). The multiplication factors and power distributions of a fuel assembly and of an infinite-height VVER are calculated in this work with the Monte Carlo code MCU using the previously prepared isotopic compositions. The MCU results were compared with the data obtained by the engineering codes.
Monte Carlo calculations for intermediate-energy standard neutron field
International Nuclear Information System (INIS)
Joneja, O.P.; Subbukutty, K.; Iyengar, S.B.D.; Navalkar, M.P.
The Intermediate-Energy Standard Neutron Field (ISNF), which produces a well characterised spectrum in the energy range of interest for fast reactors including breeders, has been set up at NBS using thin enriched 235U fission sources. A proposal has been made for setting up a similar facility at BARC using, to start with, easily available natural uranium instead of enriched uranium sources. In order to simulate the neutronics of such a facility, a Monte Carlo method of calculation has been adopted and developed. The results of these calculations have been compared with those of NBS, and it is found that there may be a maximum difference of 10% in spectrum characteristics between the two cases of using thick and thin fission sources. (K.B.)
Green's function Monte Carlo calculations of ⁴He
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.A.
1988-01-01
Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.
Reference Monte Carlo calculations of Maria reactor core
International Nuclear Information System (INIS)
Andrzejewski, K.; Kulikowska, T.
2002-01-01
The reference Monte Carlo calculations of the MARIA reactor core have been carried out to evaluate the accuracy of the calculations at each stage of its neutron-physics analysis using deterministic codes. The elementary cell has been calculated with two main goals: evaluation of the effects of simplifications introduced in deterministic lattice spectrum calculations by the WIMS code, and evaluation of library data in recently developed WIMS libraries. In particular, the beryllium data of those libraries needed evaluation. The whole-core calculations covered mainly the first MARIA critical experiment and the first critical core after the 8-year break in operation. Both cores contained only fresh fuel elements, but only in the first critical core were the beryllium blocks not poisoned by Li-6 and He-3; thus the MCNP k-eff results could be compared with the experiment. The MCNP calculations for the cores with poisoned beryllium suffered from uncertainty in the poison concentration, but a comparison of power distributions shows that realistic poison levels were used for the operating MARIA reactor configurations. (author)
Low field Monte-Carlo calculations in heterojunctions and quantum wells
Hall, van P.J.; Rooij, de R.; Wolter, J.H.
1990-01-01
We present results of low-field Monte-Carlo calculations and compare them with experimental results. The negative absolute mobility of minority electrons in p-type quantum wells, as found in recent experiments, is described quite well.
Analysis of error in Monte Carlo transport calculations
International Nuclear Information System (INIS)
Booth, T.E.
1979-01-01
The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required to calculate each history. 1 figure, 1 table
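The nonanalog techniques mentioned above (implicit capture with Russian roulette) can be illustrated in a layered-slab penetration toy problem; both estimators below are unbiased for the same deep-penetration probability, and every parameter is an illustrative assumption:

```python
import random

EXACT = 0.6 ** 8   # analytic deep-penetration probability for this toy (~0.017)

def analog(p, layers, n, rng):
    """Analog game: the particle survives each layer with probability p
    or is absorbed outright; score 1 on full penetration."""
    hits = sum(all(rng.random() < p for _ in range(layers)) for _ in range(n))
    return hits / n

def nonanalog(p, layers, n, rng, w_cut=0.25):
    """Nonanalog game: implicit capture (carry survival as a weight)
    plus Russian roulette on low-weight histories. Roulette kills half
    the low-weight particles and doubles the survivors' weight, which
    keeps the estimator unbiased while limiting wasted tracking."""
    total = 0.0
    for _ in range(n):
        w = 1.0
        for _ in range(layers):
            w *= p                      # absorption handled by weight, not sampling
            if w < w_cut:               # Russian roulette
                if rng.random() < 0.5:
                    w *= 2.0
                else:
                    w = 0.0
                    break
        total += w
    return total / n

est_analog = analog(0.6, 8, 50_000, random.Random(3))
est_nonanalog = nonanalog(0.6, 8, 50_000, random.Random(4))
```

Both estimates agree with the analytic answer; the error-prediction equations of the paper address exactly this kind of trade-off, i.e. how the variance and cost per history change when such nonanalog rules are switched on.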
Energy Technology Data Exchange (ETDEWEB)
Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)
2015-06-15
Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in deriving a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitting radionuclides taken into consideration in this simulation include Y-90, Lu-177, and P-32, which are commonly used in nuclear medicine. The comparison has been performed for the dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung, and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; for the other soft tissues, the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.
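The tissue-versus-water comparison above reduces to a mean percent difference over the radial bins of the kernels. A minimal sketch, with made-up placeholder kernel values rather than the GATE results of the study:

```python
# Hypothetical illustration of the tissue-vs-water DPK comparison: given
# kernels tabulated at the same radial bins, report the mean percent
# difference relative to water. All kernel values below are invented.

def mean_percent_difference(tissue_dpk, water_dpk):
    """Mean of |tissue - water| / water * 100 over the radial bins."""
    assert len(tissue_dpk) == len(water_dpk)
    diffs = [abs(t - w) / w * 100.0 for t, w in zip(tissue_dpk, water_dpk)]
    return sum(diffs) / len(diffs)

# Placeholder kernels (dose per decay vs. radius, arbitrary units).
water  = [0.80, 0.40, 0.15, 0.05]
kidney = [0.81, 0.41, 0.15, 0.05]   # soft tissue: close to water
bone   = [0.95, 0.30, 0.10, 0.03]   # large density difference

print(mean_percent_difference(kidney, water))
print(mean_percent_difference(bone, water))
```

With these invented numbers the soft tissue stays within about 1% of water while bone differs by tens of percent, mirroring the qualitative conclusion of the abstract.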
Neutron flux calculation by means of Monte Carlo methods
International Nuclear Information System (INIS)
Barz, H.U.; Eichhorn, M.
1988-01-01
In this report a survey of modern neutron flux calculation procedures based on Monte Carlo methods is given. Owing to progress in the development of variance reduction techniques and improvements in computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. Various possibilities of non-analog games and estimation procedures are presented in more detail, and problems in optimizing the variance reduction techniques are discussed. In the last part, some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)
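The non-analog games surveyed above can be illustrated with a toy slab problem (all cross sections and the forward-only scattering model are invented for this sketch): with forward-only scattering, the exact transmission is exp(-Σa·L), so both the analog game and implicit capture with Russian roulette should reproduce it.

```python
import math, random

# Toy 1-D demonstration of a non-analog game (implicit capture plus
# Russian roulette) versus the analog game. Invented setup: a slab of
# thickness L, total cross section SIG_T, absorption SIG_A, and
# forward-only scattering, so the exact transmission is exp(-SIG_A * L).

SIG_T, SIG_A, L = 2.0, 0.5, 3.0
SIG_S = SIG_T - SIG_A
EXACT = math.exp(-SIG_A * L)

def analog(n, rng):
    hits = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / SIG_T   # flight to next collision
            if x >= L:                             # escaped the far face
                hits += 1
                break
            if rng.random() < SIG_A / SIG_T:       # analog absorption
                break
    return hits / n

def implicit_capture(n, rng, w_cut=0.1):
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(rng.random()) / SIG_T
            if x >= L:
                total += w
                break
            w *= SIG_S / SIG_T        # absorption removed, weight reduced
            if w < w_cut:             # Russian roulette on low weights
                if rng.random() < 0.5:
                    break
                w *= 2.0
    return total / n

rng = random.Random(42)
print(EXACT, analog(20000, rng), implicit_capture(20000, rng))
```

Both estimators are unbiased; the non-analog game avoids losing histories to absorption deep in the slab.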
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
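The relaxation idea can be sketched as follows (a minimal illustration, not the authors' exact prescription): each iteration runs more histories, and the new Monte Carlo result is blended in with a relaxation factor proportional to its share of the accumulated histories, which therefore shrinks from iteration to iteration. The "Monte Carlo solve" is replaced by a noisy evaluation of a fixed reference distribution.

```python
import random

# Stochastic-iteration sketch: relaxation factor alpha_i = n_i / sum(n_j)
# decreases while the number of histories n_i grows each iteration.
# REF is an assumed converged axial power shape, used only to fake tallies.

REF = [0.8, 1.1, 1.3, 1.1, 0.7]

def noisy_mc_solve(n_hist, rng):
    """Stand-in for a Monte Carlo power tally: noise shrinks as 1/sqrt(n)."""
    return [p + rng.gauss(0.0, 1.0) / n_hist ** 0.5 for p in REF]

def coupled_iteration(n_iter, n0, rng):
    power = noisy_mc_solve(n0, rng)
    total = n0
    for i in range(2, n_iter + 1):
        n_i = n0 * i                   # growing number of histories
        total += n_i
        alpha = n_i / total            # shrinking relaxation factor
        new = noisy_mc_solve(n_i, rng)
        power = [(1 - alpha) * p + alpha * q for p, q in zip(power, new)]
    return power

result = coupled_iteration(10, 1000, random.Random(0))
print([round(p, 2) for p in result])
```

With this choice of alpha the accumulated distribution is the history-weighted average of all iterations, so the noise decays with the total number of histories run.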
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma-ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of the JMTR, simulation of a pulsed neutron experiment, core analyses of the HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma-ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Quantum Monte Carlo diagonalization method as a variational calculation
International Nuclear Information System (INIS)
Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.
1997-01-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary-field Monte Carlo technique and a diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and greatly widens the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)
Monte Carlo calculation of standard graphite block
International Nuclear Information System (INIS)
Ljubenov, V.
2000-01-01
This paper presents results of the calculation of the neutron flux space and energy distribution in the standard graphite block (SGB) obtained with the MCNP code. The VMCCS nuclear data library, based on the ENDF/B-VI release 4 evaluation file, is used. The MCNP model of the SGB considers the detailed material, geometric and spectral properties of the neutron source, source carrier, graphite moderator medium, aluminium foil holders and the proximate surroundings of the SGB. The geometric model is organised to provide the simplest homogeneous volume cells in order to obtain the maximum acceleration of neutron history tracking. (author)
OPAL reactor calculations using the Monte Carlo code serpent
Energy Technology Data Exchange (ETDEWEB)
Ferraro, Diego; Villarino, Eduardo [Nuclear Engineering Dept., INVAP S.E., Rio Negro (Argentina)
2012-03-15
In the present work the Serpent v1.1.14 Monte Carlo cell code, developed by VTT, is used to model the MTR fuel assemblies (FA) and control rods (CR) of the OPAL (Open Pool Australian Light-water) reactor in order to obtain few-group constants with burnup dependence for use in the already developed reactor core models. These core calculations are performed with the CITVAP 3-D diffusion code, a well-known reactor code based on CITATION. The results are then compared with those obtained by the deterministic calculation line used by INVAP, which uses the collision-probability cell code Condor to obtain few-group constants. Finally, the results are compared with experimental data obtained from reactor records for several operation cycles. Several evaluations are thus performed, including code-to-code comparisons at the cell and core levels and calculation-experiment comparisons at the core level, in order to evaluate the actual capabilities of the Serpent code. (author)
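The few-group constants mentioned above are produced by flux-weighted condensation of fine-group cross sections. A generic sketch (the numbers are placeholders, not OPAL data):

```python
# Flux-weighted group collapse: each few-group cross section is the
# fine-group cross sections averaged with the cell-averaged scalar flux.

def collapse(sigma, flux, groups):
    """groups is a list of (start, end) fine-group index ranges,
    end exclusive."""
    out = []
    for a, b in groups:
        phi = sum(flux[a:b])
        out.append(sum(s * f for s, f in zip(sigma[a:b], flux[a:b])) / phi)
    return out

sigma_fine = [1.0, 1.2, 2.0, 8.0]   # fine-group cross sections (made up)
flux_fine  = [3.0, 1.0, 2.0, 2.0]   # cell-averaged flux (made up)
few = collapse(sigma_fine, flux_fine, [(0, 2), (2, 4)])
print(few)
```

The same weighting preserves reaction rates: the few-group cross section times the few-group flux equals the summed fine-group reaction rates.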
An order αs Monte Carlo calculation of hadronic double photon production
International Nuclear Information System (INIS)
Owens, J.F.
1992-01-01
The results of an order αs calculation of hadronic double photon production are discussed and compared with data from both colliding-beam and fixed-target experiments. The calculation utilizes a combination of analytic and Monte Carlo integration methods, which makes it easy to calculate a variety of observables and impose experimental cuts. (author) 8 refs.; 2 figs
Monte Carlo calculation of Dancoff factors in irregular geometries
International Nuclear Information System (INIS)
Feher, S.; Hoogenboom, J.E.; Leege, P.F.A. de; Valko, J.
1994-01-01
A Monte Carlo program is described that calculates Dancoff factors in arbitrary arrangements of cylindrical or spherical fuel elements. The fuel elements can have different diameters and material compositions, and they are allowed to be black or partially transparent. Calculation of the Dancoff factor is based on its collision-probability definition. The Monte Carlo approach is recommended because it is equally applicable in simple and in complicated geometries. It is shown that some of the commonly used algorithms are inaccurate even in infinite regular lattices. An example application is the Canada deuterium uranium (CANDU) 37-pin fuel bundle, which requires different Dancoff factors for the symmetrically different fuel pin positions.
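The collision-probability definition lends itself to a very small geometric sketch: estimate the fraction of neutrons born on the surface of one rod that reach a neighbouring rod without an intervening collision. This 2-D toy uses two black circular rods in a void moderator with arbitrary sizes; a cosine-weighted surface source would be more standard, but uniform outward emission keeps the sketch short.

```python
import math, random

def hits_circle(px, py, dx, dy, cx, cy, r):
    """Does the ray (p + t*d, t > 0), d a unit vector, hit circle (c, r)?"""
    ox, oy = px - cx, py - cy
    b = ox * dx + oy * dy
    c = ox * ox + oy * oy - r * r
    disc = b * b - c
    if disc < 0.0:
        return False
    return -b - math.sqrt(disc) > 1e-12   # nearest intersection ahead of p

def dancoff_two_rods(pitch, radius, n, rng):
    hits = 0
    for _ in range(n):
        # emission point on rod 1; isotropic direction in outward half-plane
        a = rng.uniform(0.0, 2.0 * math.pi)
        px, py = radius * math.cos(a), radius * math.sin(a)
        while True:
            th = rng.uniform(0.0, 2.0 * math.pi)
            dx, dy = math.cos(th), math.sin(th)
            if dx * math.cos(a) + dy * math.sin(a) > 0.0:
                break
        if hits_circle(px, py, dx, dy, pitch, 0.0, radius):
            hits += 1
    return hits / n

rng = random.Random(7)
close = dancoff_two_rods(pitch=2.5, radius=1.0, n=20000, rng=rng)
far = dancoff_two_rods(pitch=5.0, radius=1.0, n=20000, rng=rng)
print(close, far)
```

As expected, the shadowing (and thus the Dancoff factor) drops as the pitch grows; a transparent moderator would simply attenuate each flight by its optical path length.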
Quantum Monte Carlo calculations with chiral effective field theory interactions
Energy Technology Data Exchange (ETDEWEB)
Tews, Ingo
2015-10-12
The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
International Nuclear Information System (INIS)
Larry Engelhardt
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, Larry [Iowa State Univ., Ames, IA (United States)
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
Monte Carlo neutron and gamma-ray calculations
International Nuclear Information System (INIS)
Mendelsohn, Edgar
1987-01-01
Kerma in tissue and the activation produced in sulfur and cobalt due to prompt neutrons from the Hiroshima and Nagasaki bombs were calculated out to 2000 m from the hypocenter in 100 m increments. As neutron sources, weapon output spectra calculated by investigators from the Los Alamos National Laboratory (LANL) were used. Other parameters, such as burst height and air and ground densities and compositions, were obtained from recent sources. The LLNL Monte Carlo transport code TART was used for these calculations. TART accesses the well-established 1985 ENDL cross-section library, which has built-in reaction cross sections. The zoning for this problem was a full two-dimensional geometry with a ceiling height of 1100 m and a ground thickness of 30 cm. For the Hiroshima calculations (including sulfur activation) an untilted source was used. However, a special sulfur activation problem using a source tilted 15 deg was run, for which the ratios to the untilted case are reported. The TART code uses a technique for solving the transport equation that is different from that of the ORNL DOT code; it also draws on a specially evaluated cross-section library (ENDL) and uses a larger group structure than DOT. One of the purposes of this work was to instill confidence in the DOT calculations that will be used directly in the dose reassessment of A-bomb survivors. The TART results were compared with values calculated with the DOT code by investigators from ORNL and found to be in good agreement for the most part. However, the sulfur activation comparison is disappointing. Because the sulfur activation is caused by higher-energy neutrons (which should have experienced fewer collisions than those causing cobalt activation, for example), better agreement than what is reported here would be expected
Continuum variational and diffusion quantum Monte Carlo calculations
International Nuclear Information System (INIS)
Needs, R J; Towler, M D; Drummond, N D; Lopez Rios, P
2010-01-01
This topical review describes the methodology of continuum variational and diffusion quantum Monte Carlo calculations. These stochastic methods are based on many-body wavefunctions and are capable of achieving very high accuracy. The algorithms are intrinsically parallel and well suited to implementation on petascale computers, and the computational cost scales as a polynomial in the number of particles. A guide to the systems and topics which have been investigated using these methods is given. The bulk of the article is devoted to an overview of the basic quantum Monte Carlo methods, the forms and optimization of wavefunctions, performing calculations under periodic boundary conditions, using pseudopotentials, excited-state calculations, sources of calculational inaccuracy, and calculating energy differences and forces. (topical review)
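The variational flavour of the methods reviewed above can be shown in a few lines for a hydrogen-like atom (a standard textbook demonstration, not taken from the review): a Metropolis walk samples |psi|^2 for the trial function psi = exp(-alpha*r) in atomic units, and the energy is the average of the local energy E_L = -alpha^2/2 + (alpha - 1)/r. At alpha = 1 the trial function is exact, so E_L = -0.5 hartree with zero variance.

```python
import math, random

# Minimal variational Monte Carlo for hydrogen (atomic units).
# Trial function psi = exp(-alpha r); |psi|^2 is sampled by Metropolis.

def vmc_energy(alpha, steps, rng, delta=0.5):
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum = 0.0
    for _ in range(steps):
        xn = x + rng.uniform(-delta, delta)
        yn = y + rng.uniform(-delta, delta)
        zn = z + rng.uniform(-delta, delta)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # acceptance ratio for |psi|^2 = exp(-2 alpha r)
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
    return e_sum / steps

rng = random.Random(3)
print(vmc_energy(1.0, 2000, rng))   # exact trial function
print(vmc_energy(0.8, 20000, rng))  # lies above the ground state
```

The zero-variance property at the exact wavefunction is the reason optimized trial functions are so central to VMC and DMC accuracy.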
A multi-microcomputer system for Monte Carlo calculations
International Nuclear Information System (INIS)
Hertzberger, L.O.; Berg, B.; Krasemann, H.
1981-01-01
We propose a microcomputer system which allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and presumably many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory. (orig.)
Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented a message-passing version of the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Burnup calculation methodology in the serpent 2 Monte Carlo code
International Nuclear Information System (INIS)
Leppaenen, J.; Isotalo, A.
2012-01-01
This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)
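The time-integration issue behind burnup methods like those above can be seen in a one-nuclide toy problem dN/dt = -lam(N)·N, where the reaction rate lam depends on the current composition (the dependence here is invented and is not Serpent's model). A constant-rate, beginning-of-step scheme is compared with a predictor-corrector step:

```python
import math

def lam(n):
    return 0.5 + 0.5 * n          # made-up composition-dependent rate

def step_constant(n, dt):
    """Beginning-of-step rate held constant over the step."""
    return n * math.exp(-lam(n) * dt)

def step_predictor_corrector(n, dt):
    """Predict end-of-step composition, then redo with the average rate."""
    n_pred = n * math.exp(-lam(n) * dt)
    lam_avg = 0.5 * (lam(n) + lam(n_pred))
    return n * math.exp(-lam_avg * dt)

def integrate(step, n0, dt, n_steps):
    n = n0
    for _ in range(n_steps):
        n = step(n, dt)
    return n

ref = integrate(step_predictor_corrector, 1.0, 0.001, 1000)  # fine reference
for dt, n_steps in ((0.5, 2), (0.25, 4)):
    c = integrate(step_constant, 1.0, dt, n_steps)
    pc = integrate(step_predictor_corrector, 1.0, dt, n_steps)
    print(dt, abs(c - ref), abs(pc - ref))
```

The predictor-corrector step is second order in dt, so it tolerates the long depletion steps that make 3D reactor-scale burnup calculations affordable.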
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2004-01-01
Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application, the theory is applied to a simplified transport model, resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, the efficiency of Russian roulette can also be studied. This opens the way for further investigations, including optimization of Russian roulette parameters. (authors)
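A small numerical illustration of the unbiasedness property that underlies the analysis above: particles with weight below a cutoff survive with probability w/w_surv and continue with weight w_surv, otherwise they are killed. The expected total weight is unchanged while the number of tracked particles drops. The starting weights are arbitrary.

```python
import random

def roulette(weights, w_cut, w_surv, rng):
    """Apply Russian roulette to a population of particle weights."""
    out = []
    for w in weights:
        if w >= w_cut:
            out.append(w)                 # heavy enough: keep as is
        elif rng.random() < w / w_surv:
            out.append(w_surv)            # survivor carries weight w_surv
    return out

rng = random.Random(11)
weights = [rng.random() * 0.2 for _ in range(200000)]   # many low weights
after = roulette(weights, w_cut=0.1, w_surv=0.5, rng=rng)

print(sum(weights), sum(after), len(weights), len(after))
```

The total weight is preserved in expectation (each rouletted particle contributes (w/w_surv)·w_surv = w), which is exactly why roulette trades a little extra variance for fewer collisions, i.e. less CPU time, per history.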
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy
International Nuclear Information System (INIS)
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn
2008-01-01
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo code reports dose-to-tissue, as compared to the dose-to-water reported by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc
Hybrid Monte-Carlo method for ICF calculations
International Nuclear Information System (INIS)
Clouet, J.F.; Samba, G.
2003-01-01
) conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation and the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because of the time-step limitation imposed by hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction: finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we will present the principles of the method and a numerical simulation of an ICF hohlraum. We shall illustrate the benefits of the method by comparing the results with fully implicit Monte-Carlo calculations. In particular we shall show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (robust algorithm for spectral cut-off calculation, coupling with ALE capabilities): we shall briefly discuss these problems. (authors)
Diffusion Monte Carlo calculation of three-body systems
International Nuclear Information System (INIS)
Lu Mengjiao; Lin Qihu; Ren Zhongzhou
2012-01-01
The application of the diffusion Monte Carlo algorithm in three-body systems is studied. We develop a program and use it to calculate the property of various three-body systems. Regular Coulomb systems such as atoms, molecules, and ions are investigated. The calculation is then extended to exotic systems where electrons are replaced by muons. Some nuclei with neutron halos are also calculated as three-body systems consisting of a core and two external nucleons. Our results agree well with experiments and others' work. (authors)
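The projection idea behind the diffusion Monte Carlo algorithm can be shown on the simplest possible system, a 1-D harmonic oscillator (this toy model and all its parameters are illustrative, not the authors' three-body setup): walkers diffuse, branch with weight exp(-(V - E_T)·dt), and the trial energy E_T is steered to hold the population constant, so its late-time average approaches the ground-state energy E_0 = 0.5 (hbar = m = 1).

```python
import math, random

# Schematic diffusion Monte Carlo for V(x) = x^2 / 2 (no importance
# sampling, so a small time-step bias of order dt remains).

def dmc(n_target=300, dt=0.05, n_steps=2000, seed=5):
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_t, e_hist = 0.5, []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))        # diffusion move
            w = math.exp(-(0.5 * x * x - e_t) * dt)   # branching weight
            for _ in range(int(w + rng.random())):    # stochastic rounding
                new.append(x)
        walkers = new
        e_t += 0.1 * math.log(n_target / len(walkers))  # population control
        if step >= n_steps // 2:
            e_hist.append(e_t)
    return sum(e_hist) / len(e_hist)

print(dmc())
```

For Coulombic three-body systems the same machinery applies, with the potential evaluated from the pairwise charges and a trial wavefunction guiding the walk.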
Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
Procassini, R J; O'Brien, M J; Taylor, J M
2005-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations
Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
O'Brien, M; Taylor, J; Procassini, R
2004-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations
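The assignment step described in these two abstracts can be sketched greedily (a minimal illustration of the idea, not the authors' algorithm): processors are handed out one at a time to the domain with the highest remaining work per assigned processor, so the most worked domains end up with the most processors. The workload numbers are invented.

```python
def assign_processors(workloads, n_procs):
    """Greedy balancing: each domain gets at least one processor, then the
    domain with the worst work-per-processor ratio gets the next one."""
    procs = [1] * len(workloads)
    for _ in range(n_procs - len(workloads)):
        i = max(range(len(workloads)), key=lambda d: workloads[d] / procs[d])
        procs[i] += 1
    return procs

work = [120.0, 30.0, 10.0, 40.0]   # particles tracked per domain (made up)
print(assign_processors(work, 8))
```

Re-running this each cycle with fresh workload tallies, and migrating particles only when the predicted speedup exceeds the communication cost, captures the dynamic scheme in miniature.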
Calculations of pair production by Monte Carlo methods
International Nuclear Information System (INIS)
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
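The multi-dimensional integrals mentioned above are evaluated by sampling: average the integrand at random points and scale by the volume. As a generic stand-in (not an actual pair-production matrix element), integrate f(x) = prod_i cos(x_i) over [0, pi/2]^5, whose exact value is 1, and report the estimate with its statistical error.

```python
import math, random

def mc_integrate(f, dim, lo, hi, n, rng):
    """Plain Monte Carlo integration over the cube [lo, hi]^dim."""
    vol = (hi - lo) ** dim
    s = s2 = 0.0
    for _ in range(n):
        v = f([rng.uniform(lo, hi) for _ in range(dim)])
        s += v
        s2 += v * v
    mean = s / n
    err = vol * math.sqrt((s2 / n - mean * mean) / n)   # 1-sigma estimate
    return vol * mean, err

def f(x):
    p = 1.0
    for xi in x:
        p *= math.cos(xi)
    return p

est, err = mc_integrate(f, 5, 0.0, math.pi / 2, 50000, random.Random(9))
print(est, err)
```

The error falls as 1/sqrt(n) regardless of dimension, which is why Monte Carlo wins over quadrature for the high-dimensional phase-space integrals of Feynman diagrams.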
Molecular dynamics and Monte Carlo calculations in statistical mechanics
International Nuclear Information System (INIS)
Wood, W.W.; Erpenbeck, J.J.
1976-01-01
Monte Carlo and molecular dynamics calculations on statistical mechanical systems are reviewed, giving some of the more significant recent developments. It is noted that the term molecular dynamics refers to the time-averaging technique for hard-core and square-well interactions as well as for continuous force-law interactions. Ergodic questions; methodology; quantum mechanical, Lorentz, and one-dimensional systems; hard-core, square-well, and triangular-well systems; short-range soft potentials; and other systems are included. 268 references
Monte Carlo dose calculation of microbeam in a lung phantom
International Nuclear Information System (INIS)
Company, F.Z.; Mino, C.; Mino, F.
1998-01-01
Full text: Recent advances in synchrotron-generated X-ray beams with high fluence rate permit investigation of the application of an array of closely spaced, parallel or converging microplanar beams in radiotherapy. The proposed technique takes advantage of the hypothesised repair mechanism of capillary cells between alternate microbeam zones, which regenerates the lethally irradiated endothelial cells. The lateral and depth doses of 100 keV microplanar beams are investigated for different beam dimensions and spacings in tissue, lung, and tissue/lung/tissue phantoms. The EGS4 Monte Carlo code is used to calculate dose profiles at different depths and bundles of beams (up to 20 x 20 cm square cross section). The maximum dose on the beam axis (peak) and the minimum interbeam dose (valley) are compared at different depths, bundles, heights, widths and beam spacings. Relatively high peak-to-valley ratios are observed in the lung region, suggesting an ideal environment for microbeam radiotherapy. For a single field, the ratio at the tissue/lung interface will set the maximum dose to the target volume. However, in clinical application, several fields would be involved, allowing much greater doses to be applied for the elimination of cancer cells. We conclude therefore that multifield microbeam therapy has the potential to achieve useful therapeutic ratios for the treatment of lung cancer
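The peak-to-valley comparison described above reduces to a simple profile analysis: split the lateral dose profile into beam periods and take the ratio of the on-axis maximum to the interbeam minimum in each. The profile below is a made-up illustration, not an EGS4 result.

```python
def peak_to_valley(profile, period):
    """Average max/min ratio over the beam periods of a 1-D dose profile."""
    ratios = []
    for start in range(0, len(profile) - period + 1, period):
        chunk = profile[start:start + period]
        ratios.append(max(chunk) / min(chunk))
    return sum(ratios) / len(ratios)

# dose (arbitrary units) sampled across two beam periods
profile = [100.0, 40.0, 4.0, 4.0, 40.0,
           100.0, 42.0, 5.0, 5.0, 38.0]
print(peak_to_valley(profile, period=5))
```

A high ratio means the valley tissue stays well below tolerance while the peaks carry a therapeutic dose, which is the figure of merit the study tracks through depth and material changes.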
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may be biased because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs more inactive cycle MC runs than necessary, one wastes computing time, because inactive cycle MC runs serve only to elicit the fundamental-mode FSD. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles to conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
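The abstract does not spell out the criteria themselves; a widely used convergence diagnostic in this setting — offered here only as an illustrative stand-in, not as the authors' method — is the Shannon entropy of the FSD binned on a spatial mesh, which flattens out once the source reaches its fundamental mode:

```python
import numpy as np

def shannon_entropy(site_positions, bins, extent):
    """Shannon entropy of a fission source distribution binned on a mesh.
    A flat entropy trace across cycles is a common indicator that the FSD
    has reached its stationary (fundamental-mode) shape."""
    counts, _ = np.histogramdd(site_positions, bins=bins, range=extent)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                      # 0 * log2(0) contributes nothing
    return -np.sum(p * np.log2(p))

def entropy_converged(entropies, window=5, tol=0.01):
    """Hypothetical stopping rule: the last `window` cycle entropies stay
    within a small band relative to their mean."""
    recent = entropies[-window:]
    return len(entropies) >= window and \
        (max(recent) - min(recent)) < tol * abs(np.mean(recent))
```

In practice one would record the entropy each inactive cycle and stop (or warn) once the trace stabilizes.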
Energy Technology Data Exchange (ETDEWEB)
Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
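The standard stochastic baseline the authors compare against can be sketched as simple point-sampling over a bounding box. The CSG combinators and names below are an illustrative toy, not the paper's algorithm:

```python
import random

# minimal CSG: an object is a predicate telling whether a point is inside
def sphere(cx, cy, cz, r):
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def union(a, b):     return lambda p: a(p) or b(p)
def intersect(a, b): return lambda p: a(p) and b(p)
def subtract(a, b):  return lambda p: a(p) and not b(p)

def stochastic_volume(inside, bbox, n=200_000, seed=1):
    """Estimate the volume of a CSG object by uniform sampling of its
    bounding box; returns (estimate, one-sigma statistical uncertainty)."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bbox
    vbox = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(inside((rng.uniform(x0, x1),
                       rng.uniform(y0, y1),
                       rng.uniform(z0, z1))) for _ in range(n))
    f = hits / n
    return vbox * f, vbox * (f * (1.0 - f) / n) ** 0.5

# unit sphere: analytic volume 4*pi/3 ~ 4.18879
vol, sig = stochastic_volume(sphere(0, 0, 0, 1), ((-1, 1), (-1, 1), (-1, 1)))
```

The 1/sqrt(n) uncertainty of this estimator is what makes the paper's analytic decomposition attractive for complex components.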
Monte Carlo calculations of few-body and light nuclei
International Nuclear Information System (INIS)
Wiringa, R.B.
1992-01-01
A major goal in nuclear physics is to understand how nuclear structure comes about from the underlying interactions between nucleons. This requires modelling nuclei as collections of strongly interacting particles. Using realistic nucleon-nucleon potentials, supplemented with consistent three-nucleon potentials and two-body electroweak current operators, variational Monte Carlo methods are used to calculate nuclear ground-state properties, such as the binding energy, electromagnetic form factors, and momentum distributions. Other properties such as excited states and low-energy reactions are also calculable with these methods
Monte Carlo calculations supporting patient plan verification in proton therapy
Directory of Open Access Journals (Sweden)
Thiago Viana Miranda Lima
2016-03-01
Full Text Available Patient treatment plan verification consumes a substantial amount of quality assurance (QA) resources; this is especially true for Intensity Modulated Proton Therapy (IMPT). The use of Monte Carlo (MC) simulations to support QA has been widely discussed and several methods have been proposed. In this paper we studied an alternative approach to the one currently applied clinically at Centro Nazionale di Adroterapia Oncologica (CNAO). We reanalysed previously published data (Molinelli et al. 2013) covering 9 patient plans in which the warning QA threshold of 3% mean dose deviation was crossed. The possibility that these differences between measured and calculated dose were related to dose modelling (Treatment Planning System (TPS) vs MC), limitations of the dose delivery system or detector mispositioning was originally explored, but other factors such as the geometric description of the detectors were not ruled out. For the purpose of this work we compared ionisation-chamber measurements with the results of different MC simulations. We also studied some physical effects introduced by this new approach, for example inter-detector interference and the delta-ray threshold. Simulations accounting for a detailed geometry are typically superior (statistically significant difference, p-value around 0.01) to most of the MC simulations used at CNAO (inferior only to the shift approach used). No real improvement was observed in reducing the current delta-ray threshold (100 keV), and no significant interference between ion chambers in the phantom was detected (p-value 0.81). In conclusion, it was observed that the detailed geometrical description improves the agreement between measurement and MC calculations in some cases, but in other cases position uncertainty represents the dominant uncertainty. Inter-chamber disturbance was not detected at therapeutic proton energies, and the results from the current delta threshold are
Monte Carlo calculations of electron transport on microcomputers
International Nuclear Information System (INIS)
Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.
1990-01-01
In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC, to reduce the cost of Monte Carlo calculations on microcomputers. The Eltran2 program was then transferred to an IBM-compatible computer; Turbo BASIC and Microsoft Quick BASIC were used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speeds of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh SE or Macintosh Plus computer; higher values correspond to proportionally faster running times. Since Eltran2 is a one-dimensional program, it calculates the energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program called Eltran3 to compute more accurately the case with a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice that of Eltran2 for a similar case
Improvement of correlated sampling Monte Carlo methods for reactivity calculations
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Asaoka, Takumi
1978-01-01
Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second-order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons that have passed through perturbed regions in both the unperturbed and perturbed systems are followed in a way that maintains a strong correlation between the secondary neutrons in the two systems. These techniques are incorporated into the general-purpose Monte Carlo code MORSE so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
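The variance advantage of correlating histories between the unperturbed and perturbed systems can be illustrated with a toy common-random-numbers estimator. This is a generic sketch of the principle, not the MORSE implementation, and all names are hypothetical:

```python
import random

def estimate_difference(f, g, n=100_000, correlated=True, seed=3):
    """Estimate E[g(X)] - E[f(X)] for X ~ U(0,1).  With correlated=True both
    'systems' see the same random samples (common random numbers), which is
    the essence of correlated sampling: the difference of two nearly
    identical tallies is estimated with far less variance."""
    rng = random.Random(seed)
    rng2 = random.Random(seed + 1)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        y = x if correlated else rng2.random()
        total += g(y) - f(x)
    return total / n

# a 0.1% 'perturbation': the correlated estimate of the tiny difference is
# essentially exact, while independent runs bury it in statistical noise
base = lambda x: x
perturbed = lambda x: 1.001 * x
```

The same idea, applied to flight paths instead of toy integrands, is what makes small reactivity changes resolvable.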
Whole core calculations of power reactors by Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Mori, Takamasa
1993-01-01
Whole core calculations have been performed for a commercial-size PWR and a prototype LMFBR by using vectorized Monte Carlo codes. The core geometries were precisely represented in a pin-by-pin model. The calculated parameters were k_eff, control rod worth, power distribution and so on. Both multigroup and continuous-energy models were used, and the accuracy of the multigroup approximation was evaluated through a comparison of both results. One million neutron histories were tracked to considerably reduce variances. It was demonstrated that the high-speed vectorized codes could calculate k_eff, assembly power and some reactivity worths within practical computation time. For pin power and small reactivity worth calculations, on the order of 10 million histories would be necessary. The required numbers of histories to achieve target design accuracy were estimated for those neutronic parameters. (orig.)
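The history-count estimates mentioned at the end follow directly from the 1/sqrt(N) scaling of Monte Carlo uncertainties. A minimal helper — the function name and the example figures are illustrative, not taken from the paper:

```python
def histories_needed(n_observed, sigma_observed, sigma_target):
    """Monte Carlo statistical uncertainty scales roughly as 1/sqrt(N), so
    reaching a target standard deviation requires scaling the history count
    by (sigma_observed / sigma_target) squared."""
    return int(round(n_observed * (sigma_observed / sigma_target) ** 2))

# e.g. if one million histories gave a 1% pin-power uncertainty, reaching
# 0.3% needs on the order of 10 million histories
n_req = histories_needed(1_000_000, 0.01, 0.003)
```

This back-of-envelope scaling is consistent with the "order of 10 million histories" quoted in the abstract for pin power.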
Monte Carlo calculations of kQ, the beam quality conversion factor
International Nuclear Information System (INIS)
Muir, B. R.; Rogers, D. W. O.
2010-01-01
Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities, and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or manufacturers' user's manuals. The dose to air in the chamber is calculated using the EGSnrc user code egs_chamber with 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy, and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality, expressed as %dd(10)_x and TPR(20,10), are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum one-standard-deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated, and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross section for photons in water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values. Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more
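Under the constant-W/e assumption stated in the Methods, the "simple equation" reduces to a double dose ratio between the beam quality Q and the Co-60 reference quality. A sketch with hypothetical tally names:

```python
def beam_quality_conversion(dw_Q, dair_Q, dw_Co, dair_Co):
    """k_Q = (D_water / D_air)_Q / (D_water / D_air)_Co60, where D_air is
    the MC-tallied dose to air in the chamber cavity and D_water the dose
    to a small water voxel at the chamber midpoint on the central axis."""
    return (dw_Q / dair_Q) / (dw_Co / dair_Co)

# hypothetical tally values (Gy per source particle), not measured data
kQ = beam_quality_conversion(1.00, 0.90, 1.00, 0.92)
```

Each of the four inputs here is one of the Monte Carlo quantities described in the abstract.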
Monte Carlo benchmark calculations for a 400 MWth PBMR core
International Nuclear Information System (INIS)
Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.
2007-01-01
A large interest in high-temperature gas-cooled reactors (HTGR) has arisen in connection with hydrogen production in recent years. In this study, as part of the work to establish a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing HTGR methods for analyzing the neutronics and thermal-hydraulic behavior in the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems proposed for the PBMR: fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2). After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble was divided into cubic lattice elements in order to model a fuel pebble which contains, on average, 15000 CFPs (Coated Fuel Particles); each element contains one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All 5 different concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered-cubic) lattice model was employed in order to achieve the random packing of the core with a packing fraction of 0.61. The BCC lattice was also employed in Case F-2, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61. The calculations were pursued with the ENDF/B-VI cross-section library and used the sab2002 S(α,
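The quoted 0.1635 cm side length follows from dividing the pebble's spherical fuel zone equally among its CFPs. The 2.5 cm fuel-zone radius below is the standard PBMR pebble specification, assumed here because the abstract does not state it; with it, the formula lands within a fraction of a percent of the quoted value:

```python
import math

def cubic_lattice_pitch(fuel_zone_radius_cm, n_particles):
    """Side of the cubic lattice element that gives each coated fuel
    particle an equal share of the pebble's fuel-zone volume."""
    v_fuel = (4.0 / 3.0) * math.pi * fuel_zone_radius_cm ** 3
    return (v_fuel / n_particles) ** (1.0 / 3.0)

# assumed 2.5 cm fuel-zone radius, 15000 CFPs per pebble (from the abstract)
pitch = cubic_lattice_pitch(2.5, 15000)
```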
Comparison of calculations of a reflected reactor with diffusion, SN and Monte Carlo codes
International Nuclear Information System (INIS)
McGregor, B.
1975-01-01
A diffusion theory code, POW, was compared with a Monte Carlo transport theory code, KENO, for the calculation of a small C/235U cylindrical core with a graphite reflector. The calculated multiplication factors were in good agreement, but differences were noted in region-averaged group fluxes. A one-dimensional spherical geometry was devised to approximate the cylindrical geometry. Differences similar to those already observed were noted when the region-averaged fluxes from a diffusion theory (POW) calculation were compared with an SN transport theory (ANAUSN) calculation for the spherical model. Calculations made with the SN and Monte Carlo transport codes were in good agreement. It was concluded that the observed flux differences were attributable to the POW code, and were not inconsistent with inherent diffusion theory approximations. (author)
Monte Carlo reactor calculation with substantially reduced number of cycles
International Nuclear Information System (INIS)
Lee, M. J.; Joo, H. G.; Lee, D.; Smith, K.
2012-01-01
A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of the coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles, resulting in short active cycles, is beneficial even in the conventional MC scheme, although the operations wasted in inactive cycles cannot be reduced with more particles. A CMFD-assisted MC scheme is introduced as an effort to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly-based few-group condensation and homogenization scheme is introduced, and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)
Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations
International Nuclear Information System (INIS)
Yegin, G.
2008-01-01
In this study, the Multi-Geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. Using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique; it is used for the first time in the BrachyDose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems
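The enable/disable idea can be sketched in a few lines: the geometry is a list of overlapping objects, and the tracker simply skips disabled copies. This is an illustrative sketch of the concept, not the actual BrachyDose implementation:

```python
class GeoObject:
    """A geometry element in a superposition-of-objects model."""
    def __init__(self, name, inside, enabled=True):
        self.name = name
        self.inside = inside      # predicate: point -> bool
        self.enabled = enabled

def locate(objects, point):
    """Return the first enabled object containing the point; disabled
    copies are invisible to the particle tracker."""
    for obj in objects:
        if obj.enabled and obj.inside(point):
            return obj
    return None

# two copies of a source region at adjacent positions model one moving
# source: disabling copy A and enabling copy B 'moves' it mid-simulation
copy_a = GeoObject("src@z=0", lambda p: abs(p[2] - 0.0) < 0.5)
copy_b = GeoObject("src@z=1", lambda p: abs(p[2] - 1.0) < 0.5, enabled=False)
```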
Monte Carlo Calculation of Sensitivities to Secondaries' Angular Distributions
International Nuclear Information System (INIS)
Perel, R.L.
2003-01-01
An algorithm for Monte Carlo calculation of sensitivities of responses to secondaries' angular distributions (SAD) is developed, based on the differential operator approach. The algorithm was formulated for the sensitivity to Legendre coefficients of the SAD and is valid even in cases where the actual representation of SAD is not in the form of a Legendre series. The algorithm was implemented, for point- or ring-detectors, in a local version of the code MCNP. Numerical tests were performed to validate the algorithm and its implementation. In addition, an algorithm specific for the Kalbach-Mann representation of SAD is presented
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Besides the capabilities just mentioned, the MCOR code's newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations. Additionally
Development and verification of Monte Carlo burnup calculation system
International Nuclear Information System (INIS)
Ando, Yoshihira; Yoshioka, Kenichi; Mitsuhashi, Ishi; Sakurada, Koichi; Sakurai, Shungo
2003-01-01
A Monte Carlo burnup calculation code system has been developed to accurately evaluate various quantities required in the back-end field. Using the measured nuclide compositions of fuel rods in fuel assemblies irradiated in a commercial Netherlands BWR under the Actinide Research in a Nuclear Element (ARIANE) program, analyses have been performed to verify the code system. The code system developed in this paper has been verified through analysis of MOX and UO2 fuel rods. This system makes it possible to reduce the large margin assumed in present criticality analyses for LWR spent fuels. (J.P.N.)
Monte Carlo calculation of received dose from ingestion and inhalation of natural uranium
International Nuclear Information System (INIS)
Trobok, M.; Zupunski, Lj.; Spasic-Jokic, V.; Gordanic, V.; Sovilj, P.
2009-01-01
For the purpose of this study, eighty samples were taken from the Bela Crkva and Vrsac areas. The activity of radionuclides in the soil was determined by gamma-ray spectrometry. The Monte Carlo method was used to calculate the effective dose received by the population resulting from the inhalation and ingestion of natural uranium. The estimated doses were compared with the legally prescribed levels. (author) [sr
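Once an intake is estimated, the committed effective dose is the intake multiplied by a dose coefficient, and Monte Carlo sampling propagates the intake uncertainty through it. Both the 4.5e-8 Sv/Bq coefficient and the intake distribution below are illustrative assumptions, not values from the paper:

```python
import random

# assumed illustrative ingestion dose coefficient for natural uranium
# (Sv per Bq ingested); not a value taken from the paper
DOSE_COEFF_SV_PER_BQ = 4.5e-8

def mean_annual_dose(mean_intake_bq, rel_sd, n=100_000, seed=7):
    """Monte Carlo propagation of a normally distributed annual intake
    (truncated at zero) through the dose coefficient; returns the mean
    committed effective dose in Sv."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        intake = max(rng.gauss(mean_intake_bq, rel_sd * mean_intake_bq), 0.0)
        total += intake * DOSE_COEFF_SV_PER_BQ
    return total / n
```

The sampled mean can then be compared directly against a prescribed annual dose limit.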
One group neutron flux at a point in a cylindrical reactor cell calculated by Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Kocic, A [Institute of Nuclear Sciences Vinca, Beograd (Serbia and Montenegro)
1974-01-15
Mean values of the neutron flux over material regions and the neutron flux at space points in a cylindrical annular cell (one group model) have been calculated by Monte Carlo. The results are compared with those obtained by an improved collision probability method (author)
Research on GPU acceleration for Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Xu, Q.; Yu, G.; Wang, K.
2013-01-01
The Monte Carlo (MC) neutron transport method can be naturally parallelized on multi-core architectures because particle histories are independent during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The 'neutron transport step' is introduced to increase the GPU thread occupancy. In order to test the sensitivity to the MC code's complexity, a 1D one-group code and a 3D multi-group general-purpose code are respectively ported to GPUs, and the acceleration effects are compared. The results of numerical experiments show a considerable acceleration effect from the 'neutron transport step' strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs. (authors)
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
International Nuclear Information System (INIS)
Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.
2016-01-01
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action ("Lefschetz thimble"). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations
International Nuclear Information System (INIS)
Soran, P.D.; McKeon, D.C.; Booth, T.E.
1989-07-01
Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions are presented, new methods are investigated, and comparisons with porosity and density tools are shown. 5 refs., 1 tab
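The "minimum variance in minimum computation time" criterion is conventionally quantified by the Monte Carlo figure of merit; a minimal sketch, with hypothetical run numbers:

```python
def figure_of_merit(relative_error, cpu_time_s):
    """Standard MC figure of merit, FOM = 1 / (R^2 * T).  Since R^2 scales
    as 1/N and T as N, the FOM is roughly constant for a given tally and
    variance-reduction setup, so a higher FOM means a better importance
    function at equal precision."""
    return 1.0 / (relative_error ** 2 * cpu_time_s)

# comparing two hypothetical importance functions for the same tally
fom_analog = figure_of_merit(0.05, 120.0)   # analog run
fom_biased = figure_of_merit(0.02, 150.0)   # importance-sampled run
```

Ranking candidate importance functions by FOM is one concrete way to apply the criterion stated in the abstract.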
The calculation of neutron flux using Monte Carlo method
Günay, Mehtap; Bardakçı, Hilal
2017-09-01
In this study, a hybrid reactor system was designed using 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 fluids, the ENDF/B-VII.0 evaluated nuclear data library and 9Cr2WVTa structural material. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of a fusion-fission hybrid reactor system. The neutron flux was calculated as a function of the mixture components, radius and energy spectrum in the designed hybrid reactor system for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using the most recent version, MCNPX-2.7.0, of the Monte Carlo code.
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
Sgouros, George
2003-01-01
This book examines the applications of Monte Carlo (MC) calculations in therapeutic nuclear medicine, from basic principles to computer implementations of software packages and their applications in radiation dosimetry and treatment planning. It is written for nuclear medicine physicists and physicians as well as radiation oncologists, and can serve as a supplementary text for medical imaging, radiation dosimetry and nuclear engineering graduate courses in science, medical and engineering faculties. With chapters written by recognised authorities in their particular fields, the book covers the entire range of MC applications in therapeutic medical and health physics, from its use in imaging prior to therapy to dose distribution modelling for targeted radiotherapy. The contributions discuss the fundamental concepts of radiation dosimetry, radiobiological aspects of targeted radionuclide therapy, and the various components and steps required for implementing a dose calculation and treatment planning methodology in ...
Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation
International Nuclear Information System (INIS)
Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M
2004-01-01
The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple-source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams
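The 'random creep' weight search itself is iterative; as a stand-in illustration of the matching step, the weight that best fits a measured PDD in the least-squares sense has a closed form. All names and numbers below are hypothetical, and this is not the authors' algorithm:

```python
def fit_contamination_weight(photon_pdd, electron_pdd, measured_pdd):
    """Least-squares weight w minimizing sum((photon + w*electron - measured)^2),
    i.e. how much of the electron-contamination PDD must be added to the
    photon PDD to reproduce the measured curve."""
    num = sum((m - p) * e for m, p, e in zip(measured_pdd, photon_pdd, electron_pdd))
    den = sum(e * e for e in electron_pdd)
    return num / den

# synthetic check: a 'measured' curve built with a known weight of 0.3
photon = [80.0, 100.0, 95.0, 85.0]
electron = [12.0, 4.0, 1.0, 0.2]
measured = [p + 0.3 * e for p, e in zip(photon, electron)]
```

The electron component dominates near the surface, which is why the fit is most sensitive to the shallow-depth PDD points.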
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
Energy Technology Data Exchange (ETDEWEB)
Kong Chaocheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China)]. E-mail: kongchaocheng@tsinghua.org.cn; Li Quanfeng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Chen Huaibi [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Du Taibin [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Cheng Cheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Tang Chuanxiang [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Zhu Li [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Zhang Hui [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Pei Zhigang [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Ming Shenjin [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China)
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by empirical formulas. The effect of different accelerator head structures on the skyshine dose is also discussed in this paper.
Energy Technology Data Exchange (ETDEWEB)
Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)
2002-06-21
The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision method for treating intracranial lesions. Radiation from a single {sup 60}Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)
Monte Carlo dose calculation algorithm on a distributed system
International Nuclear Information System (INIS)
Chauvie, Stephane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe
2003-01-01
The main goal of modern radiotherapy techniques such as 3D conformal radiotherapy and intensity-modulated radiotherapy is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over recent years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to their application in clinical practice is the long time required for the calculations. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware for the routine evaluation of treatment plans. We set up a Beowulf cluster of 4 nodes connected by a low-cost network and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the calculation time decreased linearly as the number of distributed processes increased. This good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster offers a new instrument for dose calculation that could be applied in clinical practice, providing good support particularly for highly challenging prescriptions that need accurate calculation in zones of high dose gradient and large inhomogeneities.
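The near-linear speed-up reported here rests on a general property of Monte Carlo: independent batches of histories, run with different seeds on different nodes, can be merged into one tally afterwards. A minimal sketch of that batch-and-merge pattern, with a toy slab-transmission problem standing in for the Geant4 simulation (all names and parameters are illustrative):

```python
import math
import random

def run_batch(seed, n_histories, thickness_mfp=1.0):
    """One node's work: count particles crossing a purely absorbing slab.

    Path lengths are sampled as -ln(r), so the analytic transmission
    probability for a slab of thickness t mean free paths is exp(-t)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_histories)
               if -math.log(rng.random()) > thickness_mfp)
    return hits, n_histories

def merge(batch_results):
    """Combine per-node tallies into one estimate with its standard error."""
    hits = sum(h for h, _ in batch_results)
    n = sum(m for _, m in batch_results)
    p = hits / n
    return p, math.sqrt(p * (1.0 - p) / n)

# Four "nodes", each with its own seed; on a cluster these calls would run
# concurrently, and only the small (hits, n) tuples would travel the network.
results = [run_batch(seed, 50_000) for seed in (1, 2, 3, 4)]
estimate, stderr = merge(results)   # analytic answer is exp(-1)
```

Because each node only returns its tally, communication cost is negligible, which is why the scaling stays close to linear until per-node work becomes too small.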
Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System
Karim, Julia Abdul
2008-05-01
The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers of the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculation, able to simulate problems accurately. The calculation is based on the continuous-energy method. The code is capable of adopting accurate physics models, geometry descriptions and variance reduction techniques, and can achieve computational speeds higher than the conventional scalar method by several factors on a vector supercomputer. In this calculation, the RTP core was modelled as closely as possible to the real core, and results for k-eff, flux, fission densities and other quantities were obtained.
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
Energy Technology Data Exchange (ETDEWEB)
Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)
2007-07-01
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution in and around the tumour. A first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in patient CT images for individualized and more accurate treatment time calculation for a prescribed dose.
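As a rough illustration of the kind of dose kernel such a library evaluates, a Cs-137 seed in water can be approximated as an isotropic point source with inverse-square falloff and exponential attenuation. The attenuation coefficient and the kernel below are illustrative assumptions for the 662 keV Cs-137 line, not the INSTN library's actual model:

```python
import math

# Approximate linear attenuation coefficient of water at 662 keV, in cm^-1.
MU_WATER_662KEV = 0.086

def point_source_dose_rate(strength, r_cm, mu=MU_WATER_662KEV):
    """Simplified point-source kernel: inverse square law times attenuation.

    Ignores scatter build-up, so it underestimates dose at depth; a full
    Monte Carlo calculation accounts for the scattered component."""
    return strength * math.exp(-mu * r_cm) / (r_cm ** 2)

# Dose rate falls by slightly more than 4x between 1 cm and 2 cm from the seed
# (inverse square alone would give exactly 4x).
ratio = point_source_dose_rate(1.0, 1.0) / point_source_dose_rate(1.0, 2.0)
```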
Streamlining resummed QCD calculations using Monte Carlo integration
Energy Technology Data Exchange (ETDEWEB)
Farhi, David; Feige, Ilya; Freytsis, Marat; Schwartz, Matthew D. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St., Cambridge, MA 02138 (United States)
2016-08-18
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MADGRAPH http://dx.doi.org/10.1007/JHEP07(2014)079, ALPGEN http://dx.doi.org/10.1088/1126-6708/2003/07/001 or SHERPA http://dx.doi.org/10.1088/1126-6708/2009/02/007. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e{sup +}e{sup −} two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Attached code can be used to modify MADGRAPH to export the relevant LO hard functions and color structures for arbitrary processes.
Monte Carlo sampling strategies for lattice gauge calculations
International Nuclear Information System (INIS)
Guralnik, G.; Zemach, C.; Warnock, T.
1985-01-01
We have sought to optimize the elements of the Monte Carlo processes for thermalizing and decorrelating sequences of lattice gauge configurations and, for this purpose, to develop computational and theoretical diagnostics to compare alternative techniques. These have been applied to speed up the generation of random matrices, to compare heat-bath and Metropolis stepping methods, and to study autocorrelations of sequences in terms of the classical moment problem. The efficient use of statistically correlated lattice data is an optimization problem depending on the relation between the computer time needed to generate lattice sequences of sufficiently small correlation and the time needed to analyze them. We can solve this problem with the aid of a representation of autocorrelation data for various step lags as moments of positive definite distributions, using methods known for the moment problem to put bounds on statistical variances, in place of estimating the variances by too-lengthy computer runs.
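A core quantity behind the diagnostics described above is the integrated autocorrelation time, which converts a correlated sequence length into an effective number of independent samples. A minimal estimator is sketched below, with a simplified truncate-at-first-negative cutoff standing in for the authors' moment-problem bounds:

```python
import random

def autocorrelation(x, max_lag):
    """Normalized autocorrelation rho(k) of a sequence, for k = 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k))
            / ((n - k) * var) for k in range(max_lag + 1)]

def integrated_autocorr_time(x, max_lag=50):
    """tau = 1 + 2 * sum_k rho(k), truncated at the first negative estimate.

    A sequence of length N then carries roughly N / tau independent samples."""
    tau = 1.0
    for rho in autocorrelation(x, max_lag)[1:]:
        if rho < 0.0:   # simplified cutoff; noise dominates beyond this lag
            break
        tau += 2.0 * rho
    return tau
```

For an AR(1) process with lag-1 correlation phi, the exact value is (1 + phi) / (1 - phi), which makes a convenient sanity check for the estimator.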
Comparison of ONETRAN calculations of electron beam dose profiles with Monte Carlo and experiment
International Nuclear Information System (INIS)
Garth, J.C.; Woolf, S.
1987-01-01
Electron beam dose profiles have been calculated using a multigroup, discrete ordinates solution of the Spencer-Lewis electron transport equation. This was accomplished by introducing electron transport cross sections into the ONETRAN code in a simple manner. The authors' purpose is to 'benchmark' this electron transport model and to demonstrate its accuracy and capabilities over the energy range from 30 keV to 20 MeV. Many of their results are compared with extensive measurements and TIGER Monte Carlo data. In general, the ONETRAN results are smoother, agree with TIGER within the statistical error of the Monte Carlo histograms, and require about one tenth the running time of Monte Carlo.
Monte Carlo dose calculations for phantoms with hip prostheses
International Nuclear Information System (INIS)
Bazalova, M; Verhaegen, F; Coolens, C; Childs, P; Cury, F; Beaulieu, L
2008-01-01
Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo alloy) was studied. The prostheses were tested in a hypothetical prostate treatment with five 18 MV photon beams. The dose distributions for unilateral and bilateral prosthesis phantoms were calculated with the EGSnrc/DOSXYZnrc MC code. This was done in three phantom geometries: in the exact geometry, in the original CT geometry, and in an artefact-corrected geometry. The artefact-corrected geometry was created using a modified filtered back-projection correction technique. It was found that unilateral prosthesis phantoms do not show large dose calculation errors as long as the beams miss the artefact-affected volume. This is achievable for unilateral prosthesis phantoms (except for the Co-Cr-Mo prosthesis, which gives a 3% error) but not for bilateral prosthesis phantoms. The largest dose discrepancies were obtained for the bilateral Co-Cr-Mo hip prosthesis phantom: up to 11% in some voxels within the prostate. The artefact correction algorithm worked well for all phantoms and resulted in dose calculation errors below 2%. In conclusion, an MC treatment plan should include an artefact correction algorithm when treating patients with hip prostheses.
Calculation of beam quality correction factor using Monte Carlo simulation
International Nuclear Information System (INIS)
Kawachi, T.; Saitoh, H.; Myojoyama, A.; Katayose, T.; Kojima, T.; Fukuda, K.; Inoue, M.
2005-01-01
In recent years, the number of CyberKnife systems (Accuray, U.S.) in use has increased significantly. However, the CyberKnife has a unique treatment head structure and beam collimating system, so the global standard protocols cannot be adopted for absolute absorbed dose dosimetry in the CyberKnife beam. In this work, the energy spectra of photons and electrons from the CyberKnife treatment head at 80 cm SSD and at several depths in water are simulated in detailed geometry using the EGS Monte Carlo method. Furthermore, for the calculation of the beam quality correction factor k_Q, the mean restricted mass stopping power and the mass energy absorption coefficient of air, water and several chamber wall and waterproofing sleeve materials are calculated. As a result, the factors k_Q in the CyberKnife beam are determined for several ionization chambers, and the relationship between the beam quality index PDD(10)_x in the CyberKnife beam and k_Q is described in this report. (author)
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy to other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the 1st and 2nd rows; n=D,T for the 3rd to 5th rows; n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.
2008-01-01
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. Good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
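The random-sampling (uncertainty Monte Carlo) approach described here can be illustrated on a single-nuclide toy problem: draw the cross section from its assumed uncertainty distribution, deplete, and take statistics on the resulting inventory. Everything below (the 10% uncertainty, the one-group values, the fluence) is illustrative, not data from MCNP-ACAB:

```python
import math
import random
import statistics

def deplete(n0, sigma, phi, t):
    """Single-nuclide depletion: N(t) = N0 * exp(-sigma * phi * t)."""
    return n0 * math.exp(-sigma * phi * t)

def sampled_inventory(n_samples, sigma_mean, sigma_rel_unc, phi, t, seed=1):
    """Propagate a cross-section uncertainty by random sampling (toy model)."""
    rng = random.Random(seed)
    return [deplete(1.0, rng.gauss(sigma_mean, sigma_rel_unc * sigma_mean),
                    phi, t)
            for _ in range(n_samples)]

# One-group cross section known to +/- 10%, fluence chosen so sigma*phi*t = 1;
# for this fluence the inventory inherits roughly the same 10% uncertainty.
inventories = sampled_inventory(5000, sigma_mean=1.0, sigma_rel_unc=0.10,
                                phi=1.0, t=1.0)
rel_unc = statistics.stdev(inventories) / statistics.mean(inventories)
```

The real systems replace the one-line `deplete` with a full transport-plus-Bateman solve per sample, which is why random sampling is expensive but conceptually simple.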
Development of continuous energy Monte Carlo burn-up calculation code MVP-BURN
International Nuclear Information System (INIS)
Okumura, Keisuke; Nakagawa, Masayuki; Sasaki, Makoto
2001-01-01
Burn-up calculations based on the continuous-energy Monte Carlo method became possible with the development of MVP-BURN. To confirm the reliability of MVP-BURN, it was applied to two numerical benchmark problems: cell burn-up calculations for a High Conversion LWR lattice and for a BWR lattice with burnable poison rods. Major burn-up parameters showed good agreement with the results obtained by a deterministic code (SRAC95). Furthermore, the spent fuel composition calculated by MVP-BURN was compared with measured data. Atomic number densities of major actinides at 34 GWd/t could be predicted within 10% accuracy. (author)
MONTE CARLO CALCULATION OF THE ENERGY RESPONSE OF THE NARF HURST-TYPE FAST- NEUTRON DOSIMETER
Energy Technology Data Exchange (ETDEWEB)
De Vries, T. W.
1963-06-15
The response function for the fast-neutron dosimeter was calculated by the Monte Carlo technique (Code K-52) and compared with a calculation based on the Bragg-Gray principle. The energy deposition spectra so obtained show that the response spectra become softer with increased incident neutron energy above 3 MeV. The K-52 calculated total response is more nearly constant with energy than the Bragg-Gray response: the former increases 70 percent from 1 MeV to 14 MeV while the latter increases 135 percent over this energy range. (auth)
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)
2014-05-15
In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation. First, an analysis of the neutron distribution characteristics in a deep penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience; in this process, however, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation that helps the user eliminate guesswork is to split the geometry evenly; the importances are then estimated by a few iterations so as to preserve the population of particles penetrating each region. However, the even geometry splitting method can make the calculation inefficient because of changes in the mean free path (MFP) of the particles.
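Geometry splitting is easy to demonstrate on a deep-penetration toy problem: a purely absorbing slab several mean free paths thick, where an analog calculation sees almost no transmitted particles. Splitting each particle in two (at half weight) whenever it crosses into a deeper region keeps the penetrating population alive while leaving the estimate unbiased. All geometry and parameters below are illustrative, not the paper's benchmark:

```python
import math
import random

def transmission_with_splitting(n_src, boundaries, split=2, seed=12345):
    """Estimate transmission through a purely absorbing slab whose regions end
    at the given boundary depths (in mean free paths), splitting by `split`
    at every region boundary. Weight is conserved, so the tally is unbiased."""
    rng = random.Random(seed)
    total_weight = 0.0
    for _ in range(n_src):
        stack = [(0.0, 1.0, 0)]            # (position, weight, next-boundary index)
        while stack:
            x, w, b = stack.pop()
            flight = -math.log(rng.random())  # free path; exponential is memoryless,
            if x + flight < boundaries[b]:    # so restarting at a boundary is valid
                continue                      # absorbed before reaching the boundary
            if b == len(boundaries) - 1:
                total_weight += w             # escaped through the far face
                continue
            for _ in range(split):            # split into `split` half-weight copies
                stack.append((boundaries[b], w / split, b + 1))
    return total_weight / n_src

# Three regions of 2 mfp each: analog transmission is exp(-6), about 2.5e-3,
# so an analog run of 50k histories would score only ~120 particles.
estimate = transmission_with_splitting(50_000, [2.0, 4.0, 6.0])
```

The paper's point is visible even here: the split factor and surface locations were chosen by hand to roughly double the population per region; a poor choice (too much or too little splitting) wastes the gain.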
Monte Carlo calculation of 'skyshine' neutron dose from the ALS [Advanced Light Source]
International Nuclear Information System (INIS)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on 'skyshine' neutron dose from the ALS: sources of radiation; ALS modeling for skyshine calculations; the MORSE Monte Carlo code; implementation of MORSE; results of skyshine calculations from the storage ring; and comparison of MORSE shielding calculations.
International Nuclear Information System (INIS)
Gast, R.C.
1981-08-01
A procedure for defining diffusion coefficients from Monte Carlo calculations that yields coefficients suitable for use in neutron diffusion theory calculations is not readily obtained. This study provides a survey of the methods used to define diffusion coefficients from deterministic calculations and a discussion of why such traditional methods cannot be used in Monte Carlo. It further provides the empirical procedure used for defining diffusion coefficients from the RCP01 Monte Carlo program.
Artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy
International Nuclear Information System (INIS)
Martin, E.; Gschwind, R.; Henriet, J.; Sauget, M.; Makovicka, L.
2010-01-01
In order to reduce the computing time needed by Monte Carlo codes in the field of irradiation physics, notably in dosimetry, the authors report the use of artificial neural networks in combination with preliminary Monte Carlo calculations. During the learning phase, Monte Carlo calculations are performed in homogeneous media to allow the building of the neural network. Then, dosimetric calculations (in heterogeneous media unknown to the network) can be performed by the trained network. Results with equivalent precision can be obtained in less than one minute on a simple PC, whereas several days are needed for a Monte Carlo calculation.
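The scheme described above, slow Monte Carlo runs used once for training followed by fast network evaluation, can be sketched with a tiny pure-Python network fitted to a depth-dose curve. The one-hidden-layer model, the analytic stand-in for the Monte Carlo training data, and all parameters are illustrative, not the authors' architecture:

```python
import math
import random

def make_net(n_hidden, rng):
    """Randomly initialized one-hidden-layer tanh network, scalar in/out."""
    return {"w1": [rng.uniform(-1.0, 1.0) for _ in range(n_hidden)],
            "b1": [0.0] * n_hidden,
            "w2": [rng.uniform(-1.0, 1.0) for _ in range(n_hidden)],
            "b2": 0.0}

def forward(net, x):
    hidden = [math.tanh(w * x + b) for w, b in zip(net["w1"], net["b1"])]
    out = sum(w * h for w, h in zip(net["w2"], hidden)) + net["b2"]
    return out, hidden

def mse(net, data):
    return sum((forward(net, x)[0] - y) ** 2 for x, y in data) / len(data)

def train(net, data, lr=0.02, epochs=500):
    """Plain stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            out, hidden = forward(net, x)
            err = out - y
            for i, h in enumerate(hidden):
                grad_h = err * net["w2"][i] * (1.0 - h * h)  # back-prop through tanh
                net["w2"][i] -= lr * err * h
                net["w1"][i] -= lr * grad_h * x
                net["b1"][i] -= lr * grad_h
            net["b2"] -= lr * err

# "Monte Carlo" training data replaced here by an analytic depth-dose stand-in
# (normalized depth on [0, 1], exponentially attenuated dose).
data = [(d / 20.0, math.exp(-3.0 * d / 20.0)) for d in range(21)]
net = make_net(8, random.Random(0))
before = mse(net, data)
train(net, data)
after = mse(net, data)   # one forward pass per query once trained
```

The speed claim in the abstract comes from exactly this asymmetry: training consumes the expensive Monte Carlo data once, after which each dose query is a handful of arithmetic operations.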
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
Eichhorn, M.
1986-04-01
A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point source-point detector-systems are represented, and problems in applying the codes to practical tasks are discussed. (author)
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in a Monte Carlo criticality calculation are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
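The unreliability of a standard deviation estimated from few cycles can be quantified: for roughly normal cycle values, the relative uncertainty of the sample standard deviation itself is about 1/sqrt(2(n-1)), i.e. around 24% from only 10 cycles. The simulation below is a quick numerical check of that rule of thumb, not the paper's history-based estimator:

```python
import math
import random
import statistics

def spread_of_sample_std(n_cycles, n_replicas, seed=3):
    """Relative spread of the sample standard deviation across replica runs,
    each replica consisting of n_cycles independent unit-normal 'cycle' values."""
    rng = random.Random(seed)
    stds = [statistics.stdev(rng.gauss(0.0, 1.0) for _ in range(n_cycles))
            for _ in range(n_replicas)]
    return statistics.stdev(stds) / statistics.mean(stds)

# With 10 cycles the reported sigma is itself uncertain by ~1/sqrt(18) ~ 24%.
cv = spread_of_sample_std(10, 20_000)
```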
Monte Carlo calculations of patient doses from dental radiography
International Nuclear Information System (INIS)
Gibbs, S.J.; Pujol, A.; Chen, T.S.; Malcolm, A.W.
1984-01-01
A Monte Carlo computer program has been developed to calculate patient dose from diagnostic radiologic procedures. Input data include patient anatomy as serial CT scans at 1-cm intervals from a typical cadaver, beam spectrum, and projection geometry. The program tracks single photons, accounting for photoelectric effect, coherent (using atomic form factors) and incoherent (using scatter functions) scatter. Inhomogeneities (bone, teeth, muscle, fat, lung, air cavities, etc.) are accounted for as they are encountered. Dose is accumulated in a three-dimensional array of voxels, corresponding to the CT input. Output consists of isodose curves, doses to specific organs, and effective dose equivalent, H_E, as defined by ICRP. Initial results, from dental bite-wing projections using 90-kVp, half-wave rectified dental spectra, have produced H_E values ranging from 3 to 17 microsieverts (0.3-1.7 mrem) per image, depending on image receptor and projection geometry. The probability of stochastic effect is estimated by ICRP as 10^-2/Sv, or about 10^-7 to 10^-8 per image.
Intergenerational Correlation in Monte Carlo k-Eigenvalue Calculation
International Nuclear Information System (INIS)
Ueki, Taro
2002-01-01
This paper investigates intergenerational correlation in the Monte Carlo k-eigenvalue calculation of the neutron effective multiplication factor. To this end, the exponential transform for path stretching has been applied to large fissionable media with localized highly multiplying regions, because in such media an exponentially decaying shape is a rough representation of the importance of source particles. The numerical results show that the difference between real and apparent variances virtually vanishes for an appropriate value of the exponential transform parameter. This indicates that the intergenerational correlation of k-eigenvalue samples could be eliminated by the adjoint biasing of particle transport. The relation between the biasing of particle transport and the intergenerational correlation is therefore investigated in the framework of collision estimators, and the following conclusion has been obtained: within the leading-order approximation with respect to the number of histories per generation, the intergenerational correlation vanishes when the immediate importance is constant, and the immediate importance under simulation can be made constant by biasing particle transport with a function adjoint to the source neutron distribution, i.e., the importance over all future generations.
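The exponential transform mentioned above samples flight distances from a biased exponential and corrects with a statistical weight. A one-dimensional sketch (illustrative Python; `sigma_star` stands in for the transformed cross section, whose choice is the subject of the paper):

```python
import math
import random

def stretched_flight(sigma_t, sigma_star, rng=random):
    """Sample a flight distance from a biased exponential with modified
    cross section sigma_star, returning (distance, weight).  The weight
    is the ratio of the true to the biased pdf, keeping the estimator
    unbiased."""
    s = -math.log(1.0 - rng.random()) / sigma_star
    w = (sigma_t * math.exp(-sigma_t * s)) / (sigma_star * math.exp(-sigma_star * s))
    return s, w
```

With sigma_star < sigma_t, flights are stretched toward deep regions while the weight keeps the mean of any score unchanged.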
Error reduction techniques for Monte Carlo neutron transport calculations
International Nuclear Information System (INIS)
Ju, J.H.W.
1981-01-01
Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to the parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas that are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas.
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics
International Nuclear Information System (INIS)
Seker, V.; Thomas, J.W.; Downar, T.J.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
International Nuclear Information System (INIS)
Diez, C.J.; Cabellos, O.; Martinez, J.S.
2011-01-01
Nowadays, the knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, and their uncertainties, should be known to handle spent fuel in present fuel cycles (e.g. the high burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but only isotopic composition was the response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed over cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. Then, the results are compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties have more relevance is performed.
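The Monte-Carlo propagation technique amounts to sampling perturbed nuclear data from their covariances, rerunning the calculation, and taking statistics of the response. A toy sketch (illustrative Python; the two-exponential `decay_heat` model and its numbers are invented for the example, not from the paper):

```python
import numpy as np

def propagate(model, nominal, rel_cov, n_samples=1000, seed=0):
    """Propagate input uncertainties through `model` by Monte Carlo:
    sample inputs from a normal distribution with the given relative
    covariance, evaluate the response, and return its mean and std."""
    rng = np.random.default_rng(seed)
    nominal = np.asarray(nominal, dtype=float)
    cov = np.outer(nominal, nominal) * rel_cov   # relative -> absolute
    samples = rng.multivariate_normal(nominal, cov, size=n_samples)
    responses = np.array([model(x) for x in samples])
    return responses.mean(), responses.std(ddof=1)

def decay_heat(lams, t=10.0, q=(1.0, 0.3)):
    """Toy response: decay heat as a sum of two exponentials whose
    decay constants carry the sampled uncertainty."""
    return sum(qi * li * np.exp(-li * t) for qi, li in zip(q, lams))
```

In the real application, each sample would drive a full depletion calculation, and the response statistics would be collected at each cooling-time step.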
International Nuclear Information System (INIS)
Kotegawa, Hiroshi; Sasamoto, Nobuo; Tanaka, Shun-ichi
1987-02-01
Both the "measured radioactive inventory due to neutron activation in the shield concrete of JPDR" and the "measured intermediate and low energy neutron spectra penetrating through a graphite sphere" are analyzed using the continuous-energy Monte Carlo code MCNP, so as to estimate the calculational accuracy of the code for neutron transport in the thermal and epithermal energy regions. The analyses reveal that MCNP calculates thermal neutron spectra fairly accurately, while it apparently overestimates epithermal neutron spectra (of approximately 1/E distribution) as compared with the measurements. (author)
Usefulness of the Monte Carlo method in reliability calculations
International Nuclear Information System (INIS)
Lanore, J.M.; Kalli, H.
1977-01-01
Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies at the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem with the system components given in the fault tree representation. The second program, MONARC 2, has been written to solve the problem of complex-system reliability by Monte Carlo simulation; here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels.
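Direct Monte Carlo on a fault tree can be sketched as follows: sample each component's failed/working state from its failure probability and evaluate the top event through the tree's gates. This is an illustrative Python sketch with an invented three-component tree, not the structure of the systems studied in the report:

```python
import random

def top_event(states):
    """Hypothetical fault tree: the system fails if the pump fails, or
    both redundant valves fail (an AND gate under an OR gate)."""
    return states["pump"] or (states["valve_a"] and states["valve_b"])

def unavailability(p_fail, trials=100_000, seed=0):
    """Estimate the top-event probability by direct Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        states = {c: rng.random() < p for c, p in p_fail.items()}
        hits += top_event(states)
    return hits / trials
```

For the tree above the exact answer is p_pump + p_a*p_b - p_pump*p_a*p_b, which makes a convenient check of the estimator.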
Transport calculation of medium-energy protons and neutrons by Monte Carlo method
International Nuclear Information System (INIS)
Ban, Syuuichi; Hirayama, Hideo; Katoh, Kazuaki.
1978-09-01
A Monte Carlo transport code, ARIES, has been developed for protons and neutrons at medium energy (25 -- 500 MeV). Nuclear data provided by R.G. Alsmiller, Jr. were used for the calculation. To simulate the cascade development in the medium, each generation was represented by a single weighted particle, with the average number of emitted particles used as the weight. Neutron fluxes were stored by the collision density method. The cutoff energy was set to 25 MeV. Neutrons below the cutoff were stored to be used as the source for the low-energy neutron transport calculation based on the discrete ordinates method. Transport calculations were then performed for both low-energy neutrons (thermal -- 25 MeV) and secondary gamma rays. Energy spectra of emitted neutrons were calculated and compared with published experimental and calculated results. The agreement was good for incident particles with energy between 100 and 500 MeV. (author)
International Nuclear Information System (INIS)
Koo, Bon Seung; Lee, Kyung Hoon; Song, Jae Seung; Park, Sang Yoon
2013-01-01
In this paper, the basic nuclear characteristics of the major emitter materials were surveyed. In addition, preliminary calculations for a Cobalt-Vanadium fixed incore detector were performed using the Monte Carlo code. The calculational results were cross-checked with KARMA. KARMA is a two-dimensional multigroup transport theory code developed by KAERI and approved by the Korean regulatory agency to be employed as a nuclear design tool for Korean commercial pressurized water reactors. The nuclear characteristics of the major emitter materials were surveyed, and preliminary calculations of the hybrid fixed incore detector were performed with the MCNP code. The eigenvalue and pin-by-pin fission power distributions were calculated and showed good agreement with the KARMA calculation results. As future work, gamma power distributions, as well as several types of cross sections for the emitter, insulator, and collector regions of a Co-V ICI assembly, will be evaluated and compared.
International Nuclear Information System (INIS)
Santoro, R.T.; Barnes, J.M.
1983-08-01
Neutron and gamma-ray spectra resulting from the interactions of approx. 14-MeV neutrons in laminated slabs of stainless steel type-304 and borated polyethylene have been calculated using the Monte Carlo code MCNP. The calculated spectra are compared with measured data as a function of slab thickness and material composition and as a function of detector location behind the slabs. Comparisons of the differential energy spectra are made for neutrons with energies above 850 keV and for gamma rays with energies above 750 keV. The measured neutron spectra and those calculated using Monte Carlo methods agree within 5% to 50% depending on the slab thickness and composition and neutron energy. The agreement between the measured and calculated gamma-ray energy spectra is also within this range. The MCNP data are also in favorable agreement with attenuated data calculated previously by discrete ordinates transport methods and the Monte Carlo code SAM-CE
International Nuclear Information System (INIS)
Richet, Y.; Jacquet, O.; Bay, X.
2005-01-01
The accuracy of an iterative Monte Carlo calculation requires the convergence of the simulation output process. The present paper deals with a post-processing algorithm, applied to criticality calculations, that suppresses the transient due to initialization. It should be noticed that this initial transient suppression aims only at obtaining a stationary output series; the convergence of the calculation then needs to be guaranteed independently. The transient suppression algorithm consists of a repeated truncation of the first observations of the output process. The truncation of the first observations is performed as long as a steadiness test based on Brownian bridge theory is negative. This transient suppression method was previously tuned for a simplified model of criticality calculations, while this paper focuses on its efficiency in real criticality calculations. The efficiency test is based on four benchmarks with strong source convergence problems: 1) a checkerboard storage of fuel assemblies, 2) a pin cell array with irradiated fuel, 3) three one-dimensional thick slabs, and 4) an array of interacting fuel spheres. It appears that the transient suppression method needs to be more widely validated on real criticality calculations before any blind use as a post-processing step in criticality codes.
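The repeated-truncation loop has a simple skeleton. In this illustrative Python sketch the paper's Brownian-bridge steadiness test is replaced by a crude stand-in (comparing half-series means against the scatter), so it shows only the control flow, not the actual statistical test:

```python
import statistics

def looks_stationary(xs):
    """Stand-in steadiness check: compare the means of the two halves
    against the series scatter (the paper uses a Brownian-bridge test)."""
    h = len(xs) // 2
    spread = statistics.stdev(xs) or 1e-12
    return abs(statistics.mean(xs[:h]) - statistics.mean(xs[h:])) < spread / 2

def suppress_transient(series, chunk=10, min_keep=20):
    """Repeatedly discard the first `chunk` observations until the
    remainder passes the steadiness test (or too few points remain)."""
    xs = list(series)
    while len(xs) > min_keep and not looks_stationary(xs):
        xs = xs[chunk:]
    return xs
```

Applied to a series with a high initial transient, the loop trims chunks from the front until only the stationary tail remains.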
Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation
International Nuclear Information System (INIS)
Perell, R. L.
2002-01-01
The basic methods for solution of the transport equation that are in practical use today are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method is typically less time-consuming, the Monte Carlo method is often preferred for detailed and general description of three-dimensional geometries, and for calculations using cross sections that are point-wise energy dependent. For analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from solution of the direct and the adjoint transport equations. Algorithms to calculate sensitivities to cross sections with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons.
International Nuclear Information System (INIS)
Griesheimer, D. P.; Toth, B. E.
2007-01-01
A novel technique for accelerating the convergence rate of the iterative power method for solving eigenvalue problems is presented. Smoothed Residual Acceleration (SRA) is based on a modification of the well-known fixed-parameter extrapolation method for power iterations. In SRA the residual vector is passed through a low-pass filter before the extrapolation step. Filtering limits the extrapolation to the lower-order eigenmodes, improving the stability of the method and allowing the use of larger extrapolation parameters. In simple tests SRA demonstrates superior convergence acceleration when compared with an optimal fixed-parameter extrapolation scheme. The primary advantage of SRA is that it can be easily applied to Monte Carlo criticality calculations in order to reduce the number of discard cycles required before a stationary fission source distribution is reached. A simple algorithm for applying SRA to Monte Carlo criticality problems is described. (authors)
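The SRA idea can be sketched on a plain matrix power iteration. This is an illustrative Python sketch, not the authors' algorithm: a 3-point moving average stands in for their low-pass filter, and `alpha` is the fixed extrapolation parameter:

```python
import numpy as np

def sra_power_iteration(A, alpha=0.3, iters=200, tol=1e-10):
    """Power iteration with a smoothed-residual extrapolation step:
    filter the residual, then extrapolate along it."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        y /= np.linalg.norm(y)
        r = y - x
        # crude low-pass filter: 3-point moving average of the residual
        rs = np.convolve(r, [0.25, 0.5, 0.25], mode="same")
        x_new = y + alpha * rs
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because the filter damps the high-order modes of the residual, the extrapolation mostly accelerates the slowly converging low-order error, which is what permits larger alpha than plain fixed-parameter extrapolation.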
Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.
Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W
1998-05-01
The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
International Nuclear Information System (INIS)
Coulot, J
2003-01-01
Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen a great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, when improving computer performance allows one to obtain improved results in a dramatically reduced time. The goal of this book is to make, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software packages. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. The book is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about the mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitation of each technique involved in dosimetry (for instance activity quantitation). Nevertheless, there are some minor remarks to
Monte Carlo calculated CT numbers for improved heavy ion treatment planning
Directory of Open Access Journals (Sweden)
Qamhiyeh Sima
2014-03-01
Better knowledge of CT number values and their uncertainties can be applied to improve heavy ion treatment planning. We developed a novel method to calculate CT numbers for a computed tomography (CT) scanner using the Monte Carlo (MC) code BEAMnrc/EGSnrc. To generate the initial beam shape and spectra we conducted full simulations of the X-ray tube, filters and beam shapers for a Siemens Emotion CT. The simulation output files were analyzed to calculate projections of a phantom with inserts. A simple reconstruction algorithm (filtered backprojection, FBP, using a Ram-Lak filter) was applied to calculate the pixel values, which represent an attenuation coefficient normalized in such a way as to give zero for water (Hounsfield units, HU). Measured and Monte Carlo calculated CT numbers were compared. The average deviation between measured and simulated CT numbers was 4 ± 4 HU and the standard deviation σ was 49 ± 4 HU. The simulation also correctly predicted the behaviour of H-materials compared to Gammex tissue substitutes. We believe the developed approach represents a useful new tool for evaluating the effect of CT scanner and phantom parameters on CT number values.
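Two of the steps above are compact enough to sketch: the Ram-Lak ramp filtering of a projection before backprojection, and the normalization of reconstructed attenuation to Hounsfield units. Illustrative Python, not the authors' implementation:

```python
import numpy as np

def ram_lak_filter(projection):
    """Filter one parallel-beam projection with a Ram-Lak (|f|) ramp
    in the frequency domain, as used before backprojection in FBP."""
    freqs = np.fft.fftfreq(projection.size)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def to_hounsfield(mu, mu_water):
    """Normalize reconstructed attenuation so water maps to 0 HU
    and air (mu ~ 0) to -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```

The ramp filter zeroes the DC component (a flat projection filters to zero), which is why the reconstructed values need the water-based renormalization afterwards.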
International Nuclear Information System (INIS)
Ji Gang; Guo Yong; Luo Yisheng; Zhang Wenzhong
2001-01-01
Objective: To provide useful parameters for neutron radiotherapy, the authors present the results of a Monte Carlo simulation study investigating the dosimetric characteristics of linear 252 Cf fission neutron sources. Methods: A 252 Cf fission source and a tissue-equivalent phantom were modeled. The neutron and gamma radiation doses were calculated using a Monte Carlo code. Results: The neutron and gamma doses at several positions for 252 Cf in phantoms made of materials equivalent to water, blood, muscle, skin, bone and lung were calculated. Conclusion: The Monte Carlo results were compared with measured data and reference values. According to the calculation, using a water phantom to simulate local tissues such as muscle, blood and skin is reasonable for the calculation and measurement of dose distributions for 252 Cf.
Variational Monte Carlo calculations of few-body nuclei
International Nuclear Information System (INIS)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the 3 H, 3 He, and 4 He ground states, and for the energies of the low-lying scattering states in 4 He are presented. 25 refs., 3 figs
Variational Monte Carlo calculations of few-body nuclei
Energy Technology Data Exchange (ETDEWEB)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the 3 H, 3 He, and 4 He ground states, and for the energies of the low-lying scattering states in 4 He are presented. 25 refs., 3 figs.
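The variational Monte Carlo method itself is easy to illustrate on a solvable toy problem rather than a few-body nucleus: Metropolis-sample |psi|^2 for a trial wave function and average the local energy. The sketch below (illustrative Python, not the nuclear calculation) uses the 1-D harmonic oscillator with hbar = m = omega = 1 and the Gaussian trial psi(x) = exp(-a x^2 / 2), for which E_L(x) = a/2 + x^2 (1 - a^2)/2:

```python
import math
import random

def vmc_energy(a, steps=20000, step_size=1.0, seed=0):
    """Variational Monte Carlo energy for the 1-D harmonic oscillator
    with trial psi(x) = exp(-a x^2 / 2): Metropolis-sample |psi|^2
    and average the local energy E_L(x) = a/2 + x^2 (1 - a^2) / 2."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + step_size * (2 * rng.random() - 1)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-a * (x_new**2 - x**2)):
            x = x_new
        e_sum += a / 2 + x * x * (1 - a * a) / 2
    return e_sum / steps
```

At a = 1 the trial function is the exact ground state, the local energy is constant at 1/2, and the variance vanishes; any other a gives a higher mean energy, which is the variational principle the method exploits.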
Monte Carlo validation of self shielding and void effect calculations
International Nuclear Information System (INIS)
Tellier, H.; Coste, M.; Raepsaet, C.; Soldevila, M.; Van der Gucht, C.
1995-01-01
The self shielding validation and the void effect are studied with Monte Carlo method. The satisfactory comparison obtained between the APOLLO 2 results of the self shielding effect and the TRIPOLI and MCNP results allows us to be confident in the multigroup transport code. (K.A.)
Monte Carlo calculations of neutron thermalization in a heterogeneous system
Energy Technology Data Exchange (ETDEWEB)
Hoegberg, T
1959-07-15
The slowing down of neutrons in a heterogeneous system (a slab geometry) of uranium and heavy water has been investigated by Monte Carlo methods. Effects on the neutron spectrum due to the thermal motions of the scattering and absorbing atoms are taken into account. It has been assumed that the speed distribution of the moderator atoms is Maxwell-Boltzmann in character.
Zaidi, H
1999-01-01
The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)
2000-07-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
International Nuclear Information System (INIS)
Hoogenboom, J.E.
2000-01-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
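The variance-reduction moves mentioned above (splitting and Russian roulette) are weight games that preserve the expected score. A minimal illustrative Python sketch, with invented threshold and survival parameters:

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Kill low-weight particles with probability 1 - survival; the
    survivors' weight is boosted so the game is unbiased on average."""
    if weight >= threshold:
        return weight            # no game played
    if rng.random() < survival:
        return weight / survival
    return 0.0                   # particle terminated

def split(weight, n):
    """Split one particle of weight w into n copies of weight w/n."""
    return [weight / n] * n
```

In both games the expected total weight is unchanged (survival * w/survival = w for roulette, n * w/n = w for splitting), so only the variance and the bookkeeping cost are affected.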
Monte Carlo calculations and measurements of spectra from a C-14 source
International Nuclear Information System (INIS)
Borg, J.
1996-05-01
To perform Monte Carlo simulations it is necessary to model the physical geometries i.e., the source and detector geometry. However, a complete model of the physical geometry may not be possible or may result in a very low calculation efficiency. Substituting the complete source model with a simplified model is one way of increasing the calculation efficiency. In this report, the study of a simplified model of a 14 C source is described. Results of Monte Carlo calculations with the EGS4 code are compared with measurements with a β spectrometer consisting of two coaxial Si detectors, and a low-energy photon spectrometer being a Si(Li) detector. Calculations and measurements show generally good agreement. However, the difference (a factor of 4) between calculated and measured response to electrons for the Si(Li) detector indicates that this detector has a dead layer about 12 μm thick instead of 0.2 μm as reported by the manufacturer. The efficiency of the calculations is increased by a factor of 10, when the complete source model is replaced by the simplified source model. This reduces the calculation time of detector responses to a few days instead of weeks on the NRC SGI R4400 computers. Good agreement between measured and calculated data also verifies that the MC code EGS4 is a reliable and useful tool for simulating coupled electron and photon transport for particles with energies down to a few keV. (au) 3 tabs., 15 ills., 11 refs
Monte Carlo 20 and 45 MeV Bremsstrahlung and dose-reduction calculations
Energy Technology Data Exchange (ETDEWEB)
Goosman, D.R.
1984-08-14
The SANDYL electron-photon coupled Monte Carlo code has been compared with previously published experimental bremsstrahlung data at 20.9 MeV electron energy. The code was then used to calculate forward-directed spectra, angular distributions and dose-reduction factors for three practical configurations. These are: 20 MeV electrons incident on 1 mm of W + 59 mm of Be, 45 MeV electrons on 1 mm of W, and 45 MeV electrons on 1 mm of W + 147 mm of Be. The application of these results to flash radiography is discussed. 7 references, 12 figures, 1 table.
Monte Carlo 20 and 45 MeV Bremsstrahlung and dose-reduction calculations
International Nuclear Information System (INIS)
Goosman, D.R.
1984-01-01
The SANDYL electron-photon coupled Monte Carlo code has been compared with previously published experimental bremsstrahlung data at 20.9 MeV electron energy. The code was then used to calculate forward-directed spectra, angular distributions and dose-reduction factors for three practical configurations. These are: 20 MeV electrons incident on 1 mm of W + 59 mm of Be, 45 MeV electrons on 1 mm of W, and 45 MeV electrons on 1 mm of W + 147 mm of Be. The application of these results to flash radiography is discussed. 7 references, 12 figures, 1 table.
Optimum biasing of integral equations in Monte Carlo calculations
International Nuclear Information System (INIS)
Hoogenboom, J.E.
1979-01-01
In solving integral equations and estimating average values with the Monte Carlo method, biasing functions may be used to reduce the variance of the estimates. A simple derivation was used to prove the existence of a zero-variance collision estimator if a specific biasing function and survival probability are applied. This optimum biasing function is the same as that used for the well-known zero-variance last-event estimator.
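The zero-variance idea has a one-line analogue for plain integration: if the sampling density is proportional to the integrand, the estimator f(x)/p(x) returns the exact answer for every sample. An illustrative Python demonstration on I = integral of 2x over [0, 1] (a toy, not the collision-estimator construction of the paper):

```python
import math
import random

def importance_estimates(n, seed=0):
    """Estimate I = int_0^1 2x dx = 1 by sampling x from p(x) = 2x
    via the inverse transform x = sqrt(u).  Since p is proportional
    to the integrand f, f(x)/p(x) equals I exactly for every sample."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = math.sqrt(1.0 - rng.random())   # x in (0, 1], pdf 2x
        f, p = 2 * x, 2 * x
        out.append(f / p)
    return out
```

Of course p proportional to f requires already knowing I, so in practice (as in the paper) one approximates the optimum biasing function; the closer the approximation, the smaller the variance.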
Monte Carlo calculations for r-process nucleosynthesis
Energy Technology Data Exchange (ETDEWEB)
Mumpower, Matthew Ryan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-12
A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.
Monte Carlo calculation with unquenched Wilson-Fermions
International Nuclear Information System (INIS)
Montvay, I.
1984-01-01
A Monte Carlo updating procedure taking into account the virtual quark loops is described. It is based on a high-order hopping parameter expansion of the quark determinant for Wilson fermions. In a first test run, Wilson-loop expectation values are measured on a 6⁴ lattice at β=5.70 using a 16th-order hopping parameter expansion for the quark determinant. (orig.)
International Nuclear Information System (INIS)
Kim, Ok Joo
2007-02-01
Wavelet theory was applied to detect singularities in reactor power signals. Compared to the Fourier transform, the wavelet transform has localization properties in both space and frequency; therefore, after wavelet-transform de-noising, singular points can be found easily. To demonstrate this, we generated reactor power signals using a HANARO (a Korean multi-purpose research reactor) dynamics model consisting of 39 nonlinear differential equations, with added Gaussian noise. We applied wavelet transform decomposition and de-noising procedures to these signals. The method was effective in detecting singular events such as sudden reactivity changes and abrupt changes in intrinsic properties, so it could be profitably utilized in a real-time system for automatic event recognition (e.g., reactor condition monitoring). In addition, using the wavelet de-noising concept, variance reduction of Monte Carlo results was attempted. To obtain a correct solution in a Monte Carlo calculation, small uncertainty is required, which is quite time-consuming on a computer. Instead of a long calculation in the Monte Carlo code (MCNP), wavelet de-noising can be performed to obtain small uncertainties. We applied this idea to MCNP results for k_eff and the fission source. Variance was reduced somewhat while the average value was kept constant. In an MCNP criticality calculation, an initial guess for the fission distribution is used, and it can contaminate the solution. To avoid this, a sufficient number of initial generations, called inactive cycles, should be discarded. A convergence check can give a guideline for when to start the active cycles. Various entropy functions were tried to check the convergence of the fission distribution, and some reflect its convergence behavior well. Entropy could thus be a powerful method for determining inactive/active cycles in an MCNP calculation.
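The variance-reduction idea can be illustrated with a minimal, self-contained sketch: a single-level Haar wavelet transform with soft thresholding of the detail coefficients. This is an illustrative stand-in, not the transform or threshold rule used in the report; all function names are hypothetical.

```python
import math

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one level of the Haar transform."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) * s)
        x.append((a - d) * s)
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, threshold):
    """De-noise by thresholding the detail coefficients only, so the
    coarse (approximation) content - and hence the mean - is preserved."""
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_threshold(detail, threshold))
```

Because the transform is orthonormal and only the detail coefficients are shrunk, the sample mean of the de-noised tally is preserved while its variance cannot increase, which is exactly the behavior the abstract describes for the MCNP results.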
International Nuclear Information System (INIS)
Solc, J.; Suran, J.; Novotna, M.; Pavlis, J.
2008-01-01
The contribution describes a technique for determining the calibration coefficients of a radioactivity monitor using Monte Carlo calculations. The monitor is installed at the Temelin NPP adjacent to lines carrying a radioactive medium. The output quantity is the activity concentration (in Bq/m³), converted from the count rate (counts per minute) measured by the monitor. The value of this conversion constant, i.e. the calibration coefficient, was calculated for gamma photons emitted by Co-60 and compared to the data stated by the manufacturer and supplier of these monitors, General Atomic Electronic Systems, Inc., USA. The comparison shows very good agreement between the calculations and the manufacturer data; the differences are lower than the quadratic sum of the uncertainties. (authors)
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams.
Vandervoort, Eric J; Tchistiakova, Ekaterina; La Russa, Daniel J; Cygler, Joanna E
2014-02-01
In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum fraction of measurements passing a 2%/2 mm 2D gamma index criterion, for any applicator or energy, was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3D 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
International Nuclear Information System (INIS)
Quade, U.
1994-01-01
Neutron and gamma dose rate calculations were performed for the storage containers filled with plutonium nitrate at the Siemens MOX fabrication facility. The Monte Carlo code MCNP 4.2 was used for the particle transport calculations, and the calculated results were compared with experimental dose rate measurements. The choice of the code system proved appropriate, since all facets of the many-sided problem were well reproduced in the calculations. The position dependence, as well as the influence of the shieldings, the reflections and the mutual influences of the sources, was well described by the calculations for both the gamma and the neutron dose rates. However, good agreement with the experimental gamma dose rates could only be reached when the lead shielding of the detector was included in the geometry model of the calculations. For a few cases of thick shielding and soft gamma-ray sources, the statistics of the calculated results were not sufficient; in such cases, more elaborate variance reduction methods must be applied in future calculations. Thus the MCNP code in connection with NGSRC has proven an effective tool for the solution of this type of problem. (orig./HP)
Monte Carlo calculations of fast effects in uranium graphite lattices
International Nuclear Information System (INIS)
Beardwood, J.E.; Tyror, J.G.
1962-12-01
Details are given of the results of a series of computations of fast neutron effects in natural uranium metal/graphite cells. The computations were performed using the Monte Carlo code SPEC. It is shown that neutron capture in U238 is conveniently discussed in terms of a capture escape probability ζ as well as the conventional probability p. The latter is associated with the slowing down flux and has the classical exponential dependence on fuel-to-moderator volume ratio whilst the former is identified with the component of neutron flux above 1/E. (author)
MORSE/STORM: A generalized albedo option for Monte Carlo calculations
International Nuclear Information System (INIS)
Gomes, I.C.; Stevens, P.N.
1991-09-01
The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs
Monte Carlo calculation of the energy deposited in the KASCADE GRANDE detectors
International Nuclear Information System (INIS)
Mihai, Constantin
2004-01-01
The energy deposited by protons, electrons and positrons in the KASCADE GRANDE detectors is calculated with a simple and fast Monte Carlo method. The KASCADE GRANDE experiment (Forschungszentrum Karlsruhe, Germany), based on an array of plastic scintillation detectors, aims to study the energy spectrum of the primary cosmic rays around and above the 'knee' region of the spectrum. The primary spectrum is reconstructed by comparing the data collected by the detectors with simulations of the development of the extensive air shower initiated by the primary particle, combined with detailed simulations of the detector response. The simulation of the air shower development is carried out with the CORSIKA Monte Carlo code. The output file produced by CORSIKA is further processed with a program that estimates the energy deposited in the detectors by the particles of the shower. The standard method of calculating the energy deposit in the detectors is based on the Geant package from the CERN library. A new method, which calculates the energy deposit by fitting the Geant-based distributions with simpler functions, is proposed in this work. In comparison with the method based on the Geant package, this method is substantially faster. The time saving is important because the number of particles involved is large. (author)
Monte Carlo criticality calculations accelerated by a growing neutron population
International Nuclear Information System (INIS)
Dufek, Jan; Tuttelberg, Kaur
2016-01-01
Highlights: • Efficiency is significantly improved when the population size grows over cycles. • The bias in the fission source is balanced against the other errors in the source. • The bias in the fission source decays over the cycles as the population grows. - Abstract: We propose a fission source convergence acceleration method for Monte Carlo criticality simulation. As the efficiency of Monte Carlo criticality simulations is sensitive to the selected neutron population size, the method attempts to achieve the acceleration via on-the-fly control of the neutron population size. The neutron population size is gradually increased over successive criticality cycles so that the fission source bias amounts to a specific fraction of the total error in the cumulative fission source. An optimal setting then gives a reasonably small neutron population size, allowing for an efficient source iteration; at the same time the neutron population size is chosen large enough to ensure a source bias small enough that it does not limit the accuracy of the simulation.
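The on-the-fly population control can be caricatured with a toy two-region power iteration in which the population size follows a simple linear growth schedule. The fission matrix, the growth schedule and all names below are illustrative assumptions, not the control law derived in the paper (which balances the source bias against the statistical error in the cumulative source).

```python
import random

# Toy 2-region fission matrix: F[i][j] is the expected number of fission
# neutrons born in region i per fission neutron starting in region j.
# Its dominant eigenvalue (about 0.8541 here) plays the role of k_eff.
F = [[0.5, 0.3],
     [0.3, 0.6]]

def criticality_run(cycles, n0, growth, seed=7):
    """Power iteration with a neutron population that grows over cycles."""
    rng = random.Random(seed)
    source = [0.5, 0.5]              # initial (unconverged) fission source
    k_history = []
    for c in range(cycles):
        n = n0 + growth * c          # population size grows each cycle
        new = [0.0, 0.0]
        total = source[0] + source[1]
        for _ in range(n):
            # sample a fission site from the current source distribution
            j = 0 if rng.random() < source[0] / total else 1
            # expected-value (non-analog) scoring of the progeny
            new[0] += F[0][j]
            new[1] += F[1][j]
        k_history.append((new[0] + new[1]) / n)
        source = new                 # next-cycle fission source
    # average k over the second half, after the source has converged
    tail = k_history[len(k_history) // 2:]
    return sum(tail) / len(tail)
```

Early cycles use a small, cheap population while the source shape is still converging; later cycles, which dominate the cumulative tally, use a larger population so the per-cycle source bias shrinks.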
An algorithm of α-and γ-mode eigenvalue calculations by Monte Carlo method
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2003-01-01
A new algorithm for Monte Carlo calculation was developed to obtain α- and γ-mode eigenvalues. The α is a prompt neutron time decay constant measured in subcritical experiments, and the γ is a spatial decay constant measured by the exponential method for determining subcriticality. This algorithm can be implemented into existing Monte Carlo eigenvalue calculation codes with minimal modifications. The algorithm was implemented into the MCNP code, and its performance in calculating both mode eigenvalues was verified through comparison of the calculated eigenvalues with those obtained by fixed source calculations. (author)
DEFF Research Database (Denmark)
Mangiarotti, Alessio; Sona, Pietro; Ballestrero, Sergio
2012-01-01
Approximate analytical calculations of multi-photon effects in the spectrum of the total energy radiated by high-energy electrons crossing thin targets are compared to the results of Monte Carlo type simulations. The limits of validity of the analytical expressions found in the literature are established...
International Nuclear Information System (INIS)
Robinson, G.S.
1985-08-01
The calculation of resonance shielding by the subgroup method, as incorporated in the MIRANDA module of the AUS neutronics code system, is compared with Monte Carlo calculations for a number of thermal reactor lattices. For the large range of single-rod and rod-cluster lattices considered, AUS results for resonance absorption were high by up to two per cent.
Development of a software package for solid-angle calculations using the Monte Carlo method
International Nuclear Information System (INIS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-01-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which a visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate, without any difficulty, the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4; the proposed software package produces accurate solid-angle values with a greater computation speed than Geant4. -- Highlights: • This software package (SAC) gives accurate solid-angle values. • SAC calculates solid angles using the Monte Carlo method and has a higher computation speed than Geant4. • A simple but effective variance reduction technique put forward by the authors has been applied in SAC. • A visualization function and a graphical user interface are also integrated in SAC.
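The core Monte Carlo idea, independent of the package described above, can be sketched in a few lines: sample directions uniformly on the unit sphere and count the fraction that hit the detector. Here the "detector" is an on-axis circular disk, so the estimate can be checked against the closed-form result; this is a plain (non-variance-reduced) sketch with hypothetical names.

```python
import math
import random

def solid_angle_mc(d, r, n=200_000, seed=12345):
    """Monte Carlo estimate of the solid angle subtended by a disk of
    radius r at an on-axis point a distance d from the disk plane.

    Directions are sampled uniformly on the unit sphere; the fraction
    of rays hitting the disk, times 4*pi, estimates the solid angle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = 1.0 - 2.0 * rng.random()       # uniform in [-1, 1]
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        wz = cos_t
        if wz <= 0.0:
            continue                            # moving away from the disk
        t = d / wz                              # intersection with plane z = d
        x = t * sin_t * math.cos(phi)
        y = t * sin_t * math.sin(phi)
        if x * x + y * y <= r * r:
            hits += 1
    return 4.0 * math.pi * hits / n

# Exact result for an on-axis point: Omega = 2*pi*(1 - d/sqrt(d^2 + r^2)),
# a useful check before moving to shapes without closed-form answers.
```

The 1/sqrt(n) statistical error of this naive estimator is what motivates the variance reduction technique integrated into the package.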
Monte Carlo calculations of thermodynamic properties of deuterium under high pressures
International Nuclear Information System (INIS)
Levashov, P R; Filinov, V S; BoTan, A; Fortov, V E; Bonitz, M
2008-01-01
Two different numerical approaches have been applied to calculations of the shock Hugoniots and a compression isentrope of deuterium: direct path integral Monte Carlo and reactive Monte Carlo. The results show good agreement between the two methods at intermediate pressures, which indicates that dissociation effects are correctly accounted for in the direct path integral Monte Carlo method. Experimental data on both shock and quasi-isentropic compression of deuterium are well described by the calculations. Thus, the dissociation of deuterium molecules in these experiments, together with the interparticle interaction, plays a significant role.
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry
DEFF Research Database (Denmark)
Braad, P E N; Andersen, T; Hansen, Søren Baarsgaard
2016-01-01
Purpose: CT images are used for patient-specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. Methods: CT errors... in the ICRP/ICRU male phantom and in a patient PET/CT-scanned with 124I prior to radioiodine therapy. Results: CT number variations... body CT examinations at effective CT doses ∼2 mSv. Monte Carlo calculated absorbed doses depended on both the number of media types and accurate...
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
Energy Technology Data Exchange (ETDEWEB)
Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)
2014-06-01
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
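The three-step buffered deposition can be sketched host-side in plain Python (a stand-in for the GPU kernels and DMA transfers; all names are hypothetical). The point is that both schemes produce identical dose volumes, while the buffered scheme replaces random scatter-adds with sequential writes followed by a CPU-side reduction:

```python
from collections import defaultdict

def deposit_direct(events, volume):
    """Baseline: scatter-add every (voxel, dose) event straight into the
    volume. On a GPU this requires atomic adds at random addresses."""
    for voxel, dose in events:
        volume[voxel] += dose

def deposit_buffered(events, volume):
    """Step 1: each 'thread' appends its (voxel, dose) record to a flat
    buffer - sequential, coalesced, atomic-free writes.
    Step 2 would be the DMA transfer of the buffer to host memory.
    Step 3: accumulate the buffer into the dose volume on the CPU."""
    buffer = []                      # stand-in for the GPU-side buffer
    for voxel, dose in events:
        buffer.append((voxel, dose))
    acc = defaultdict(float)         # CPU-side reduction by voxel index
    for voxel, dose in buffer:
        acc[voxel] += dose
    for voxel, dose in acc.items():
        volume[voxel] += dose
```

In the actual system the three steps for different ray streams overlap in a pipeline (GPU compute, DMA, CPU reduction use disjoint hardware), which is where the reported 2-5X speedup comes from.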
Information criteria and higher Eigenmode estimation in Monte Carlo calculations
International Nuclear Information System (INIS)
Nease, B. R.; Ueki, T.
2007-01-01
Recently developed Monte Carlo methods of estimating the dominance ratio (DR) rely on autoregressive (AR) fittings of a computed time series. This time series is obtained by applying a projection vector to the fission source distribution of the problem. The AR fitting order necessary to accurately extract the mode corresponding to the DR depends on the number of fission source bins used, which makes it necessary to examine the convergence of the DR as the AR fitting order increases. We have therefore investigated whether the AR fitting order determined by information criteria can be reliably used to estimate the DR. Two information criteria have been investigated: the corrected Akaike Information Criterion (AICc) and the Minimum Description Length (MDL) criterion. These criteria appear to work well when applied to computations with a fine bin structure where the projection vector is applied. (authors)
Convergence testing for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.; Nease, B.; Cheatham, J.
2007-01-01
Determining convergence of Monte Carlo criticality problems is complicated by the statistical noise inherent in the random walks of the neutrons in each generation. The latest version of MCNP5 incorporates an important new tool for assessing convergence: the Shannon entropy of the fission source distribution, H_src. Shannon entropy is a well-known concept from information theory and provides a single number for each iteration to help characterize convergence trends for the fission source distribution. MCNP5 computes H_src for each iteration, and these values may be plotted to examine convergence trends. Convergence testing should include both k_eff and H_src, since the fission distribution will converge more slowly than k_eff, especially when the dominance ratio is close to 1.0. (authors)
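The entropy diagnostic itself is a one-liner over the binned source. A hedged sketch (bin handling and the function name are illustrative; base-2 logarithms are used, as is conventional, so a flat source over B bins gives log2(B)):

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission source distribution.

    H = -sum(p_i * log2(p_i)) over occupied bins. A flat distribution
    over B bins gives log2(B); H stabilizing from one criticality cycle
    to the next signals that the source shape has converged."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h
```

Plotting this value cycle by cycle, alongside k_eff, is exactly the convergence check the abstract recommends.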
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4; the two calculations agree within 5% for radial distances <6 mm.
International Nuclear Information System (INIS)
Sadeghi, Mahdi; Hosseini, Hamed; Raisali, Gholamreza
2008-01-01
Full text: The use of 103Pd seed sources for permanent prostate implantation has become a popular brachytherapy application. As recommended by the AAPM, the dosimetric characteristics of a new source must be determined using experiment and Monte Carlo simulation before its use in clinical applications; the goal of this report is therefore the experimental and theoretical determination of the dosimetric characteristics of this source following the recommendations of the AAPM TG-43U1 protocol. Figure 1 shows the geometry of the IRA-103Pd source. The source consists of a cylindrical silver core, 0.3 cm long x 0.05 cm in diameter, onto which a 0.5 nm layer of 103Pd has been uniformly adsorbed. The effective active length of the source is 0.3 cm, and the silver core is encapsulated inside a hollow titanium tube, 0.45 cm long with 0.07 cm inner and 0.08 cm outer diameters, closed by two caps. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to determine the relevant dosimetric parameters of the source. The geometry of the Monte Carlo simulation performed in this study consisted of a sphere of 30 cm diameter. Dose distributions around the source were measured in two Perspex phantoms using TLD chips; for these measurements, slabs of Perspex were machined to accommodate the source and the TLD chips. A value of 0.67 ± 1% cGy·h⁻¹·U⁻¹ for the dose rate constant, Λ, was calculated as the ratio of the dose rate at the reference point, Ḋ(r₀,θ₀), and the air-kerma strength, S_K; this may be compared with Λ values obtained for other 103Pd sources. Calculated and measured values of the dosimetric parameters of the source, including the radial dose function, g(r), and the anisotropy function, F(r,θ), are shown in separate figures; the radial dose function for the IRA-103Pd source and other 103Pd sources is included in Fig. 2. Comparison between the measured and Monte Carlo simulated radial dose functions, g(r), and anisotropy functions, F(r,θ), of this source demonstrated that they are in good agreement with each other, and the value of Λ is
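The TG-43 quantities named in this record follow a fixed recipe. Below is a minimal sketch of extracting g(r) from transverse-axis dose rates using the line-source geometry function; the function names are hypothetical, and the formalism is that of the TG-43U1 protocol which the abstract cites.

```python
import math

def g_line(r, L):
    """Line-source geometry function on the transverse axis (theta = 90
    degrees): G_L(r, 90) = beta / (L * r), where beta is the angle the
    active length L subtends at distance r."""
    beta = 2.0 * math.atan(L / (2.0 * r))
    return beta / (L * r)

def radial_dose_function(r_vals, dose_vals, L, r0=1.0):
    """TG-43 radial dose function from transverse-axis dose rates:
    g(r) = [D(r)/G_L(r)] / [D(r0)/G_L(r0)], so g(r0) = 1 by definition.
    r0 (the 1 cm reference distance) must appear in r_vals."""
    i0 = r_vals.index(r0)
    ref = dose_vals[i0] / g_line(r0, L)
    return [(d / g_line(r, L)) / ref for r, d in zip(r_vals, dose_vals)]
```

Dividing out the geometry function is what isolates attenuation and scatter in the medium, which is why g(r) curves from different 103Pd sources (as in Fig. 2) can be compared directly.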
Investigating the minimum achievable variance in a Monte Carlo criticality calculation
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2008-07-01
The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown in comparison with an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)
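The implicit-capture comparison mentioned above reduces, in its simplest form, to replacing analog kills by weight multiplication. A toy sketch follows (survival through a fixed number of collisions; in a real transport code scattering angles and path lengths remain random, so implicit capture reduces the variance rather than eliminating it as it does in this degenerate example):

```python
import random

def survival_analog(p_abs, n_coll, histories, rng):
    """Analog: kill the particle outright with probability p_abs at each
    collision; score 1 only if it survives all collisions."""
    score = 0.0
    for _ in range(histories):
        alive = True
        for _ in range(n_coll):
            if rng.random() < p_abs:
                alive = False
                break
        if alive:
            score += 1.0
    return score / histories

def survival_implicit(p_abs, n_coll, histories, rng):
    """Implicit capture: never kill; multiply the statistical weight by
    (1 - p_abs) at each collision instead. Every history contributes,
    so the estimator's variance is lower than the analog one."""
    score = 0.0
    for _ in range(histories):
        w = 1.0
        for _ in range(n_coll):
            w *= 1.0 - p_abs
        score += w
    return score / histories
```

Both estimate the survival probability (1 - p_abs)^n_coll without bias; the analog score fluctuates between 0 and 1 per history, while the implicit-capture score does not, which is the "merit" the abstract weighs against its bookkeeping cost.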
International Nuclear Information System (INIS)
Ohya, Kaoru; Kawata, Jun; Mori, Ichiro
1990-01-01
The incidence angle dependence of secondary electron emission from a carbon surface by low-energy electrons and hydrogen atoms is calculated using Monte Carlo simulations based on the kinetic emission model. The calculations show a very small increase, or even a decrease, of the secondary electron yield with oblique incidence. This is explained in terms of not only multiple elastic collisions of the incident particles with the carbon atoms but also a small penetration depth of the particles, comparable with the escape depth of the secondary electrons. In addition, two types of secondary electron emission are distinguished using the secondary electron yield statistics: emission due to particles trapped in the carbon, and emission due to backscattered particles. The high-yield component of the statistics is more suppressed at oblique incidence than at normal incidence. (author)
MCNP Perturbation Capability for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.; McKinney, G.W.
1999-01-01
The differential operator perturbation capability in MCNP4B has been extended to automatically calculate perturbation estimates for the track-length estimate of k_eff. The additional corrections required in certain cases for MCNP4B are no longer needed. Calculating the effect of small design changes on the criticality of nuclear systems with MCNP is now straightforward.
Development of Monte Carlo decay gamma-ray transport calculation system
Energy Technology Data Exchange (ETDEWEB)
Sato, Satoshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Kawasaki, Nobuo [Fujitsu Ltd., Tokyo (Japan); Kume, Etsuo [Japan Atomic Energy Research Inst., Center for Promotion of Computational Science and Engineering, Tokai, Ibaraki (Japan)
2001-06-01
In a DT fusion reactor, accurate evaluation of the decay gamma-ray biological dose rates after reactor shutdown is a critical concern. For this purpose, a three-dimensional Monte Carlo decay gamma-ray transport calculation system has been developed by coupling a three-dimensional Monte Carlo particle transport code with an induced activity calculation code. The developed system performs the following four functions: (1) the operational neutron flux distribution is calculated by the three-dimensional Monte Carlo particle transport code; (2) the induced activities are calculated by the induced activity calculation code; (3) the decay gamma-ray source distribution is obtained from the induced activities; and (4) decay gamma-rays are generated from this source distribution and their transport is calculated by the three-dimensional Monte Carlo particle transport code. In order to reduce the calculation time drastically, a biasing system for the decay gamma-ray source distribution has also been developed and included in the present system. In this paper, the outline and details of the system and an execution example are reported, along with an evaluation of the effect of the biasing system. (author)
Correction of CT artifacts and its influence on Monte Carlo dose calculations
International Nuclear Information System (INIS)
Bazalova, Magdalena; Beaulieu, Luc; Palefsky, Steven; Verhaegen, Frank
2007-01-01
Computed tomography (CT) images of patients having metallic implants or dental fillings exhibit severe streaking artifacts. These artifacts may prevent tumor and organ delineation and compromise dose calculation outcomes in radiotherapy. We applied a sinogram-interpolation metal streaking artifact correction algorithm to several phantoms of exactly known composition and to a prostate patient with two hip prostheses, and compared the original CT images with the artifact-corrected images in both cases. To evaluate the effect of the artifact correction on dose calculations, we performed Monte Carlo dose calculations with the EGSnrc/DOSXYZnrc code. For the phantoms, we performed calculations in the exact geometry, in the original CT geometry, and in the artifact-corrected geometry for photon and electron beams. The maximum errors in 6 MV photon beam dose calculations were found to exceed 25% in the original CT images when the standard DOSXYZnrc/CTCREATE calibration was used, but were below 2% in the artifact-corrected images when an extended calibration, which includes an extra calibration point for a metal, was used. The dose-volume histograms of a hypothetical target irradiated by five 18 MV photon beams in a hypothetical treatment differ significantly between the original CT geometry and the artifact-corrected geometry. This was found to be mostly due to the misassignment of tissue voxels to air caused by metal artifacts. We also developed a simple Monte Carlo model of a CT scanner and simulated the contributions of scatter and beam hardening to metal streaking artifacts. We found that whereas beam hardening has a minor effect on metal artifacts, scatter is an important cause of these artifacts
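The sinogram-interpolation idea can be sketched in a few lines: detector bins contaminated by the metal trace are flagged and replaced by interpolating the neighbouring clean bins along each projection. The following toy is an illustrative assumption (row-wise linear interpolation on invented data), not the authors' implementation:

```python
import numpy as np

def correct_metal_trace(sinogram, metal_mask):
    """Replace metal-contaminated sinogram bins by linearly
    interpolating the clean bins along each projection row."""
    corrected = sinogram.astype(float).copy()
    x = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(x[bad], x[~bad], corrected[i, ~bad])
    return corrected

# toy sinogram: smooth (linear) projections with a corrupted band
sino = np.tile(np.linspace(1.0, 2.0, 32), (4, 1))
mask = np.zeros(sino.shape, dtype=bool)
mask[:, 10:14] = True
sino_bad = sino.copy()
sino_bad[mask] = 50.0                       # simulated metal trace
fixed = correct_metal_trace(sino_bad, mask)
print(np.abs(fixed - sino).max())           # linear rows are recovered exactly
```

Real corrections interpolate in the full sinogram geometry and must then re-reconstruct the image; the toy only shows the in-painting step.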
Energy Technology Data Exchange (ETDEWEB)
Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)
1969-04-15
One of the main difficulties in Monte Carlo computations is the estimation of the variance of the results. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the results obtained to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious media: a body with constant cross section and no absorption, where all collisions are elastic and isotropic; and a body with variable cross section (presenting a very pronounced peak and hole), with anisotropy for high-energy elastic collisions and the possibility of inelastic collisions (this body presents all the features that can appear in a real case)
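The apparent-versus-real variance issue can be illustrated with a toy tally: the variance estimated inside a single short run is itself noisy, while the spread of the means of many independent short runs approaches the real variance. A minimal sketch, with exponential scores standing in for a transport tally:

```python
import random
import statistics

def short_run(n, rng):
    """One short Monte Carlo calculation: n histories with exponential scores."""
    scores = [rng.expovariate(1.0) for _ in range(n)]
    mean = statistics.fmean(scores)
    apparent_var = statistics.variance(scores) / n   # variance of the mean, from one run
    return mean, apparent_var

rng = random.Random(42)
runs = [short_run(100, rng) for _ in range(500)]
means = [m for m, _ in runs]
real_var = statistics.variance(means)    # spread over many independent short runs
print(real_var)                          # close to the true value 1/100
print(runs[0][1])                        # a single run's apparent variance is noisier
```

For the exponential score the true variance of a 100-history mean is exactly 0.01, so the 500-run estimate lands close to it, while any individual run's internal estimate can be well off.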
International Nuclear Information System (INIS)
Esnaashari, K. N.; Allahverdi, M.; Gharaati, H.; Shahriari, M.
2007-01-01
Stereotactic radiosurgery is an important clinical tool for the treatment of small lesions in the brain, including benign conditions and malignant and localized metastatic tumors. A dosimetry study was performed for the Elekta Synergy S as a dedicated stereotactic radiosurgery unit, capable of generating circular radiation fields with diameters of 1-5 cm at the isocentre, using the BEAM/EGS4 Monte Carlo code. Materials and Methods: The Elekta Synergy S linear accelerator, equipped with a set of 5 circular collimators from 10 mm to 50 mm in diameter at the isocentre distance, was used. The cones were inserted in a base plate mounted on the linac collimator head. A PinPoint chamber and a Wellhofer water tank chamber were selected for clinical dosimetry of the 6 MV photon beams. The results of simulations using the BEAM/EGS4 Monte Carlo system to model the beam geometry were compared with dose measurements. Results: Excellent agreement was found between Monte Carlo calculated and measured percentage depth doses and lateral dose profiles, which were acquired in a water phantom for circular cones 1, 2, 3, 4 and 5 cm in diameter. The comparison between calculations and measurements showed at most 0.5% or 1 mm difference for all field sizes. The penumbra (80-20%) at 5 cm depth in the water phantom at SSD = 95 cm ranged from 1.5 to 2.1 mm for circular collimators 1 to 5 cm in diameter. Conclusion: This study showed that the BEAMnrc code is accurate in modeling the Synergy S linear accelerator equipped with circular collimators
International Nuclear Information System (INIS)
Ghassoun, Jillali; Jehoauni, Abdellatif
2000-01-01
In practice, the estimation of the flux obtained from the Fredholm integral equation requires a truncation of the Neumann series. The truncation order N must be large in order to obtain a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without degrading the quality of the estimate. In previous works, in order to obtain rapid convergence of the calculations, only weakly diffusing media were considered, which permitted truncating the Neumann series after about 20 terms. In most practical shields, however, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms yields a poor flux estimate; high orders thus become necessary for a good estimate. We suggest two simple techniques based on conditional Monte Carlo. We propose a simple density for sampling the steps of the random walk, as well as a modified stretching-factor density depending on a biasing parameter, which affects the sample vector by stretching or shrinking the original random walk so as to obtain a chain that ends at a given point of interest. We also obtained a simple empirical formula giving the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; we obtained good agreement together with a good acceleration of the convergence of the calculations. (author)
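The effect of truncating the series in a highly scattering medium can be illustrated with a toy random walk in which each collision is followed by scattering with probability p: the expected number of scored collisions truncated at order N is (1 - p^N)/(1 - p), so for p = 0.9 a truncation at N = 20 loses roughly 12% of the flux. A sketch under this one-parameter assumption (not the paper's geometry):

```python
import random

def truncated_flux(p_scatter, order, n_walks, rng):
    """Score collisions of random walks truncated after `order` collisions;
    each collision is followed by scattering with probability p_scatter."""
    total = 0.0
    for _ in range(n_walks):
        for _ in range(order):
            total += 1.0                      # one collision scored
            if rng.random() >= p_scatter:     # absorption ends the walk
                break
    return total / n_walks

rng = random.Random(1)
# exact mean collision number for p = 0.9 is 1/(1 - 0.9) = 10
low = truncated_flux(0.9, 20, 20000, rng)     # biased low: ~(1 - 0.9**20)/0.1 = 8.78
high = truncated_flux(0.9, 200, 20000, rng)   # essentially unbiased
print(low, high)
```

For a weakly diffusing medium (say p = 0.3) the same truncation at 20 terms would be harmless, which is the regime the earlier works relied on.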
Energy Technology Data Exchange (ETDEWEB)
Nordenfors, C
1999-02-01
To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector, which is normally used in the in-situ measurements carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center; it is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy is comparable to that of other methods already in use for measuring dose rate. Bearing in mind that this method provides nuclide-specific doses, it is useful in radiation protection, since knowing the relations between different nuclides, and how they change, is very important when estimating risks
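Once the response function is known, unfolding amounts to inverting the response matrix that maps the incident fluence groups onto measured pulse-height channels. A noise-free least-squares toy (the 3x3 response matrix is invented for illustration, not a germanium response):

```python
import numpy as np

# toy response matrix: columns are true fluence groups, rows are measured
# channels (a full-energy fraction plus partial, Compton-like deposits)
R = np.array([[0.7, 0.0, 0.0],
              [0.2, 0.6, 0.0],
              [0.1, 0.3, 0.5]])
true_fluence = np.array([100.0, 50.0, 20.0])
measured = R @ true_fluence                # the folded (measured) spectrum

# unfolding: solve R f = measured in the least-squares sense
unfolded, *_ = np.linalg.lstsq(R, measured, rcond=None)
print(unfolded)                            # recovers [100, 50, 20] in this noise-free toy
```

With counting noise the plain inverse amplifies fluctuations, which is why practical unfolding adds regularization or iterative schemes; the toy only shows the matrix relation.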
Three-dimensional Monte Carlo calculation of some nuclear parameters
Günay, Mehtap; Şeker, Gökmen
2017-09-01
In this study, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the system. A beryllium (Be) zone 3 cm wide was used for neutron multiplication between the liquid first wall and the blanket. This study analyzes nuclear parameters such as the tritium breeding ratio (TBR), the energy multiplication factor (M), the heat deposition rate and the fission reaction rate in the liquid first wall, blanket and shield zones, and investigates the effects of the reactor-grade Pu content of the designed system on these nuclear parameters. Three-dimensional analyses were performed with the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
Three-dimensional Monte Carlo calculation of some nuclear parameters
Directory of Open Access Journals (Sweden)
Günay Mehtap
2017-01-01
Full Text Available In this study, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the system. A beryllium (Be) zone 3 cm wide was used for neutron multiplication between the liquid first wall and the blanket. This study analyzes nuclear parameters such as the tritium breeding ratio (TBR), the energy multiplication factor (M), the heat deposition rate and the fission reaction rate in the liquid first wall, blanket and shield zones, and investigates the effects of the reactor-grade Pu content of the designed system on these nuclear parameters. Three-dimensional analyses were performed with the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines the major features of the Monte Carlo method by citing various applications of the method and techniques used in Monte Carlo codes. Major areas of application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation with the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation (combinatorial geometry; first-, second- and fourth-degree surfaces; and lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). The major features of the Monte Carlo method are as follows: (1) the neutron source distribution and systems of complex geometry can be simulated accurately; (2) physical quantities such as the neutron flux in a region, on a surface or at a point can be evaluated; and (3) the calculations require long computing times. (Nogami, K.)
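Two of the variance-reduction techniques listed, Russian roulette and splitting, share a single invariant: the expected total particle weight is preserved. A small sketch of the weight bookkeeping (the thresholds are illustrative choices):

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=4.0, rng=random):
    """Weight bookkeeping for two variance-reduction techniques:
    low-weight particles play Russian roulette (survivors double their
    weight), heavy particles are split into equal fractions.  The
    expected total weight is preserved either way."""
    if weight < w_low:                              # Russian roulette
        return [2.0 * weight] if rng.random() < 0.5 else []
    if weight > w_high:                             # splitting
        n = int(weight // w_high) + 1
        return [weight / n] * n
    return [weight]

rng = random.Random(7)
trials = 100000
survived = sum(sum(roulette_and_split(0.1, rng=rng)) for _ in range(trials))
split = roulette_and_split(9.0, rng=rng)
print(survived / trials)     # ~0.1: roulette is unbiased on average
print(split)                 # [3.0, 3.0, 3.0]: splitting conserves weight exactly
```

Roulette trades a small bias-free loss of particles for less time spent tracking unimportant histories; splitting does the opposite in important regions.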
International Nuclear Information System (INIS)
Valentine, T.E.; Mihalczo, J.T.
1996-01-01
One primary concern in the design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between core power production and the external detector response using Monte Carlo calculations, and suggests a technique to measure this time delay. The Monte Carlo code KENO-NR was used to determine the time delay between core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was about 10 ms for this conceptual design of the ANS reactor
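One common way to measure such a delay (an illustrative technique; the measurement actually suggested in the paper may differ) is to locate the peak of the cross-correlation between the core power fluctuation signal and the detector signal:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                        # sampling rate: 1 ms resolution
n = 4096
core = rng.normal(size=n)          # fluctuating core power signal
delay = 10                         # true delay: 10 samples = 10 ms
detector = np.roll(core, delay) + 0.5 * rng.normal(size=n)

# the cross-correlation peaks at the lag equal to the delay
corr = np.correlate(detector, core, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
print(lag / fs * 1000.0)           # estimated delay in ms -> 10.0
```

The peak stands well above the correlation noise floor as long as the record is long compared with the delay and the detector noise is moderate.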
HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation
International Nuclear Information System (INIS)
Lux, Ivan
1983-08-01
The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric annular regions around a hexagonal reactor core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of using the Monte Carlo efficiency-increasing options included in the program
Quantum Monte Carlo calculation of the Fermi-liquid parameters in the two-dimensional electron gas
International Nuclear Information System (INIS)
Kwon, Y.; Ceperley, D.M.; Martin, R.M.
1994-01-01
Excitations of the two-dimensional electron gas, including many-body effects, are calculated with a variational Monte Carlo method. Correlated sampling is introduced to calculate small energy differences between different excitations. The usual pair-product (Slater-Jastrow) trial wave function is found to lack certain correlations entirely, so that backflow correlation is crucial. From the excitation energies calculated here, we determine the Fermi-liquid parameters and related physical quantities such as the effective mass and the Landé g factor of the system. Our results for the effective mass are compared with previous analytic calculations
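Correlated sampling is a generic trick: evaluating both quantities on the same random points makes most of the statistical noise cancel in their difference. A one-dimensional illustration, with a quadratic integrand standing in for the two nearby excitation energies:

```python
import random
import statistics

rng = random.Random(3)
xs = [rng.random() for _ in range(5000)]

def f(x, a):
    """Stand-in 'energy' integrand for parameter a."""
    return (x + a) ** 2

# correlated sampling: the SAME points serve both estimates, so the
# noise cancels in the difference (the exact difference is 0.0301)
diff_corr = statistics.fmean(f(x, 1.01) - f(x, 1.00) for x in xs)

# independent sampling: the small difference drowns in the noise
ys = [rng.random() for _ in range(5000)]
diff_indep = (statistics.fmean(f(x, 1.01) for x in xs)
              - statistics.fmean(f(y, 1.00) for y in ys))
print(diff_corr, diff_indep)
```

With shared points the statistical error of the difference scales with the (tiny) variation of f between the two parameters, not with the variance of f itself.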
Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.
Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard
2015-05-08
The purpose of this study is to evaluate the accuracy of the electron Monte Carlo (eMC) dose calculation algorithm included in a commercial treatment planning system and to compare its performance against an electron pencil beam algorithm. Several tests were performed to explore the system's behavior in simple geometries and in configurations encountered in clinical practice. The first series of tests was executed in a homogeneous water phantom, where experimental measurements and eMC-calculated dose distributions were compared for various combinations of energy and applicator. More specifically, we compared beam profiles and depth-dose curves at different source-to-surface distances (SSDs) and gantry angles, using dose difference and distance to agreement. We also compared output factors, studied the effects of the algorithm input parameters (the random number generator seed and the calculation grid size), and performed a calculation time evaluation. Three different inhomogeneous solid phantoms were built, using high- and low-density material inserts, to simulate clinically relevant heterogeneity conditions: a small air cylinder within a homogeneous phantom, a lung phantom, and a chest wall phantom. We also used an anthropomorphic phantom to compare eMC calculations to measurements. Finally, we proceeded with an evaluation of the eMC algorithm on a clinical case of nose cancer. In all the cases mentioned, measurements, carried out by means of XV-2 films, radiographic films or EBT2 Gafchromic films, were used to compare eMC calculations with dose distributions obtained from an electron pencil beam algorithm. eMC calculations in the water phantom were accurate: discrepancies for depth-dose curves and beam profiles were under 2.5% and 2 mm. Dose calculations with eMC for the small air cylinder and the lung phantom agreed within 2% and 4%, respectively. eMC calculations for the chest wall phantom and the anthropomorphic phantom also
Optimized iteration in coupled Monte Carlo thermal-hydraulics calculations
International Nuclear Information System (INIS)
Hoogenboom, J.E.; Dufek, J.
2013-01-01
This paper describes an optimized iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and the practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods were also tested, and it is concluded that the presented iteration method is near optimal. (authors)
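The stochastic iteration idea can be sketched with a scalar "power": each iteration produces a noisy Monte Carlo estimate from a growing number of histories, and it is blended into the running value with a relaxation factor decreasing as 1/n. A toy sketch (the Gaussian noise model and the growth factor are illustrative assumptions):

```python
import random

rng = random.Random(5)
true_power = 2.0        # the (unknown) converged power
estimate = 0.0
histories = 100
for n in range(1, 21):
    # one coupled step: a noisy "Monte Carlo" result whose statistical
    # error shrinks as the number of histories grows
    noisy = true_power + rng.gauss(0.0, 1.0) / histories ** 0.5
    alpha = 1.0 / n                      # decreasing relaxation factor
    estimate = (1.0 - alpha) * estimate + alpha * noisy
    histories = int(histories * 1.3)     # more histories each iteration
print(estimate)                          # converges toward 2.0
```

With alpha = 1/n the running value is the average of all noisy results so far, which is what makes the scheme converge rather than oscillate around the fixed point.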
Exploring the use of a deterministic adjoint flux calculation in criticality Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.; Miss, J.; Richet, Y.; Martin, N.; Hebert, A.
2011-01-01
The paper presents a preliminary study on the use of a deterministic adjoint flux calculation to improve source convergence by reducing the number of iterations needed to reach the converged distribution in criticality Monte Carlo calculations. Slow source convergence in Monte Carlo eigenvalue calculations may lead to underestimation of the effective multiplication factor or of reaction rates. The convergence speed depends on the initial distribution and on the dominance ratio. We propose using an adjoint flux estimate to modify the transition kernel according to the importance sampling technique. This adjoint flux is also used as the initial guess of the first-generation distribution for the Monte Carlo simulation. The calculated variance of a local current estimator is also examined. (author)
Energy Technology Data Exchange (ETDEWEB)
Benmosbah, M. [Laboratoire de Chimie Physique et Rayonnement Alain Chambaudet, UMR CEA E4, Universite de Franche-Comte, 16 route de Gray, 25030 Besancon Cedex (France); Groetz, J.E. [Laboratoire de Chimie Physique et Rayonnement Alain Chambaudet, UMR CEA E4, Universite de Franche-Comte, 16 route de Gray, 25030 Besancon Cedex (France)], E-mail: jegroetz@univ-fcomte.fr; Crovisier, P. [Service de Protection contre les Rayonnements, CEA Valduc, 21120 Is/Tille (France); Asselineau, B. [Laboratoire de Metrologie et de Dosimetrie des Neutrons, IRSN, Cadarache BP3, 13115 St Paul-lez-Durance (France); Truffert, H.; Cadiou, A. [AREVA NC, Etablissement de la Hague, DQSSE/PR/E/D, 50444 Beaumont-Hague Cedex (France)
2008-08-11
Proton recoil spectra were calculated for various spherical proportional counters using Monte Carlo simulation combined with the finite element method. Electric field lines and strength were calculated by defining an appropriate mesh and solving the Laplace equation with the associated boundary conditions, taking into account the geometry of every counter. Thus, different regions were defined in the counter with various coefficients for the energy deposition in the Monte Carlo transport code MCNPX. Results from the calculations are in good agreement with measurements for three different gas pressures at various neutron energies.
Neutron flux density calculations by Monte Carlo code for double-heterogeneity fuel
International Nuclear Information System (INIS)
Gurevich, M.I.; Brizgalov, V.I.
1994-01-01
This document provides a calculation technique for fuel elements consisting of one substance as a matrix with grains of another substance embedded in it. This technique can be used in neutron flux density calculations with a universal Monte Carlo code. An estimation of the accuracy is presented as well. (authors). 6 refs., 1 fig
Monte Carlo calculation of the nuclear temperature coefficient in fast reactors
Energy Technology Data Exchange (ETDEWEB)
Matthes, W.
1974-04-15
A Monte Carlo program for the calculation of the nuclear temperature coefficient of fast reactors is described. The particular difficulties of this problem are the energy and space dependence of the cross sections and the calculation of differential effects. These difficulties are discussed in detail, and the approach chosen for their solution in this program is described. (auth)
International Nuclear Information System (INIS)
Devine, R.T.; Hsu, Hsiao-Hua
1994-01-01
The current basis for the conversion coefficients used to calibrate individual photon dosimeters in terms of dose equivalent is found in the series of papers by Grosswendt. In his calculation, the collision kerma inside the phantom is determined by calculating the energy fluence at the point of interest and applying the mass energy absorption coefficient; this approximates the local absorbed dose. Other Monte Carlo methods can be used to calculate the conversion coefficients. Rogers has calculated fluence-to-dose-equivalent conversion factors with the Electron-Gamma Shower Version 3 (EGS3) Monte Carlo program and produced results similar to Grosswendt's calculations. This paper reports on calculations of the conversion coefficients in ICRU tissue and in PMMA using the Integrated TIGER Series Version 3 (ITS3) code. A complete description of the input parameters to the program is given, and a comparison to previous results is included
International Nuclear Information System (INIS)
Craig, D.S.; Festarini, G.L.
1986-07-01
The Monte Carlo code REPC has been used to calculate resonance reaction rates for the thermal test lattices TRX-1 and MIT-4, and for the CRNL lattices ZEEP-1, 19-element UO2 and 37-element UO2. These reaction rates were used in the RAHAB cell code to calculate keff, conversion ratios and fast-fission ratios, for comparison with experimental values. The calculations used cluster geometry for the 19-, 28-, and 37-element clusters. Calculations were also made using annular representations of the clusters, for comparison of the rates with those obtained using the discrete ordinates code OZMA
Energy Technology Data Exchange (ETDEWEB)
Baltas, D; Geramani, K N; Ioannidis, G T; Kolotas, C; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Giannouli, S [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)
1999-12-31
Source anisotropy is a very important factor in brachytherapy quality assurance of high dose rate (HDR) Ir-192 afterloading stepping sources. If anisotropy is not taken into account, then the doses received by a brachytherapy patient in certain directions can be in error by a clinically significant amount. Experimental measurements of anisotropy are very labour intensive. We have shown that, within acceptable limits of accuracy, Monte Carlo integration (MCI) of a modified Sievert integral (a 3D generalisation) can provide the necessary data within a much shorter time scale than experiments can. Hence MCI can be used in routine quality assurance schedules whenever a new design of HDR or PDR Ir-192 source is used for brachytherapy afterloading. Our MCI calculation results are comparable with published experimental data and Monte Carlo simulation data for microSelectron and VariSource Ir-192 sources. We have shown not only that MCI offers advantages over alternative numerical integration methods, but also that treating filtration coefficients as radial-distance-dependent functions improves the accuracy of the Sievert integral at low energies. This paper also provides anisotropy data for three new Ir-192 sources, one for the microSelectron-HDR and two for the microSelectron-PDR, for which data are currently not available. The information we have obtained in this study can be incorporated into clinical practice.
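Monte Carlo integration of a Sievert-type integral simply averages attenuated inverse-square contributions over random positions along the active source length, instead of using a fixed quadrature. A 1-D sketch with invented filtration and geometry parameters (not a validated Ir-192 source model):

```python
import math
import random

def integrand(x, mu_t, dist):
    """Attenuated inverse-square contribution of a source element at x,
    seen from a point at perpendicular distance dist from the centre."""
    r = math.hypot(x, dist)
    return math.exp(-mu_t * r / dist) / r ** 2

def sievert_mc(length, mu_t, dist, n, rng):
    """Monte Carlo integration: average over uniform points on the source."""
    s = sum(integrand((rng.random() - 0.5) * length, mu_t, dist)
            for _ in range(n))
    return length * s / n

def sievert_quad(length, mu_t, dist, steps=20000):
    """Deterministic midpoint-rule reference value."""
    h = length / steps
    return h * sum(integrand(-length / 2 + (i + 0.5) * h, mu_t, dist)
                   for i in range(steps))

rng = random.Random(11)
mc = sievert_mc(0.35, 0.14, 2.0, 100000, rng)    # 3.5 mm source, point at 2 cm
ref = sievert_quad(0.35, 0.14, 2.0)
print(mc, ref)                                    # the two estimates agree closely
```

The appeal of MCI in the 3D generalisation is that adding encapsulation geometry or angular dependence only changes the sampled integrand, not the integration machinery.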
Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.
2008-02-01
The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce accurate dose calculations for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increase of primary fluence with increasing angle (the 'horn effect'), and electron contamination. The patient-dependent aspect of the MC dose calculation uses the multileaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to drive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth doses, dose profiles, and output factors. A 3D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and to measurements made using radiochromic film and thermoluminescent detectors (TLDs).
Postimplant Dosimetry Using a Monte Carlo Dose Calculation Engine: A New Clinical Standard
International Nuclear Information System (INIS)
Carrier, Jean-Francois; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, Andre-Guy; Vigneault, Eric; Beaulieu, Luc
2007-01-01
Purpose: To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry, and to compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are specifically investigated. Methods and Materials: An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. Results: For the clinical target volume (CTV) D90 parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the dose deposited in the CTV. This overestimation comes mainly from a combination of two effects: interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The dose deposited in the OARs is also overestimated in the clinical calculation. Conclusions: The clinical technique systematically overestimates the dose deposited in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered for establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future
Energy Technology Data Exchange (ETDEWEB)
Li, JS; Fan, J; Ma, C-M [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: To improve treatment efficiency and the capability for full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in the Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to account for the effects of the source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. The MLC leaf shape, the leaf end design and the leaf tilt that minimizes interleaf leakage, together with their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to account for beam hardening effects. The calculated results of the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and in inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors were compared with the measured ones, and the difference is within 1% for the different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements made using a diode detector and films. The dose difference is within 2% inside the field, and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2 mm criteria for all the test cases. Conclusion: The implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of the Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.
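The 2%/2 mm pass rates quoted above come from the gamma-index test, which combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D sketch on a shared grid (globally normalized; real implementations search in 2-D or 3-D with interpolation):

```python
import numpy as np

def gamma_1d(ref, meas, xs, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1-D gamma index: for each reference point, minimize
    the combined dose-difference / distance metric over all measured
    points (dose normalized to the global maximum)."""
    d_max = ref.max()
    out = np.empty_like(ref)
    for i, x in enumerate(xs):
        dose_term = (meas - ref[i]) / (dose_tol * d_max)
        dist_term = (xs - x) / dist_tol
        out[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return out

xs = np.linspace(-10.0, 10.0, 201)          # positions in mm
ref = np.exp(-xs ** 2 / 50.0)               # reference dose profile
pass_ok = (gamma_1d(ref, 1.01 * ref, xs) <= 1.0).mean()   # 1% dose error: all pass
pass_bad = (gamma_1d(ref, 1.10 * ref, xs) <= 1.0).mean()  # 10% error: peak fails
print(pass_ok, pass_bad)
```

A point passes (gamma <= 1) if some nearby measured point is simultaneously close enough in dose and in position, which is why a pure 1% dose scaling passes everywhere while a 10% scaling fails around the peak.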
International Nuclear Information System (INIS)
Li, JS; Fan, J; Ma, C-M
2015-01-01
Purpose: To improve treatment efficiency and the capability for full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in the Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to account for the effects of the source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. The MLC leaf shape, the leaf end design and the leaf tilt that minimizes interleaf leakage, together with their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to account for beam hardening effects. The calculated results of the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and in inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors were compared with the measured ones, and the difference is within 1% for the different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements made using a diode detector and films. The dose difference is within 2% inside the field, and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2 mm criteria for all the test cases. Conclusion: The implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of the Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.
Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei
2014-12-16
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general-purpose calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed with the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed with the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and by MC were compared. For the 3DCRT and IMRT plans, the mean dose differences for the GTV between CCC and MC increased with decreasing GTV volume, and the mean dose differences for IMRT were found to be higher than those for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For the 3DCRT plans, when the volume of the GTV was greater than 100 cm(3), the mean doses calculated by CCC and MC showed almost no difference, whereas PBC showed large deviations from MC. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately
Non-periodic pseudo-random numbers used in Monte Carlo calculations
Barberis, Gaston E.
2007-09-01
The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for both simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.
Non-periodic pseudo-random numbers used in Monte Carlo calculations
International Nuclear Information System (INIS)
Barberis, Gaston E.
2007-01-01
The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for both simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.
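As a hedged illustration of the approach described in the two records above (the function name and seed are my own, not the author's recipe), the r = 4 logistic map can be iterated deterministically, and its iterates mapped to an approximately uniform distribution using the map's known invariant density:

```python
import math

def logistic_prng(seed=0.3141592653589793, n=10):
    """Generate n pseudo-random numbers from the chaotic logistic map
    x_{k+1} = 4 x_k (1 - x_k).  For r = 4 the invariant density is known,
    so y = (2/pi) * asin(sqrt(x)) maps the iterates to an approximately
    uniform distribution on (0, 1)."""
    x = seed
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)                      # chaotic iteration
        out.append((2.0 / math.pi) * math.asin(math.sqrt(x)))
    return out

# Deterministic: the same seed reproduces the same sequence exactly,
# which lets any Monte Carlo run be repeated systematically.
a = logistic_prng(seed=0.123, n=5)
b = logistic_prng(seed=0.123, n=5)
print(a == b)   # True
```

Because the sequence is fully determined by the seed, any simulation built on it can be repeated exactly, which is the reproducibility property the abstract emphasizes.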
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Calculation of pellet radial power distributions with a Monte Carlo burnup code
International Nuclear Information System (INIS)
Suzuki, Motomu; Yamamoto, Toru; Nakata, Tetsuo
2010-01-01
The Japan Nuclear Energy Safety Organization (JNES) has been working on an irradiation test program of high-burnup MOX fuel at the Halden Boiling Water Reactor (HBWR). MOX and UO2 fuel rods had been irradiated up to about 64 GWd/t (rod avg.) under a Japanese utilities research program (1st phase), and, using those fuel rods, in-situ measurement of fuel pellet centerline temperature was performed during the 2nd phase of irradiation as the JNES test program. As part of the analysis of the temperature data, power distributions in the pellet radial direction were analyzed using the Monte Carlo burnup code MVP-BURN. In addition, the results of the deterministic burnup codes SRAC and PLUTON for the same problem were compared with those of MVP-BURN to evaluate their accuracy. Burnup calculations with an assembly model were performed with MVP-BURN, and those with a pin cell model with SRAC and PLUTON. The cell pitch, and therefore the fuel-to-moderator ratio, in the pin cell calculation was determined from a comparison of neutron energy spectra with those of MVP-BURN. The pellet radial distributions of burnup and fission reaction rate at the end of the 1st phase irradiation were compared between the three codes. The MVP-BURN results show a large peaking of burnup and fission rate in the pellet outer region for both the UO2 and MOX pellets. The SRAC calculations give results very close to those of MVP-BURN. On the other hand, the PLUTON calculations show larger burnup for the UO2 pellets and lower burnup for the MOX pellets in the pellet outer region than MVP-BURN, which leads to larger fission rates for the UO2 and lower fission rates for the MOX pellets, respectively. (author)
Energy Technology Data Exchange (ETDEWEB)
Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu [Department of Physics, East Carolina University, Greenville, North Carolina 27858 (United States); Kim, Jong Oh [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania 15232 (United States); Yeo, Inhwan [Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of an electronic portal imaging device (EPID) based on effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code with density scaling of EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested with various homogeneous and heterogeneous phantoms and field sizes by comparing the model's calculations with measurements in the EPID. To better evaluate the model, the performance of the XVMC code itself was separately tested by comparing calculated dose to water with ion chamber (IC) array measurements in the plane of the EPID. Results: In the EPID plane, dose to water calculated by the code agreed with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by the proximity of the maximum points to the penumbra and by MC noise. The EPID model agreed with measured EPID images within 1.3%, with a maximum point difference of 1.9%. The difference dropped from the higher value of the bare code by employing a calibration, dependent on field size and thickness, for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend in the calibration factors, unlike the density-only-scaled model. The phase space file from the EGSnrc code sharpened the penumbra profiles significantly, improving the agreement of calculated profiles with measured ones. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, an MC model of the EPID has been developed, and its performance has been rigorously
Auxiliary-field quantum Monte Carlo calculations of molecular systems with a Gaussian basis
International Nuclear Information System (INIS)
Al-Saidi, W.A.; Zhang Shiwei; Krakauer, Henry
2006-01-01
We extend the recently introduced phaseless auxiliary-field quantum Monte Carlo (QMC) approach to any single-particle basis and apply it to molecular systems with Gaussian basis sets. QMC methods in general scale favorably with the system size as a low power. A QMC approach with auxiliary fields, in principle, allows an exact solution of the Schroedinger equation in the chosen basis. However, the well-known sign/phase problem causes the statistical noise to increase exponentially. The phaseless method controls this problem by constraining the paths in the auxiliary-field path integrals with an approximate phase condition that depends on a trial wave function. In the present calculations, the trial wave function is a single Slater determinant from a Hartree-Fock calculation. The calculated all-electron total energies show typical systematic errors of no more than a few millihartrees compared to exact results. At equilibrium geometries in the molecules we studied, this accuracy is roughly comparable to that of coupled cluster with single and double excitations and with noniterative triples [CCSD(T)]. For stretched bonds in H2O, our method exhibits a better overall accuracy and a more uniform behavior than CCSD(T).
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict the nuclear reactor physics parameters responsible for reactivity changes, which could lead to accidents and losses. In this work, preliminary results for criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core, with fuel elements, radial reflectors, and control rods, inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly and the full core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were the Doppler Temperature Coefficient, Coolant Temperature Coefficient, Coolant Void Coefficient, Power Coefficient, and Control Rod Worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used for more complex studies. The criticality coefficient values for the three simulations showed little discrepancy for almost all coefficients investigated; the only exception was the Power Coefficient. These preliminary results show that a simple model, such as a fuel assembly, can describe the changes in almost all of the criticality coefficients, avoiding the need for a complex core simulation. (author)
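As an illustration of how such coefficients are typically extracted from a pair of criticality runs (the k_eff values and temperatures below are hypothetical, not taken from the paper), a temperature coefficient follows from the reactivity difference between two perturbed states. Note that in a real MCNPX workflow the statistical uncertainty of each k_eff must also be propagated; this sketch omits that.

```python
def reactivity(k):
    """Reactivity in pcm from a multiplication factor k: rho = (k-1)/k."""
    return (k - 1.0) / k * 1.0e5

def temperature_coefficient(k1, T1, k2, T2):
    """Reactivity coefficient (pcm/K) from two criticality runs at two
    temperatures: alpha = (rho2 - rho1)/(T2 - T1), which reduces to
    (k2 - k1)/(k1 * k2 * dT)."""
    return (reactivity(k2) - reactivity(k1)) / (T2 - T1)

# Hypothetical k_eff pair bracketing a fuel-temperature perturbation:
alpha = temperature_coefficient(1.00000, 900.0, 0.99700, 1200.0)
print(round(alpha, 3))   # -1.003 pcm/K, i.e. negative Doppler feedback
```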
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from an analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid, each element of which corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines themselves were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights correcting for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes; the comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Directory of Open Access Journals (Sweden)
Obioma Nwankwo
Full Text Available To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from an analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid, each element of which corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines themselves were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights correcting for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes; the comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
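The inverse transform sampling step used in the VSM can be sketched for a tabulated PDF. The function names, the bin structure, and the toy 3-bin energy histogram below are illustrative assumptions, not the authors' implementation:

```python
import bisect
import random

def build_inverse_cdf(bin_edges, pdf_values):
    """Tabulate the CDF of a histogram PDF (e.g. particle energies derived
    from a phase space file) and return a sampler that draws from it by
    inverse transform: find the bin containing a uniform deviate u, then
    interpolate linearly inside that bin."""
    widths = [b - a for a, b in zip(bin_edges, bin_edges[1:])]
    masses = [w * p for w, p in zip(widths, pdf_values)]
    total = sum(masses)
    cdf, acc = [], 0.0
    for m in masses:
        acc += m / total
        cdf.append(acc)
    def sample(rng=random):
        u = rng.random()
        i = bisect.bisect_left(cdf, u)        # CDF bin containing u
        lo = cdf[i - 1] if i > 0 else 0.0
        frac = (u - lo) / (cdf[i] - lo)       # position inside the bin
        return bin_edges[i] + frac * widths[i]
    return sample

# Hypothetical 3-bin energy histogram (MeV) with relative densities:
sample = build_inverse_cdf([0.0, 2.0, 4.0, 6.0], [0.5, 0.3, 0.2])
random.seed(1)
xs = [sample() for _ in range(10000)]
print(all(0.0 <= x <= 6.0 for x in xs))   # True
```

Histogram interpolation is the simplest choice; smoother PDFs would need a finer table or an analytic inverse.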
Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses
Whelan, G.
2002-05-01
Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases so that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, so that each model consumes information from upstream models and produces information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example that allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform with an open-architecture, object-oriented design that interacts with environmental databases; helps the user construct a Conceptual Site Model that is real-world based; allows the user to choose the most appropriate models for the simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and provides graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system containing models developed by others. This paper presents the use of FRAMES to evaluate potential human health exposures, using real site data and realistic assumptions, from sources through the vadose and saturated zones to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis. A predictive analysis is one in which models are calibrated to monitored site data prior to the assessment, and a comparative analysis is one in which models are not calibrated but
Application of Monte Carlo method for dose calculation in thyroid follicle
International Nuclear Information System (INIS)
Silva, Frank Sinatra Gomes da
2008-02-01
The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. The principal advantage of the method, compared with deterministic methods, is its ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they are able to simulate energy deposition in models of organs and/or tissues, as well as in models of the cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance in dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of a nuclear accident. The goal of this work was therefore to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and short-lived iodines (131, 132, 133, 134 and 135), with follicle diameters varying from 30 to 500 μm. The results obtained from the simulation with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid was due to iodine-131 and 75% to the short-lived iodines. For the follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, such as Auger and internal conversion electrons, should not be neglected in assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained with the codes MCNP4C, EPOTRAN and EGS4 and with deterministic methods. (author)
Data base to compare calculations and observations
International Nuclear Information System (INIS)
Tichler, J.L.
1985-01-01
Meteorological and climatological databases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)
2011-07-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor, using the adjoint function obtained from a deterministic calculation to bias the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect the source distribution convergence of the system. However, since the code lacked speed optimisations, the higher CPU time cost prevented us from demonstrating a corresponding increase in the overall efficiency of the calculation. (author)
International Nuclear Information System (INIS)
Christoforou, Stavros; Hoogenboom, J. Eduard
2011-01-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor, using the adjoint function obtained from a deterministic calculation to bias the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect the source distribution convergence of the system. However, since the code lacked speed optimisations, the higher CPU time cost prevented us from demonstrating a corresponding increase in the overall efficiency of the calculation. (author)
Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units
International Nuclear Information System (INIS)
Choi, Sung Hoon; Joo, Han Gyu
2009-01-01
Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron history one by one, it requires very long computing times if enough neutrons are used for high calculational precision. Accordingly, methods that reduce the computing time are required. A Monte Carlo code is well suited to parallel calculation, since it simulates the behavior of each neutron independently, so parallel computation is natural. The parallelization of Monte Carlo codes, however, has traditionally been done with multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs becomes available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to program the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is presented.
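The history-level independence that makes Monte Carlo parallelize so naturally can be sketched with a toy model (here with CPU processes standing in for GPU threads; the absorber-only slab problem, batch sizes, and seeds are illustrative assumptions):

```python
import random
from multiprocessing import Pool

def transmission_batch(args):
    """Simulate a batch of neutron histories through a slab of given
    thickness (in mean free paths) in a purely absorbing medium, and
    count histories transmitted without collision: the free-flight
    length is sampled from an exponential distribution."""
    seed, n_histories, slab_mfp = args
    rng = random.Random(seed)                 # independent stream per batch
    transmitted = 0
    for _ in range(n_histories):
        if rng.expovariate(1.0) > slab_mfp:   # free path exceeds the slab
            transmitted += 1
    return transmitted

if __name__ == "__main__":
    # Histories are independent, so batches run in parallel trivially:
    batches = [(seed, 50000, 1.0) for seed in range(8)]
    with Pool(4) as pool:
        counts = pool.map(transmission_batch, batches)
    estimate = sum(counts) / (8 * 50000)
    print(abs(estimate - 0.3679) < 0.01)      # close to exp(-1)
```

The same decomposition, one independent random stream per worker, is what a CUDA kernel exploits, with threads instead of processes.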
Continuous energy Monte Carlo method based homogenization multi-group constants calculation
International Nuclear Information System (INIS)
Li Mancang; Wang Kan; Yao Dong
2012-01-01
The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties of geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, making Monte Carlo codes versatile tools for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and its Legendre components, however, require special techniques; the scattering event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. BN theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motifs is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to complicated nuclear reactor core configurations to gain higher accuracy. (authors)
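The track length scheme referred to above can be sketched as a flux-weighted collapse of a pointwise cross section onto energy groups, Sigma_g = sum(sigma(E)·l) / sum(l) over tracks whose energy falls in group g. The function names, the toy 1/sqrt(E) cross section, and the synthetic tracks are assumptions for illustration only:

```python
import random

def collapse_group_xs(tracks, group_edges, sigma_t):
    """Track-length estimate of multigroup cross sections.
    `tracks` is a list of (energy, track_length) pairs from a transport
    simulation, `sigma_t` a pointwise cross-section function of energy.
    Returns Sigma_g = sum(sigma_t(E) * l) / sum(l) per group."""
    n_g = len(group_edges) - 1
    flux = [0.0] * n_g          # sum of track lengths per group
    rate = [0.0] * n_g          # sum of sigma_t(E) * track length per group
    for energy, length in tracks:
        for g in range(n_g):
            if group_edges[g] <= energy < group_edges[g + 1]:
                flux[g] += length
                rate[g] += sigma_t(energy) * length
                break
    return [r / f if f > 0.0 else 0.0 for r, f in zip(rate, flux)]

# Toy 1/v-like cross section and synthetic tracks (illustration only):
sigma_t = lambda E: 1.0 / max(E, 1e-6) ** 0.5
random.seed(7)
tracks = [(random.uniform(0.01, 10.0), random.uniform(0.1, 2.0))
          for _ in range(5000)]
xs = collapse_group_xs(tracks, [0.01, 1.0, 10.0], sigma_t)
print(len(xs) == 2 and xs[0] > xs[1])   # True: 1/v gives a larger thermal XS
```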
TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks
Energy Technology Data Exchange (ETDEWEB)
French, S; Nazareth, D [Roswell Park Cancer Institute, Buffalo, NY (United States); Bellor, M [Lockheed Martin, Manassas, VA (United States)
2016-06-15
Purpose: Secondary MU checks are an important tool used during the physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code: distributed computing resources and optimized code compilation, which together allow accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high-performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools, which indicated the behavior of the constituent routines in the code, e.g., the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8 - 10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved compared with the stock open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1998-01-01
Although the Monte Carlo method is considered to be the most accurate method available for solving radiation transport problems, its applicability is limited by its computational expense. Thus, biasing techniques, which require intuition, guesswork, and iterations involving manual adjustments, are employed to make reactor shielding calculations feasible. To overcome this difficulty, the authors have developed a method for using the S_N adjoint function for automated variance reduction of Monte Carlo calculations, through source biasing and consistent transport biasing with the weight window technique. They describe the implementation of this method in the standard production Monte Carlo code MCNP and its application to a realistic calculation, namely, a reactor cavity dosimetry calculation. The computational effectiveness of the method, as measured by the increase in calculational efficiency, is demonstrated and quantified. Important issues associated with this method and its efficient use are addressed and analyzed. Additional benefits, in terms of the reduction in the time and effort required of the user, are difficult to quantify but are possibly as important as the computational efficiency. In general, the automated variance reduction method presented is capable of increasing computational performance by factors on the order of thousands, while at the same time significantly reducing the requirements for user experience, time, and effort. Therefore, this method can substantially increase the applicability and reliability of Monte Carlo for large, real-world shielding applications.
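The weight window technique mentioned above combines splitting and Russian roulette around a target weight interval. The following is a generic sketch of that mechanism, not MCNP's exact implementation (where the window bounds per cell and energy come from the S_N adjoint function); the function name and example window are assumptions:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Weight-window check used in consistent transport biasing:
    particles above the window are split into equal fragments, particles
    below it play Russian roulette (survivors absorb the lost weight so
    the expected weight is conserved), and particles inside the window
    pass through unchanged.  Returns the list of surviving weights."""
    if weight > w_high:                       # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # roulette light particles
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:
            return [w_survive]                # survivor, boosted weight
        return []                             # killed
    return [weight]                           # inside the window

print(apply_weight_window(1.0, 0.25, 0.5))    # split into 3 equal fragments
print(apply_weight_window(0.4, 0.25, 0.5))    # [0.4] unchanged
```

The key invariant is that both branches preserve expected weight, so the tally means are unbiased while the weight spread (and hence the variance) is controlled.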
International Nuclear Information System (INIS)
Hsu, H.H.; Dowdy, E.J.; Estes, G.P.; Lucas, M.C.; Mack, J.M.; Moss, C.E.; Hamm, M.E.
1983-01-01
Monte Carlo calculations of a bismuth-germanate scintillator's efficiency agree closely with experimental measurements. For this comparison, we studied the absolute gamma-ray photopeak efficiency of a scintillator (7.62 cm long by 7.62 cm in diameter) at several gamma-ray energies from 166 to 2615 keV at distances from 0.5 to 152.4 cm. Computer calculations were done in a two-dimensional cylindrical geometry with the Monte Carlo coupled photon-electron code CYLTRAN. For the experiment we measured 11 sources with simple spectra and precisely known strengths. The average deviation between the calculations and the measurements is 3%. Our calculated results also closely agree with recently published calculated results
Energy Technology Data Exchange (ETDEWEB)
Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)
2002-07-01
Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not of ideal shape, it is not easy to calculate the effective dose in the patient accurately. Many methods have been proposed to solve the inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil beam kernel based convolution/superposition methods have also been proposed to correct for these effects, and many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system by comparison with Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations, the pencil beam kernel based convolution algorithm is considered a valuable tool for dose calculation.
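The pencil beam kernel convolution discussed above can be sketched in one dimension: the dose profile is the incident fluence convolved with a scatter kernel, D(i) = sum_j F(j)·K(i-j). The triangular kernel and field values below are illustrative assumptions; the sketch also shows why the method struggles with inhomogeneities, since the kernel is spatially invariant:

```python
def pencil_beam_dose(fluence, kernel):
    """1D pencil-beam convolution: dose[i] = sum over kernel offsets of
    fluence at the shifted position times the kernel weight.  Valid for
    a homogeneous medium; inhomogeneities would require a spatially
    varying (density-scaled) kernel, which is where the approximation
    breaks down."""
    half = len(kernel) // 2
    dose = [0.0] * len(fluence)
    for i in range(len(fluence)):
        for j, k in enumerate(kernel):
            src = i + j - half
            if 0 <= src < len(fluence):
                dose[i] += fluence[src] * k
    return dose

# Uniform 10-pixel field with margins, normalized triangular kernel:
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]
field = [0.0] * 3 + [1.0] * 10 + [0.0] * 3
dose = pencil_beam_dose(field, kernel)
print(round(max(dose), 6))   # 1.0 in the flat central region
```

Because the kernel integrates to one, the total dose equals the total fluence, and the field edges acquire a penumbra whose width is set by the kernel.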
A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
Energy Technology Data Exchange (ETDEWEB)
Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology
2010-02-15
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized, and a description of what a user should provide in order to use it. (orig.)
Development of the Monte Carlo method for the calculation of ...
African Journals Online (AJOL)
In this paper we first show the interest of heterostructures, then the need for a numerical method, in particular Monte Carlo, to calculate electric transport in semiconductors. We also justify the composition of our ternary semiconductor AlxGa1-xAs. Afterwards, we give the principle and the ...
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h^-1 per Bq kg^-1) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, calculate the absorbed dose rate in air due to the unscattered radiation accurately. For the total radiation (unscattered plus scattered) the D_CF values calculated by the three codes are in very good agreement with each other. The comparison between these results and the results deduced previously by other authors indicates a good ag...
A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
International Nuclear Information System (INIS)
Alioli, Simone; Nason, Paolo; Oleari, Carlo; Re, Emanuele
2010-02-01
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.)
International Nuclear Information System (INIS)
Zazula, J.M.
1983-01-01
The general purpose code BALTORO was written for coupling three-dimensional Monte Carlo (MC) with one-dimensional Discrete Ordinates (DO) radiation transport calculations. The quantity of a radiation-induced (neutron or gamma-ray) nuclear effect, or the score from a radiation-yielding nuclear effect, can be analysed in this way. (author)
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.
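The weight-window scheme referred to above combines splitting of heavy particles with Russian roulette of light ones, both of which preserve the expected total weight. A toy sketch (hedged: the window bounds and survival weight below are invented, and the real MCNP-style algorithm has additional details):

```python
import random

def apply_weight_window(w, w_low=0.25, w_high=4.0, w_surv=1.0, rng=random):
    """Toy weight window. Returns the list of particle weights that continue
    the random walk; expectation of the total weight equals the input weight."""
    if w > w_high:                     # split heavy particles
        n = int(w / w_high) + 1
        return [w / n] * n             # n copies, weight exactly conserved
    if w < w_low:                      # Russian roulette for light particles
        if rng.random() < w / w_surv:  # survive with probability w / w_surv
            return [w_surv]
        return []                      # killed
    return [w]                         # inside the window: unchanged

# Expectation check for roulette: many trials of a very light particle.
rng = random.Random(1)
w_in = 0.05
trials = 200_000
total = sum(sum(apply_weight_window(w_in, rng=rng)) for _ in range(trials))
mean_out = total / trials              # should approach w_in
```

The point of placing such windows cell by cell ('manually', as in the abstract) is to keep the particle population roughly constant in importance, so variance is spent where it matters.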
Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method for calculating whole-body counting efficiencies for different body radiation distributions is presented. An analytical simulator (for man and for child) incorporating 99m Tc, 131 I and 42 K is used. (M.A.C.) [pt
Energy Technology Data Exchange (ETDEWEB)
Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2016-08-15
points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method (DPM) Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
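The gamma criterion used above can be made concrete with a minimal 1D global gamma-index sketch (the profiles below are hypothetical; the paper's analysis is film versus calculation in 2D):

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit, dta_crit):
    """Global 1D gamma index: dose_crit is a fraction of the maximum reference
    dose, dta_crit is the distance-to-agreement criterion (same units as x)."""
    dd = dose_crit * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        term = ((dose_eval - di) / dd) ** 2 + ((x - xi) / dta_crit) ** 2
        gammas[i] = np.sqrt(term.min())   # minimise over all evaluated points
    return gammas

x = np.linspace(0.0, 10.0, 201)                         # position (cm)
ref = np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)             # reference profile
ev = 1.01 * np.exp(-0.5 * ((x - 5.05) / 1.5) ** 2)      # 1% scale, 0.5 mm shift
g = gamma_1d(x, ref, ev, dose_crit=0.03, dta_crit=0.2)  # 3%, 2 mm criterion
pass_rate = float(np.mean(g <= 1.0))
```

A point passes when some evaluated point lies within the combined dose/distance ellipsoid, i.e. gamma <= 1; the pass rate is the fraction of reference points that pass.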
DETEF a Monte Carlo system for the calculation of gamma spectrometers efficiency
International Nuclear Information System (INIS)
Cornejo, N.; Mann, G.
1996-01-01
The Monte Carlo program DETEF calculates the efficiency of cylindrical NaI, CsI, Ge or Si detectors for photon energies up to 2 MeV and several sample geometries. The sources can be point, plane, cylindrical or rectangular. The energy spectrum appears on the screen simultaneously with the statistical simulation. The calculated and experimentally estimated efficiencies coincide well within the standard deviation intervals.
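The first ingredient of such an efficiency calculation is geometric: the fraction of isotropically emitted photons that enter the detector at all. A toy sketch for a point source on the detector axis, checked against the closed-form solid-angle fraction (a full DETEF-like code would additionally transport the photon inside the crystal):

```python
import math, random

def mc_geometric_efficiency(d, radius, n, rng):
    """MC estimate of the fraction of isotropic emissions that hit the front
    face of a cylindrical detector; point source on the axis at distance d."""
    hits = 0
    for _ in range(n):
        cos_t = 2.0 * rng.random() - 1.0        # isotropic polar angle
        if cos_t <= 0.0:
            continue                            # emitted away from the detector
        tan_t = math.sqrt(1.0 - cos_t * cos_t) / cos_t
        if d * tan_t <= radius:                 # ray lands inside the face
            hits += 1
    return hits / n

rng = random.Random(42)
d, R = 5.0, 2.5                                 # cm, assumed geometry
eff_mc = mc_geometric_efficiency(d, R, 200_000, rng)
eff_exact = 0.5 * (1.0 - d / math.sqrt(d * d + R * R))   # solid-angle fraction
```

By azimuthal symmetry only the polar angle needs sampling; the MC estimate should agree with the analytic fraction within statistics.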
International Nuclear Information System (INIS)
Vergara Gil, Alex; Torres Aroche, Leonel A; Coca Péreza, Marco A; Pacilio, Massimiliano; Botta, Francesca; Cremonesi, Marta
2016-01-01
Aim: In this work, a new software tool (named MCID) for calculating patient-specific absorbed dose in molecular radiotherapy, based on Monte Carlo simulation, is presented. Materials & Methods: The inputs for MCID are two co-registered medical images containing anatomical (CT) and functional (PET or SPECT) information on the patient. The anatomical image is converted to a density map, and tissue segmentation is provided considering compositions and densities from ICRU 44 and ICRP; the functional image provides the cumulative activity map at voxel level (figure 1). MCID creates an input file for Monte Carlo (MC) codes such as MCNP5 and GATE, and converts the MC outputs into an absorbed dose image. Results: The developed tool allows estimating dose distributions for non-uniform activity distributions and non-homogeneous tissues. It includes tools for delineation of volumes of interest and for dosimetric data analysis. Procedures to decrease the calculation time are implemented in order to allow its use in clinical settings. Dose-volume histograms are computed and presented from the obtained dosimetric maps, as well as dose statistics such as mean, minimum and maximum dose values; the results can be saved in common medical image formats (Interfile, DICOM, Analyze, MetaImage). MCID was validated by comparing estimated dose values against reference data, such as gold standard phantoms (OLINDA's spheres) and other MC simulations of non-homogeneous phantoms. Good agreement was obtained for spheres ranging from 1 g to 1 kg in mass and for non-homogeneous phantoms. Clinical studies were also examined. Dosimetric evaluations in patients undergoing 153Sm-EDTMP therapy for osseous metastases showed non-significant differences from calculations performed by traditional methods. The possibility of creating input files for simulations using the GATE code has increased the MCID applications and improved its functionality. Different clinical situations including PET and SPECT
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics
International Nuclear Information System (INIS)
Seker, V.; Thomas, J. W.; Downar, T. J.
2007-01-01
The interest in high fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low cost parallel platforms. In the research here, high fidelity, multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and by solving the thermal-hydraulics equations using computational fluid dynamics. A computational tool based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the PERL programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all handled through the program flow in McSTAR. Initial testing of the code was performed using a single pin-cell and a 3×3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De
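The neutronics/thermal-hydraulics exchange described above is, at its core, a fixed-point (Picard) iteration: the power solve feeds the temperature solve and vice versa until both stop changing. A toy loop with invented feedback physics (the linear feedback coefficient and thermal resistance below are assumptions, standing in for MCNP5 and STAR-CD solves):

```python
# Toy Picard coupling loop in the spirit of the MCNP5 <-> STAR-CD exchange.
def neutronics(T):                 # stand-in for a Monte Carlo power solve
    return 100.0 * (1.0 - 1.0e-4 * (T - 300.0))   # W, assumed feedback

def thermal(P):                    # stand-in for a CFD temperature solve
    return 300.0 + 2.0 * P                        # K, assumed resistance

T, P = 300.0, 0.0
for it in range(100):
    P_new = neutronics(T)          # update power at the current temperature
    converged = abs(P_new - P) < 1e-8
    P = P_new
    T = thermal(P)                 # update temperature at the new power
    if converged:
        break
```

Because the negative temperature feedback makes the composed map a strong contraction here, the loop settles to the self-consistent power/temperature pair in a handful of iterations.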
Energy Technology Data Exchange (ETDEWEB)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation)
2016-09-15
Burn-up calculation of large systems by a Monte-Carlo code (MCU) is a complex process requiring large computational costs. Previously prepared isotopic compositions are proposed for use in Monte-Carlo calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method based on the usage of a spectral functionality and reference isotopic compositions calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). In this work, the multiplication factors and power distributions of FAs from a 3-D reactor core are calculated by the Monte-Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are examined. The results of the MCU calculations were compared with those obtained by the engineering codes.
Automated-biasing approach to Monte Carlo shipping-cask calculations
International Nuclear Information System (INIS)
Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.
1982-01-01
Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system
Monte Carlo simulation for theoretical calculations of damage and sputtering processes
International Nuclear Information System (INIS)
Yamamura, Yasunori
1984-01-01
The radiation damage accompanying ion irradiation and the various problems it causes should in principle be determined by solving Boltzmann's equations. In reality, however, those for a semi-infinite system cannot generally be solved, and the effects of crystals, oblique incidence and so on make the situation more difficult. The analysis of the complicated collision phenomena in solids, and of the accompanying problems of radiation damage and sputtering, is possible in most cases only by computer simulation. At present, the methods of simulating atomic collision phenomena in solids are roughly classified into the molecular dynamics method and the Monte Carlo method. In molecular dynamics, Newton's equations are numerically integrated time-dependently as they are; this has the great merit that many-body and nonlinear effects can be taken into consideration, but much computing time is required. The features and problems of Monte Carlo simulation and nonlinear Monte Carlo simulation are described. The Monte Carlo simulation codes based on the two-body collision approximation, MARLOWE, TRIM and ACAT, were compared through calculation of the backscattering spectra of light ions. (Kako, I.)
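The two-body (binary) collision approximation these codes share can be caricatured in a few lines: the ion flies a random free path, continuously loses energy to electrons, then suffers a discrete nuclear collision that reduces its energy and deflects it, until it stops. All parameters below are invented for illustration and are not fitted to any ion/target combination, unlike TRIM's physical cross sections:

```python
import math, random

def toy_bca_range(e0, rng, e_stop=10.0, lam=1.0, k_el=0.5):
    """Very simplified binary-collision MC slowing-down of one ion.

    e0: initial energy (eV); lam: mean free path between nuclear collisions
    (nm); k_el: electronic stopping power (eV/nm). Returns projected range."""
    e, depth, cos_t = e0, 0.0, 1.0
    while e > e_stop:
        step = -lam * math.log(rng.random())   # exponential free flight
        e -= k_el * step                       # continuous electronic loss
        depth += step * cos_t                  # projected penetration depth
        if e <= e_stop:
            break
        e *= rng.uniform(0.5, 1.0)             # nuclear collision energy loss
        cos_t = rng.uniform(0.0, 1.0)          # crude forward re-deflection
    return depth

rng = random.Random(7)
mean_range_hi = sum(toy_bca_range(1000.0, rng) for _ in range(5000)) / 5000
mean_range_lo = sum(toy_bca_range(200.0, rng) for _ in range(5000)) / 5000
```

Averaging many histories gives range (and, with bookkeeping of recoils, damage) distributions; higher initial energy yields a larger mean projected range, as expected.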
Development of M3C code for Monte Carlo reactor physics criticality calculations
International Nuclear Information System (INIS)
Kumar, Anek; Kannan, Umasankari; Krishanani, P.D.
2015-06-01
The development of the Monte Carlo code M3C for reactor design entails the use of continuous energy nuclear data and Monte Carlo simulation of each of the neutron interaction processes. BARC has started a concentrated effort to indigenously develop a new general geometry continuous energy Monte Carlo code for reactor physics calculations. The code development required a comprehensive understanding of the basic continuous energy cross section sets. The important features of this code are the treatment of heterogeneous lattices by general geometry, the use of point cross sections along with a unionized energy grid approach, a thermal scattering model for low energy treatment, and the capability of handling microscopic fuel particles dispersed randomly. This last capability is very useful for modelling High-Temperature Gas-Cooled reactor fuels, which are composed of thousands of microscopic fuel particles (TRISO fuel particles) randomly dispersed in a graphite matrix. The Monte Carlo code for criticality calculation is a pioneering effort and has been used to study several types of lattices including cluster geometries. The code has been verified for its accuracy against more than 60 sample problems covering a wide range from simple (like spherical) to complex geometry (like the PHWR lattice). Benchmark results show that the code performs quite well for criticality calculations. In this report, the current status of the code, its features, some of the benchmark results for testing the code, input preparation etc. are discussed. (author)
International Nuclear Information System (INIS)
Bécares, V.; Pérez-Martín, S.; Vázquez-Antolín, M.; Villamarín, D.; Martín-Fuertes, F.; González-Romero, E.M.; Merino, I.
2014-01-01
Highlights: • Review of several Monte Carlo effective delayed neutron fraction calculation methods. • These methods have been implemented with the Monte Carlo code MCNPX. • They have been benchmarked against some critical and subcritical systems. • Several nuclear data libraries have been used. - Abstract: The calculation of the effective delayed neutron fraction, β_eff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for β_eff without the need of explicitly determining the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of β_eff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of the uncertainty in nuclear data on the calculated value of β_eff
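One widely used form of the k-eigenvalue technique (the "prompt k" estimator; whether it matches either of the paper's two variants exactly is not stated in this abstract) needs only two ordinary criticality runs, one with and one without delayed-neutron production, and propagates their statistical uncertainties:

```python
import math

# beta_eff ~= 1 - k_p / k, where k is the ordinary eigenvalue and k_p is
# recomputed with delayed-neutron production switched off. The k values and
# uncertainties below are invented illustration numbers, not from the paper.
def beta_eff_prompt_k(k, k_p, sig_k=0.0, sig_kp=0.0):
    """Estimate beta_eff and its statistical uncertainty from two MC runs."""
    beta = 1.0 - k_p / k
    sig = (k_p / k) * math.sqrt((sig_kp / k_p) ** 2 + (sig_k / k) ** 2)
    return beta, sig

beta, sig = beta_eff_prompt_k(1.00000, 0.99325, sig_k=2e-5, sig_kp=2e-5)
# beta is ~675 pcm for these assumed inputs
```

Note the practical catch the abstract alludes to: β_eff is a small difference of two near-unity numbers, so both runs need tight statistics for a usable relative uncertainty.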
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2004-01-01
A new algorithm for implementing Wielandt's method, one of the acceleration techniques for deterministic source iteration methods, in Monte Carlo criticality calculations is developed, and the algorithm has been successfully implemented into the MCNP code. In this algorithm, part of the fission neutrons emitted during the random walk process is tracked within the current cycle, so that the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely-coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. Computing time spent per cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually results in an increase of the total computing time up to convergence. In addition, the statistical fluctuations of the fission source distribution in a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
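The effect of the shift can be illustrated deterministically with a small fission-matrix analogue (matrix values invented): the shifted operator M' = (I - M/k_e)^(-1) M has the same eigenvectors as M but a smaller dominance ratio, so the source (dominant eigenvector) converges in far fewer power iterations, mirroring the fewer cycles reported above:

```python
import numpy as np

def power_iterations(M, tol=1e-10, max_it=10_000):
    """Number of power iterations until the normalised vector stops moving."""
    x = np.ones(M.shape[0])
    x /= np.linalg.norm(x)
    for it in range(1, max_it + 1):
        y = M @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return it
        x = y
    return max_it

# Symmetric toy "fission matrix" with a slow dominance ratio 0.98/1.00.
rng = np.random.default_rng(0)
q0, _ = np.linalg.qr(rng.standard_normal((12, 12)))
eigs = np.concatenate(([1.0, 0.98], np.linspace(0.1, 0.5, 10)))
M = q0 @ np.diag(eigs) @ q0.T

k_e = 1.05                                          # shift just above k1 = 1.0
M_shift = np.linalg.solve(np.eye(12) - M / k_e, M)  # Wielandt-shifted operator

n_plain = power_iterations(M)
n_wielandt = power_iterations(M_shift)              # far fewer iterations
```

The shift maps each eigenvalue λ to λ/(1 - λ/k_e), stretching the gap between the first two modes; the Monte Carlo analogue pays for this with more work per cycle, as the abstract notes.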
Monte Carlo calculations of lung dose in ORNL phantom for boron neutron capture therapy
International Nuclear Information System (INIS)
Krstic, D.; Markovic, V.M.; Jovanovic, Z.; Milenkovic, B.; Nikezic, D.; Atanackovic, J.
2014-01-01
Monte Carlo simulations were performed to evaluate dose for possible treatment of cancers by boron neutron capture therapy (BNCT). The computational model of the male Oak Ridge National Laboratory (ORNL) phantom was used to simulate tumours in the lung. Calculations were performed by means of the MCNP5/X code. In this simulation, two opposite neutron beams were considered, in order to obtain a uniform neutron flux distribution inside the lung. The obtained results indicate that lung cancer could be treated by BNCT under the assumptions of the calculations. The difference in evaluated dose between cancerous and normal lung tissue suggests that BNCT could be applied for the treatment of cancers; since a difference in exposure of cancerous and healthy tissue can be achieved, the healthy tissue can be spared from damage. The absorbed dose ratio of metastatic to healthy tissue was ∼5, and the absorbed dose to all other organs was low compared with the lung dose. The absorbed dose depth distribution shows that BNCT can be very useful in the treatment of tumours. The ratio of the tumour absorbed dose to the irradiated healthy tissue absorbed dose was also ∼5. It was seen that an elliptical neutron field was the better irradiation choice. (authors)
Energy Technology Data Exchange (ETDEWEB)
Gomes B, W. O., E-mail: wilsonottobatista@gmail.com [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho 40301-015, Salvador de Bahia (Brazil)
2016-10-15
This study aimed to develop an irradiation geometry applicable to the PCXMC software and the consequent calculation of effective dose in Cone Beam Computed Tomography (CBCT) applications. We evaluated different CBCT units for dental applications: the Carestream CS 9000 3D tomograph, the Classical i-CAT and the GENDEX GXCB-500. Initially we characterized each protocol by measuring the entrance surface kerma and the air kerma-area product, P_KA, with RADCAL solid state detectors and a PTW transmission chamber. Then we introduced the technical parameters of each preset protocol and the geometric conditions into the PCXMC software to obtain the values of effective dose. The calculated effective dose is within the range of 9.0 to 15.7 μSv for the CS 9000 3D unit, within the range 44.5 to 89 μSv for the GXCB-500 unit, and in the range of 62-111 μSv for the Classical i-CAT unit. These values were compared with dosimetry results obtained using TLDs implanted in an anthropomorphic phantom and are considered consistent. The effective dose results are very sensitive to the irradiation geometry (beam position in the mathematical phantom), which is a fragility of the software usage, but it is very useful for obtaining quick answers regarding protocol optimization. We conclude that the PCXMC Monte Carlo simulation software is useful for assessing protocols for CBCT examinations in dental applications. (Author)
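The last step of a PCXMC-style calculation is purely arithmetic: the Monte Carlo organ (equivalent) doses are combined into effective dose as E = Σ_T w_T H_T. A sketch with ICRP 103-style tissue weighting factors for a few head/neck-relevant tissues (the organ doses are invented illustration values, and with only a subset of tissues the sum is a partial effective dose):

```python
# Tissue weighting factors w_T (ICRP 103 values for the tissues listed).
W_T = {"red bone marrow": 0.12, "thyroid": 0.04, "salivary glands": 0.01,
       "brain": 0.01, "skin": 0.01, "remainder": 0.12}

# Organ equivalent doses H_T in uSv -- assumed numbers for illustration,
# standing in for the Monte Carlo output of a dental CBCT protocol.
H_T = {"red bone marrow": 20.0, "thyroid": 150.0, "salivary glands": 300.0,
       "brain": 40.0, "skin": 25.0, "remainder": 30.0}

E = sum(W_T[t] * H_T[t] for t in W_T)   # partial effective dose, uSv
```

This weighting is why effective dose is so sensitive to beam position, as noted above: moving the beam relative to the phantom shifts dose between organs with very different w_T.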
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.
International Nuclear Information System (INIS)
Kruijf, W.J.M. de; Janssen, A.J.
1994-01-01
Very accurate Monte Carlo calculations have been performed to serve as a reference for benchmark calculations on resonance absorption by 238U in a typical PWR pin-cell geometry. Calculations show that the energy-pointwise slowing-down code calculates the resonance absorption accurately. Calculations with the multigroup discrete ordinates code XSDRN show that accurate results can only be achieved with a very fine energy mesh. (authors). 9 refs., 5 figs., 2 tabs
International Nuclear Information System (INIS)
Moriarty, K.J.M.; Blackshaw, J.E.
1983-01-01
The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.)
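The quantity being measured, the average action per plaquette, can be demonstrated with a Metropolis simulation of a far simpler gauge group than SU(6)/Z6: Z2 gauge theory in two dimensions, where the exact answer ⟨P⟩ = tanh β is known. This is a pedagogical stand-in, not the paper's algorithm:

```python
import math, random

def plaquette(U, x, y, L):
    """Product of the four Z2 links around the plaquette based at (x, y)."""
    return (U[x][y][0] * U[(x + 1) % L][y][1]
            * U[x][(y + 1) % L][0] * U[x][y][1])

def metropolis_sweep(U, beta, L, rng):
    for x in range(L):
        for y in range(L):
            for mu in (0, 1):
                # the two plaquettes (in 2D) containing link (x, y, mu)
                if mu == 0:
                    p = plaquette(U, x, y, L) + plaquette(U, x, (y - 1) % L, L)
                else:
                    p = plaquette(U, x, y, L) + plaquette(U, (x - 1) % L, y, L)
                d_action = 2.0 * beta * p      # S = -beta * sum(plaquettes)
                if d_action <= 0 or rng.random() < math.exp(-d_action):
                    U[x][y][mu] *= -1          # accept the link flip

def avg_plaquette(U, L):
    return sum(plaquette(U, x, y, L) for x in range(L) for y in range(L)) / L**2

L, beta = 8, 1.0
rng = random.Random(3)
U = [[[1, 1] for _ in range(L)] for _ in range(L)]   # cold start, all links +1
for _ in range(100):                                 # thermalisation
    metropolis_sweep(U, beta, L, rng)
measure = []
for _ in range(200):                                 # measurement sweeps
    metropolis_sweep(U, beta, L, rng)
    measure.append(avg_plaquette(U, L))
mean_p = sum(measure) / len(measure)                 # should approach tanh(beta)
```

Flipping one link flips the sign of exactly the two plaquettes that contain it, which is why the action change reduces to 2β times their current sum.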
International Nuclear Information System (INIS)
Allam, Kh. A.
2017-01-01
In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The model evaluates the external dose in a tunnel of cylindrical shape and finite thickness, with an entrance and with or without an exit. A photon transport model was applied for exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel and with building material density and composition was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. (authors)
New sampling method in continuous energy Monte Carlo calculation for pebble bed reactors
International Nuclear Information System (INIS)
Murata, Isao; Takahashi, Akito; Mori, Takamasa; Nakagawa, Masayuki.
1997-01-01
A pebble bed reactor generally has double heterogeneity consisting of two kinds of spherical fuel element. In the core, many fuel balls are piled up randomly at a high packing fraction, and each fuel ball contains a lot of small fuel particles, which are also distributed randomly. In this study, to realize precise neutron transport calculation of such reactors with the continuous energy Monte Carlo method, a new sampling method has been developed. The new method has been implemented in the general purpose Monte Carlo code MCNP to develop a modified version, MCNP-BALL. This method was validated by calculating the inventory of spherical fuel elements arranged successively by sampling during transport calculation, and also by performing criticality calculations in ordered packing models. From the results, it was confirmed that the inventory of spherical fuel elements could be reproduced using MCNP-BALL within a sufficient accuracy of 0.2%. The comparison of criticality calculations in ordered packing models between MCNP-BALL and the reference method shows excellent agreement in neutron spectrum as well as multiplication factor. MCNP-BALL enables us to analyze pebble bed type cores such as PROTEUS precisely with the continuous energy Monte Carlo method. (author)
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is quite unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With the speed-up expected from vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
Energy Technology Data Exchange (ETDEWEB)
Boudou, C
2006-09-15
High grade gliomas are extremely aggressive brain tumours. Specific techniques combining the presence of high atomic number elements within the tumour with irradiation by a low-energy x-ray beam (below 100 keV) from a synchrotron source have been proposed. With a view to clinical trials, the use of a treatment planning system has to be foreseen, as well as tailored dosimetry protocols. The objectives of this thesis work were (1) the development of a dose calculation tool based on a Monte Carlo particle transport code and (2) the implementation of an experimental method for the three-dimensional verification of the dose delivered. The dosimetric tool is an interface between tomography images of the patient or sample and the MCNPX general purpose code. Besides, dose distributions were measured with a radiosensitive polymer gel, providing acceptable results compared to calculations.
Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation
International Nuclear Information System (INIS)
Beekman, F.J.
1999-01-01
Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P which is based on the non-uniform object. The transform of P_SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections. One is based on the uniform object (P_u) and the other on the object with non-uniformities (P_v). P is estimated by P̃ = P_SDSE P_v / P_u. A tremendous decrease in noise in P̃ is achieved by tracking photon paths for P_v identical to those which were tracked for the calculation of P_u, and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99m Tc and 201 Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P̃ and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for application in accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory involved with previously proposed 3D model-based scatter correction methods. (author)
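The key trick above, tracking identical photon paths for P_u and P_v so the noise cancels in the ratio P_v/P_u, is correlated sampling. A toy demonstration with two scalar integrands standing in for the two first-order scatter projections (the integrands are invented; only the variance-cancellation mechanism is the point):

```python
import math, random, statistics

def f_u(x):  return math.exp(-x)          # "uniform object" integrand (toy)
def f_v(x):  return math.exp(-1.1 * x)    # "non-uniform" perturbation (toy)

def ratio_correlated(n, rng):
    xs = [rng.random() for _ in range(n)]
    s_u = sum(f_u(x) for x in xs)
    s_v = sum(f_v(x) for x in xs)         # SAME samples -> noise cancels
    return s_v / s_u

def ratio_independent(n, rng):
    s_u = sum(f_u(rng.random()) for _ in range(n))
    s_v = sum(f_v(rng.random()) for _ in range(n))
    return s_v / s_u

rng = random.Random(0)
corr = [ratio_correlated(200, rng) for _ in range(300)]
indep = [ratio_independent(200, rng) for _ in range(300)]
sd_corr = statistics.stdev(corr)
sd_indep = statistics.stdev(indep)        # expect sd_corr << sd_indep
```

Because the two estimates share every sample, their fluctuations are almost perfectly correlated and the ratio estimator is far less noisy than one built from independent runs of the same size.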
Head-and-neck IMRT treatments assessed with a Monte Carlo dose calculation engine
International Nuclear Information System (INIS)
Seco, J; Adams, E; Bidmead, M; Partridge, M; Verhaegen, F
2005-01-01
IMRT is frequently used in the head-and-neck region, which contains materials of widely differing densities (soft tissue, bone, air cavities). Conventional methods of dose computation for these complex, inhomogeneous IMRT cases involve significant approximations. In the present work, a methodology for the development, commissioning and implementation of a Monte Carlo (MC) dose calculation engine for intensity modulated radiotherapy (MC-IMRT) is proposed which can be used by radiotherapy centres interested in developing MC-IMRT capabilities for research or clinical evaluations. The method proposes three levels for developing, commissioning and maintaining a MC-IMRT dose calculation engine: (a) development of a MC model of the linear accelerator, (b) validation of the MC model for IMRT and (c) periodic quality assurance (QA) of the MC-IMRT system. The first step, level (a), in developing an MC-IMRT system is to build a model of the linac that correctly predicts standard open field measurements for percentage depth-dose and off-axis ratios. Validation of MC-IMRT, level (b), can be performed in a Rando phantom and in a homogeneous water-equivalent phantom. Ultimately, periodic quality assurance of the MC-IMRT system, level (c), is needed to verify the MC-IMRT dose calculation system. Once the MC-IMRT dose calculation system is commissioned it can be applied to more complex clinical IMRT treatments. The MC-IMRT system implemented at the Royal Marsden Hospital was used for IMRT calculations for a patient undergoing treatment for primary disease with nodal involvement in the head-and-neck region (primary treated to 65 Gy and nodes to 54 Gy), while sparing the spinal cord, brain stem and parotid glands. Preliminary MC results predict a decrease of approximately 1-2 Gy in the median dose of both the primary tumour and nodal volumes (compared with both pencil beam and collapsed cone). This is possibly due to the large air cavity (the larynx of the patient) situated in the centre
Quantum Monte Carlo calculations of van der Waals interactions between aromatic benzene rings
Azadi, Sam; Kühne, T. D.
2018-05-01
The magnitude of finite-size effects and Coulomb interactions in quantum Monte Carlo simulations of van der Waals interactions between weakly bonded benzene molecules is investigated. To that end, two trial wave functions of the Slater-Jastrow and Backflow-Slater-Jastrow types are employed to calculate the energy-volume equation of state. We assess the impact of the backflow coordinate transformation on the nonlocal correlation energy. We found that the effect of finite-size errors in quantum Monte Carlo calculations on energy differences is particularly large and may even be more important than the employed trial wave function. In addition to the cohesive energy, the singlet excitonic energy gap and the energy gap renormalization of crystalline benzene at different densities are computed.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
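The matrix-product instability that the abstract refers to can be illustrated with a minimal QR restabilization sketch. This is a generic numerical-linear-algebra illustration of the idea (maintaining the product as Q·diag(d)·T and re-orthogonalizing after each factor), not the authors' canonical-ensemble algorithm; the test matrices are hypothetical:

```python
import numpy as np

def stabilized_product_logsv(mats):
    """Accumulate the product mats[-1] @ ... @ mats[0] as Q @ diag(d) @ T,
    re-orthogonalizing with QR after every factor so that widely separated
    scales never mix in floating point. Returns the log-magnitudes of the
    diagonal scales (approximate log singular values), sorted descending."""
    n = mats[0].shape[0]
    Q = np.eye(n)
    logd = np.zeros(n)      # logs of the diagonal scale factors
    T = np.eye(n)           # well-conditioned triangular remainder
    for B in mats:
        m = logd.max()                      # factor out the largest scale
        C = (B @ Q) * np.exp(logd - m)      # columns scaled by relative d
        Q, R = np.linalg.qr(C)
        d_new = np.abs(np.diag(R))
        T = (R / d_new[:, None]) @ T        # absorb the unit-scale part
        logd = np.log(d_new) + m
    return np.sort(logd)[::-1]

# 400 factors of diag(2, 0.5): the product's singular values span ~480
# orders of magnitude, far beyond what a naive float64 product can separate.
mats = [np.diag([2.0, 0.5])] * 400
logs = stabilized_product_logsv(mats)
# analytic log singular values: +400*ln(2) and -400*ln(2)
```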
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
International Nuclear Information System (INIS)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.
2014-08-01
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the Material routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a voxel frontier is considered only when the material changes along the photon's path. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the Material routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a voxel frontier is considered only when the material changes along the photon's path. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
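The Woodcock (delta-tracking) scheme mentioned in both records can be sketched in a few lines: sample free paths from a majorant cross section and accept a real collision with probability sigma(x)/sigma_max, so voxel boundaries never have to be crossed explicitly. The two-slab geometry, cross-section values and absorption-only collisions below are illustrative assumptions, not the CUBMC implementation:

```python
import numpy as np

def sigma(x):
    """Total cross section (1/cm) in a hypothetical two-slab phantom."""
    if 0.0 <= x < 2.0:
        return 0.5   # dense slab
    if 2.0 <= x < 4.0:
        return 0.2   # light slab
    return 0.0

def transmission_woodcock(n, rng, sigma_max=0.5, depth=4.0):
    """Estimate uncollided transmission with Woodcock (delta) tracking:
    path lengths are sampled from the majorant sigma_max, and a sampled
    collision point is accepted as real with probability sigma/sigma_max;
    rejected ('virtual') collisions just continue the flight."""
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -np.log(rng.random()) / sigma_max   # majorant free path
            if x >= depth:
                transmitted += 1                     # left the phantom
                break
            if rng.random() < sigma(x) / sigma_max:  # real collision
                break                                # treated as absorption
    return transmitted / n

rng = np.random.default_rng(0)
est = transmission_woodcock(200_000, rng)
# analytic uncollided transmission: exp(-0.5*2 - 0.2*2) = exp(-1.4)
```

The estimator is unbiased regardless of where the material boundaries fall, which is why delta tracking avoids the per-voxel boundary stops of the first approach.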
Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
International Nuclear Information System (INIS)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
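The distance-to-collision biasing that underlies the exponential transform can be sketched in one dimension. This is an absorption-only illustration with assumed parameters (slab thickness, stretching parameter), showing only the path-length biasing and likelihood-ratio weight, not the anisotropic scattering-kernel biasing developed in the paper:

```python
import numpy as np

def transmission_biased(n, rng, sig=1.0, depth=15.0, p=0.9):
    """Exponential-transform estimate of the deep-penetration transmission
    exp(-sig*depth): stretch the distance-to-collision pdf from
    sig*exp(-sig*s) to (1-p)*sig*exp(-(1-p)*sig*s) and carry the
    likelihood-ratio weight, so far more histories reach the deep region."""
    s = rng.exponential(1.0 / ((1.0 - p) * sig), n)   # stretched free paths
    w = np.exp(-p * sig * s) / (1.0 - p)              # analog/biased pdf ratio
    return np.mean(w * (s > depth))

rng = np.random.default_rng(1)
est = transmission_biased(100_000, rng)
# analog answer exp(-15) ~ 3.1e-7: an unbiased analog run of the same size
# would score essentially zero hits, while the biased run resolves it.
```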
The study of importance sampling in Monte-carlo calculation of blocking dips
International Nuclear Information System (INIS)
Pan Zhengying; Zhou Peng
1988-01-01
Angular blocking dips around the axis for α-particles of about 2 MeV produced at a depth of 0.2 μm in an Al single crystal are calculated by Monte Carlo simulation. The influence of the small solid angle of particle emission and of importance sampling over the emission solid angle has been investigated. By means of importance sampling, more reasonable results with high accuracy are obtained
Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael
2014-05-01
Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.
International Nuclear Information System (INIS)
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang
2008-01-01
This article presents a brachytherapy source having ¹⁰³Pd adsorbed onto a cylindrical silver rod that has been developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed using TLD-GR200A circular chip dosimeters, following standard thermoluminescent dosimetry methods in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing for transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model ¹⁰³Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-¹⁰³Pd source in water was found to be 0.678 cGy h⁻¹ U⁻¹ with an approximate uncertainty of ±0.1%. The anisotropy function, F(r,θ), and the radial dose function, g(r), of the IRA-¹⁰³Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method
International Nuclear Information System (INIS)
Abella, V.; Miro, R.; Juste, B.; Verdu, G.
2010-01-01
The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MatLab program and the simulation of the irradiation of such a phantom with the photon beam generated in a Theratron 780 (MDS Nordion) ⁶⁰Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project results in 3D dose mapping calculations inside the voxelized anthropomorphic head phantom. The program provides the voxelization by first processing the CT slices; the process follows a two-dimensional pixel and material identification algorithm on each slice and three-dimensional interpolation in order to describe the phantom geometry via small cubic cells, resulting in an MCNP input deck format output. Dose rates are calculated by using the MCNP5 tool FMESH, a superimposed mesh tally, which gives the track-length estimation of the particle flux in units of particles/cm². Furthermore, the particle flux is converted into dose by using the conversion coefficients extracted from the NIST Physical Reference Data. The voxelization using a three-dimensional interpolation technique in combination with the use of the FMESH tool of the MCNP Monte Carlo code offers an optimal simulation which results in 3D dose mapping calculations inside anthropomorphic phantoms. This tool is very useful in radiation treatment assessments, in which voxelized phantoms are widely utilized.
Directory of Open Access Journals (Sweden)
Kępisty Grzegorz
2015-09-01
In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, slope model, bridge scheme and stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high-temperature gas-cooled reactor (HTGR) system with and without modelling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
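The difference between a staircase (beginning-of-step) model and a bridge-like predictor-corrector scheme can be illustrated on a toy depletion equation. The flux feedback below is hypothetical and stands in for the coupled transport solution; the point is only that averaging beginning- and end-of-step reaction rates sharply reduces the time-step error:

```python
import numpy as np

def dNdt(N, sig=1.0, phi0=1.0, N0=1.0):
    """Toy depletion rate with a flux that rises as fuel burns out
    (hypothetical normalization feedback, for illustration only)."""
    phi = phi0 * (2.0 - N / N0)
    return -sig * phi * N

def staircase(N, t_end, steps):
    """Staircase model: flux frozen at beginning-of-step (explicit Euler)."""
    h = t_end / steps
    for _ in range(steps):
        N = N + h * dNdt(N)
    return N

def predictor_corrector(N, t_end, steps):
    """Bridge-scheme-like predictor-corrector: average beginning-of-step
    and predicted end-of-step reaction rates (Heun's method)."""
    h = t_end / steps
    for _ in range(steps):
        k1 = dNdt(N)
        k2 = dNdt(N + h * k1)
        N = N + 0.5 * h * (k1 + k2)
    return N

ref = staircase(1.0, 1.0, 200_000)               # fine-step reference
err_stair = abs(staircase(1.0, 1.0, 10) - ref)   # 10 coarse burnup steps
err_pc = abs(predictor_corrector(1.0, 1.0, 10) - ref)
```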
International Nuclear Information System (INIS)
Gu, J.; George Xu, X.; Caracappa, P. F.; Liu, B.
2013-01-01
To investigate the radiation dose to the fetus using retrospective tube current modulation (TCM) data selected from archived clinical records. This paper describes the calculation of fetal doses using retrospective TCM data and Monte Carlo (MC) simulations. Three TCM schemes were adopted for use with three pregnant patient phantoms. MC simulations were used to model CT scanners, TCM schemes and pregnant patients. Comparisons between organ doses from TCM schemes and those from non-TCM schemes show that these three TCM schemes reduced fetal doses by 14, 18 and 25 %, respectively. These organ doses were also compared with those from the ImPACT calculation. It is found that the difference between the calculated fetal dose and the ImPACT reported dose is as high as 46 %. This work demonstrates methods to study organ doses from various TCM protocols and potential ways to improve the accuracy of CT dose calculation for pregnant patients. (authors)
Monte Carlo dose calculation improvements for low energy electron beams using eMC
International Nuclear Information System (INIS)
Fix, Michael K; Frei, Daniel; Volken, Werner; Born, Ernst J; Manser, Peter; Neuenschwander, Hans
2010-01-01
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm with applicators ranging from 6 x 6 to 25 x 25 cm² of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d_max and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm² at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose calculation
Monte Carlo dose calculation improvements for low energy electron beams using eMC.
Fix, Michael K; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J; Manser, Peter
2010-08-21
The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm with applicators ranging from 6 x 6 to 25 x 25 cm(2) of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d(max) and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm(2) at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose
International Nuclear Information System (INIS)
Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.
1996-03-01
High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), each consisting of a microsphere of low-enriched UO₂ with coating layers to prevent fission product release. Many such spherical fuels are distributed randomly in the cores. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an SN transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the necessary probability distributions, which are used in the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. Using this code, various statistical quantities are obtained for random packing as well as ordered close packing of FCC and BCC, namely the Radial Distribution Function (RDF), the Nearest Neighbor Distribution (NND), the 2-dimensional RDF and so on. (author)
Comparative Study of Daylighting Calculation Methods
Directory of Open Access Journals (Sweden)
Mandala Ariani
2018-01-01
The aim of this study is to assess five daylighting calculation methods commonly used in architectural studies. The methods include hand calculations (the SNI/DPMB method and BRE daylighting protractors), scale models studied under an artificial sky simulator, and computer simulations using the DIALux and Velux lighting software. The test room is conditioned by uniform sky conditions and a simple room geometry, with variations in room reflectance (black, grey, and white surfaces). The analysis compares the results (including daylight factor, illuminance, and coefficient-of-uniformity values) and examines their similarities and differences. The reflectance variations are used to analyse the contribution of the internally reflected component to the result.
Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans
International Nuclear Information System (INIS)
Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A
2005-01-01
The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al., and it was validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors, while adequate coverage of the PTV was maintained when RSE were incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of implementing the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors can be essential for correct identification of the value and position of the hot spot.
Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans
Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.
2005-02-01
The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al., and it was validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors, while adequate coverage of the PTV was maintained when RSE were incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of implementing the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors can be essential for correct identification of the value and position of the hot spot.
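The core of the fluence-convolution idea — smearing the planned fluence with the random set-up error distribution before computing dose — can be sketched in one dimension. The grid, field width and Gaussian kernel below are idealized assumptions, not the BEAMnrc/DOSXYZnrc implementation:

```python
import numpy as np

def gaussian_kernel(sigma_mm, dx_mm, nsig=4):
    """Discrete Gaussian describing the random set-up error distribution."""
    m = int(np.ceil(nsig * sigma_mm / dx_mm))
    x = np.arange(-m, m + 1) * dx_mm
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / k.sum()                  # normalize so fluence is conserved

def blur_fluence(fluence, sigma_mm, dx_mm):
    """Fluence-convolution: smear the planned fluence with the set-up
    error kernel instead of simulating many randomly shifted fractions."""
    return np.convolve(fluence, gaussian_kernel(sigma_mm, dx_mm), mode="same")

dx = 1.0                                   # 1 mm calculation grid
x = np.arange(-50, 51) * dx
fluence = (np.abs(x) <= 30).astype(float)  # idealized 6 cm open field
blurred = blur_fluence(fluence, 2.0, dx)   # sigma = 2 mm, as in the study
```

The blurred profile keeps the central fluence and total fluence intact while broadening the penumbra, which is exactly the qualitative effect the study reports on the dose-volume histograms.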
TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors
Energy Technology Data Exchange (ETDEWEB)
Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)
2015-06-15
Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo-based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.
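The final mapping from a segmented photograph to the density grid used by the Monte Carlo calculation can be sketched as follows. The threshold and density values are illustrative assumptions (not the authors' calibrated pipeline), and the perspective correction and blob detection steps are assumed to have already produced a normalized grayscale image:

```python
import numpy as np

def cutout_to_density(image, threshold=0.5, rho_block=9.4, rho_air=0.0012):
    """Map a perspective-corrected, normalized grayscale photo of the cutout
    to a 2D density grid (g/cm^3) for the MC voxel geometry: bright pixels
    are taken as the open aperture (air), dark pixels as cutout material
    (Cerrobend-like alloy). Threshold and densities are assumptions."""
    aperture = image >= threshold
    return np.where(aperture, rho_air, rho_block)

# synthetic segmented photo: a 4x4-pixel aperture in a 10x10 cutout
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
density = cutout_to_density(img)
```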
Directory of Open Access Journals (Sweden)
P Shokrani
2009-10-01
Introduction & Objective: Brachytherapy using I-125 radioactive seeds in removable episcleral plaques (EP) is often used in treatment of ocular malignant melanoma. The radioactive seeds are fixed in a gold bowl-shaped plaque. The plaque is sutured to the sclera surface corresponding to the base of the intraocular tumor, allowing for a localized radiation dose delivery to the tumor. Minimum target doses as high as 85 Gy are delivered to the malignant tumor. The aim of this study was to develop a Monte Carlo simulation of an ocular plaque in order to calculate the resulting isodose distributions. Materials & Methods: The MCNP-4C Monte Carlo code is used to simulate the plan of an episcleral plaque treatment. A 20-mm Collaborative Ocular Melanoma Study (COMS) plaque with 3 I-125 seeds of model 6711 was used. Resulting dose distributions, including central-axis dose and off-axis dose profiles, were calculated in a water phantom with a 12 mm radius. The calculated dose distributions were compared to the corresponding doses measured by Knuten et al. (2001). Results: Central-axis dose calculations show a rapid dose fall-off, which is an important factor in the selection of an appropriate eye plaque for management of tumors of known dimensions. Calculated off-axis dose profiles show decreased dose uniformity at distances close to the plaque; dose uniformity increases with distance from the plaque. Conclusion: Monte Carlo simulation of eye plaques can be used as a useful tool in the design, development and treatment planning of ocular radioactive plaques.
Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods
Alexander, Steven; Coldwell, R. L.
2015-03-01
The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.
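The variational Monte Carlo procedure described above can be sketched for the nonrelativistic hydrogen ground state, where the trial function psi = exp(-alpha*r) has the closed-form local energy E_L = -alpha^2/2 + (alpha - 1)/r; the relativistic case in the poster replaces this with the Dirac Hamiltonian. Step size and sample counts are illustrative choices:

```python
import numpy as np

def vmc_hydrogen(alpha, n_steps=50_000, step=1.0, seed=2):
    """Variational Monte Carlo energy (hartree) for hydrogen with trial
    psi = exp(-alpha*r). Metropolis sampling of |psi|^2; alpha = 1 is the
    exact ground state, so its local energy is -0.5 with zero variance."""
    rng = np.random.default_rng(seed)
    pos = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(pos)
    energies = []
    for i in range(n_steps):
        trial = pos + step * rng.uniform(-1.0, 1.0, 3)
        r_t = np.linalg.norm(trial)
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
        if rng.random() < np.exp(-2.0 * alpha * (r_t - r)):
            pos, r = trial, r_t
        if i > 1000:                        # discard equilibration steps
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)
    return np.mean(energies)

e_exact = vmc_hydrogen(1.0)   # exact trial function: -0.5 hartree exactly
e_sub = vmc_hydrogen(0.8)     # analytic expectation: alpha^2/2 - alpha = -0.48
```

Transition matrix elements follow the same pattern: the sampled configurations are reused to accumulate ratios of matrix-element integrands against the sampling distribution.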
Calculation of neutron detection efficiency for the thick lithium glass using Monte Carlo method
International Nuclear Information System (INIS)
Tang Guoyou; Bao Shanglian; Li Yulin; Zhong Wenguan
1989-08-01
The neutron detection efficiencies of a NE912 (45 mm in diameter, 9.55 mm in thickness) and 2 pieces of ST601 (40 mm in diameter, 3 and 10 mm in thickness respectively) lithium glasses have been calculated with a Monte Carlo computer code. The energy range of the calculation is 10 keV to 2.0 MeV. The effect of time delay caused by neutron multiple scattering in the detectors on the prompt neutron detection efficiency has been considered
Energy-depth relation of electrons in bulk targets by Monte-Carlo calculations
International Nuclear Information System (INIS)
Gaber, M.; Fitting, H.J.
1984-01-01
Monte-Carlo calculations are used to calculate the energy of penetrating electrons as a function of the depth in thick targets of Ti, Fe, Cu, As, In, and Au. It is shown that the mean energy ratio Ē(z)/E₀ decays exponentially with depth z and depends on the backscattering coefficient η_B of the bulk material and the maximum range R(E₀) of the primary electrons with initial energy E₀. Thereby a normalized plot of Ē/E₀ as a function of the reduced depth z/R becomes possible. (author)
Monte Carlo calculations of triton and 4He nuclei with the Reid potential
International Nuclear Information System (INIS)
Lomnitz-Adler, J.; Pandharipande, V.R.; Smith, R.A.
1981-01-01
A Monte Carlo method is developed to calculate the binding energy and density distribution of the ³H and ⁴He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 ± 0.08 and -22.9 ± 0.5 MeV respectively. The Coulomb interaction in ⁴He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (orig.)
Monte-Carlo calculations of light nuclei with the Reid potential
Energy Technology Data Exchange (ETDEWEB)
Lomnitz-Adler, J. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-01-01
A Monte-Carlo method is developed to calculate the binding energy and density distribution of the ³H and ⁴He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 ± 0.08 and -22.9 ± 0.5 MeV respectively. The Coulomb interaction in ⁴He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center.
Monte-Carlo calculations of light nuclei with the Reid potential
International Nuclear Information System (INIS)
Lomnitz-Adler, J.
1981-01-01
A Monte-Carlo method is developed to calculate the binding energy and density distribution of the ³H and ⁴He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 ± 0.08 and -22.9 ± 0.5 MeV, respectively. The Coulomb interaction in ⁴He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (author)
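The variational upper-bound idea behind these calculations can be illustrated with a toy one-dimensional example. This is not the nuclear calculation of the papers (which use correlated many-body wave functions and the Reid potential); it is a minimal Metropolis sketch showing that the Monte Carlo energy estimate bounds the exact ground-state energy from above.

```python
import random, math

def vmc_energy(alpha, n_samples=20000, seed=1):
    """Toy variational Monte Carlo: 1-D harmonic oscillator (hbar = m =
    omega = 1) with trial wave function psi(x) = exp(-alpha x^2).
    Metropolis sampling of |psi|^2 and averaging of the local energy
    give a variational upper bound on the exact ground-state energy 0.5."""
    rng = random.Random(seed)
    x, energy = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + rng.uniform(-1.0, 1.0)
        # Metropolis acceptance with probability |psi(x_new)/psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)
        energy += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return energy / n_samples

# alpha = 0.5 is the exact ground state: E_L is constant, zero variance
```

Any other alpha gives a strictly larger average, which is the upper-bound property the quoted triton and ⁴He energies rely on.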
Applying graphics processor units to Monte Carlo dose calculation in radiation therapy
Directory of Open Access Journals (Sweden)
Bakhtiari M
2010-01-01
We investigate the potential of using a graphics processor unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration of the MC calculation and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
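A minimal CPU version of the kind of MC depth-dose kernel described above might look as follows; the attenuation coefficient and geometry are illustrative, and scatter is deliberately ignored. Because each photon history is independent, the same loop maps naturally onto one GPU thread per history.

```python
import random, math

def percent_depth_dose(mu, depth_max, n_bins=10, n_photons=50000, seed=7):
    """Minimal CPU Monte Carlo sketch of a percent depth dose curve for a
    pencil beam in a homogeneous absorber: each photon travels an
    exponentially distributed path (attenuation coefficient mu, 1/cm) and
    deposits all its energy at the first interaction site. Scatter is
    ignored, so the resulting PDD is simply exponential; mu and the
    geometry are illustrative, not the coefficients used in the paper."""
    rng = random.Random(seed)
    dose = [0.0] * n_bins
    bin_w = depth_max / n_bins
    for _ in range(n_photons):
        depth = -math.log(rng.random()) / mu  # sample the free path length
        if depth < depth_max:
            dose[int(depth / bin_w)] += 1.0
    peak = max(dose)
    return [100.0 * d / peak for d in dose]

pdd = percent_depth_dose(mu=0.2, depth_max=20.0)
```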
Application of inactive cycle stopping criteria for Monte Carlo Wielandt calculations
International Nuclear Information System (INIS)
Shim, H. J.; Kim, C. H.
2009-01-01
The Wielandt method is incorporated into Monte Carlo (MC) eigenvalue calculations as a way to speed up fission source convergence. To make the most of the MC Wielandt method, however, it is highly desirable to halt the inactive cycle runs in a timely manner, because executing a single-cycle MC run takes much longer than in conventional MC eigenvalue calculations. This paper presents an algorithm to detect the onset of the active cycles and thereby stop the inactive cycle MC runs automatically, based on two anterior stopping criteria. The effectiveness of the algorithm is demonstrated by applying it to a slow-convergence problem. (authors)
Comparative Criticality Analysis of Two Monte Carlo Codes on Centrifugal Atomizer: MCNP5 and SCALE
International Nuclear Information System (INIS)
Kang, H-S; Jang, M-S; Kim, S-R; Park, J-M; Kim, K-N
2015-01-01
There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analyses for licensing evaluations and is used widely around the world. We performed a validation test of MCNP5 and a comparative analysis of the two Monte Carlo codes, MCNP5 and SCALE, in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing an uncertainty analysis.
Comparative Criticality Analysis of Two Monte Carlo Codes on Centrifugal Atomizer: MCNP5 and SCALE
Energy Technology Data Exchange (ETDEWEB)
Kang, H-S; Jang, M-S; Kim, S-R [NESS, Daejeon (Korea, Republic of); Park, J-M; Kim, K-N [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analyses for licensing evaluations and is used widely around the world. We performed a validation test of MCNP5 and a comparative analysis of the two Monte Carlo codes, MCNP5 and SCALE, in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing an uncertainty analysis.
Energy Technology Data Exchange (ETDEWEB)
Belicev, P [Vojnotehnicki Inst., Belgrade (Yugoslavia)
1988-07-01
An outline of the problems encountered in the multigroup calculations of the neutron transport in the resonance region is given. The difference between subgroup and multigroup approximation is described briefly. The features of the Monte Carlo code SUBGR are presented. The results of the calculations of the neutron transmission and albedo for infinite iron slabs are given. (author)
Monte Carlo based electron treatment planning and cutout output factor calculations
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
Energy Technology Data Exchange (ETDEWEB)
Tippayakul, C.; Ivanov, K. [Pennsylvania State Univ., Univ. Park (United States); Misu, S. [AREVA NP GmbH, An AREVA and SIEMENS Company, Erlangen (Germany)
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k{sub inf}, fission rate distributions and isotopic contents. (authors)
Hunt, J G; da Silva, F C A; Mauricio, C L P; dos Santos, D S
2004-01-01
The Monte Carlo program 'Visual Monte Carlo-dose calculation' (VMC-dc) uses a voxel phantom to simulate the body organs and tissues, transports photons through this phantom and reports the absorbed dose received by each organ and tissue relevant to the calculation of effective dose as defined in ICRP Publication 60. This paper shows the validation of VMC-dc by comparison with EGSnrc and with a physical phantom containing TLDs. The validation of VMC-dc by comparison with EGSnrc was made for a collimated beam of 0.662 MeV photons irradiating a cube of water. For the validation by comparison with the physical phantom, the case considered was a whole body irradiation with a point 137Cs source placed at a distance of 1 m from the thorax of an Alderson-RANDO phantom. The validation results show good agreement for the doses obtained using VMC-dc and EGSnrc calculations, and from VMC-dc and TLD measurements. The program VMC-dc was then applied to the calculation of doses due to immersion in water containing gamma emitters. The dose conversion coefficients for water immersion are compared with their equivalents in the literature.
International Nuclear Information System (INIS)
Hunt, J. G.; Da Silva, F. C. A.; Mauricio, C. L. P.; Dos Santos, D. S.
2004-01-01
The Monte Carlo program 'Visual Monte Carlo-dose calculation' (VMC-dc) uses a voxel phantom to simulate the body organs and tissues, transports photons through this phantom and reports the absorbed dose received by each organ and tissue relevant to the calculation of effective dose as defined in ICRP Publication 60. This paper shows the validation of VMC-dc by comparison with EGSnrc and with a physical phantom containing TLDs. The validation of VMC-dc by comparison with EGSnrc was made for a collimated beam of 0.662 MeV photons irradiating a cube of water. For the validation by comparison with the physical phantom, the case considered was a whole body irradiation with a point ¹³⁷Cs source placed at a distance of 1 m from the thorax of an Alderson-RANDO phantom. The validation results show good agreement for the doses obtained using VMC-dc and EGSnrc calculations, and from VMC-dc and TLD measurements. The program VMC-dc was then applied to the calculation of doses due to immersion in water containing gamma emitters. The dose conversion coefficients for water immersion are compared with their equivalents in the literature. (authors)
International Nuclear Information System (INIS)
Dumonteil, E.; Diop, C.M.
2011-01-01
External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without a proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not have benchmark data. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest, like k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, we have to quantify and correct this bias. This will be achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
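The nonlinearity bias described above is Jensen's inequality at work: an unbiased but noisy reaction-rate estimate, pushed through the exponential solution of the Bateman equations, yields a biased nuclide density. A hypothetical one-nuclide sketch (illustrative numbers, not from the paper) makes it visible:

```python
import random, math

def depletion_bias_demo(sigma_phi_mean, noise, dt, n=100000, seed=5):
    """Jensen-inequality illustration of the transport/depletion coupling
    bias: for one nuclide, N(t+dt) = N(t) * exp(-sigma*phi*dt). Feeding a
    noisy (but unbiased) Monte Carlo estimate of sigma*phi into the
    exponential gives E[exp(-x dt)] > exp(-E[x] dt), i.e. a biased
    depletion step. All numbers are illustrative placeholders."""
    rng = random.Random(seed)
    est = sum(math.exp(-(sigma_phi_mean + rng.gauss(0.0, noise)) * dt)
              for _ in range(n)) / n
    true = math.exp(-sigma_phi_mean * dt)
    return est, true

est, true = depletion_bias_demo(1.0, 0.5, 1.0)
# est exceeds `true` because exp is convex in the noisy rate
```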
International Nuclear Information System (INIS)
Kawrakow, I.; Bielajew, A.F.
1998-01-01
A new representation of elastic electron-nucleus (Coulomb) multiple-scattering distributions is developed. Using the screened Rutherford cross section with the Moliere screening parameter as an example, a simple analytic angular transformation of the Goudsmit-Saunderson multiple-scattering distribution accounts for most of the structure of the angular distribution, leaving a residual 3-parameter (path-length, transformed angle and screening parameter) function that is reasonably slowly varying and suitable for rapid, accurate interpolation in a computer-intensive algorithm. The residual function is calculated numerically for a wide range of Moliere screening parameters and path-lengths suitable for use in a general-purpose condensed-history Monte Carlo code. Additionally, techniques are developed that allow the distributions to be scaled to account for energy loss. This new representation allows 'on-the-fly' sampling of Goudsmit-Saunderson angular distributions in a screened Rutherford approximation suitable for class II condensed-history Monte Carlo codes. (orig.)
Magnetism of iron and nickel from rotationally invariant Hirsch-Fye quantum Monte Carlo calculations
Belozerov, A. S.; Leonov, I.; Anisimov, V. I.
2013-03-01
We present a rotationally invariant Hirsch-Fye quantum Monte Carlo algorithm in which the spin rotational invariance of Hund's exchange is approximated by averaging over all possible directions of the spin quantization axis. We employ this technique to perform benchmark calculations for the two- and three-band Hubbard models on the infinite-dimensional Bethe lattice. Our results agree quantitatively well with those obtained using the continuous-time quantum Monte Carlo method with rotationally invariant Coulomb interaction. The proposed approach is employed to compute the electronic and magnetic properties of paramagnetic α iron and nickel. The obtained Curie temperatures agree well with experiment. Our results indicate that the magnetic transition temperature is significantly overestimated by using the density-density type of Coulomb interaction.
Two- and three-nucleon chiral interactions in quantum Monte Carlo calculations for nuclear physics
Energy Technology Data Exchange (ETDEWEB)
Lynn, Joel [Institut fuer Kernphysik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany); Tews, Ingo [Institute for Nuclear Theory, University of Washington, Seattle, Washington 98195 (United States); Carlson, Joseph; Gandolfi, Stefano [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Gezerlis, Alexandros [Department of Physics, University of Guelph, Guelph, Ontario, N1G 2W1 (Canada); Schmidt, Kevin [Department of Physics, Arizona State University, Tempe, Arizona 85287 (United States); Schwenk, Achim [Institut fuer Kernphysik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, 64291 Darmstadt (Germany)
2016-07-01
I present our recent work on Green's function Monte Carlo calculations of light nuclei using local two- and three-nucleon interactions derived from chiral effective field theory up to next-to-next-to-leading order (N{sup 2}LO). I discuss the choice of observables we make to fit the two low-energy constants which enter in the three-nucleon sector at N{sup 2}LO: the {sup 4}He binding energy and n-α elastic scattering P-wave phase shifts. I then show some results for light nuclei. I also show our results for the energy per neutron in pure neutron matter using the auxiliary-field diffusion Monte Carlo method and discuss regulator choices. Finally I discuss some exciting future projects which are now possible.
Generation and Verification of ENDF/B-VII.0 Cross section Libraries for Monte Carlo Calculations
International Nuclear Information System (INIS)
Park, Ho Jin; Kwak, Min Su; Joo, Han Gyu; Kim, Chang Hyo
2007-01-01
For Monte Carlo neutronics calculations, a continuous-energy nuclear data library is needed. It can be generated from various evaluated nuclear data files such as ENDF/B using the ACER routine of the NJOY code, after a series of prior processing steps involving various other NJOY routines. Recently, a utility code named ANJOYMC, which generates the NJOY input decks in an automated mode, became available. The use of this code greatly reduces the user's effort and the possibility of input errors. In December 2006, the initial version of the ENDF/B-VII nuclear data library was released. It was reported that the new files contain much better data, reducing the errors noted in the previous versions. Thus it is worthwhile to examine the performance of the new data files, particularly using an independent Monte Carlo code, MCCARD, and the ANJOYMC utility code. The verification of the newly generated library can be readily performed by analyzing numerous standard criticality benchmark problems.
Bourva, L C A
1999-01-01
The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
Evaluation and comparison of SN and Monte-Carlo charged particle transport calculations
International Nuclear Information System (INIS)
Hadad, K.
2000-01-01
A study was done to evaluate a 3-D SN charged particle transport code called SMARTEPANTS and a 3-D Monte Carlo code called the Integrated Tiger Series, ITS. The evaluation of the SMARTEPANTS code was based on angular discretization and reflected-boundary sensitivity, whilst the evaluation of ITS was based on CPU time and variance reduction. The comparison of the two codes was based on energy and charge deposition calculations in a block of gallium arsenide with embedded gold cylinders. The results of the evaluation tests show that an S8 calculation maintains both accuracy and speed, and that calculations with reflected-boundary geometry produce fully symmetrical results. For the ITS evaluation, as expected, CPU time and variance reduction trade off only up to a point, beyond which adding further histories increases the CPU time without reducing the variance. The comparison test problem showed excellent agreement in the total energy deposition calculations.
International Nuclear Information System (INIS)
El Bounagui, O.; Erramli, H.
2010-01-01
In this work, we report on calculations of the electronic channelling energy loss of hydrogen and helium ions along Si axial directions in the low-energy range, using a Monte Carlo simulation code. Simulated and experimental data are compared for protons and He ions in axial channels of silicon, and reasonable agreement was found. Computer simulation was also employed to study the angular dependence of the energy loss for 0.5, 0.8, 1, and 2 MeV channelled ⁴He ions transmitted through a silicon crystal of 3 μm thickness along the axis.
Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method
International Nuclear Information System (INIS)
Pudjijanto, M.S.; Akhmad, Y.R.
1998-01-01
A computer program for gamma-ray shielding calculations using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; it was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photon transport through infinite planar shields of various thicknesses. A gamma photon is followed until it escapes from the shield or its energy falls below the cut-off energy. Pair production is treated as a pure absorption process; the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, the build-up factor, and photon spectra. The calculated build-up factors for slab lead and water media with a 6 MeV parallel-beam gamma source agree with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
International Nuclear Information System (INIS)
Simpkin, D.J.
1989-01-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
Simpkin, D J
1989-02-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
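The abstracts do not name the fitting equation; a commonly used three-parameter broad-beam transmission model of this kind (the Archer form, assumed here) can be evaluated as follows; the parameter values are illustrative, not the published fits.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Three-parameter broad-beam transmission model of the kind commonly
    fitted to x-ray shielding data (the Archer form; the abstract does not
    name the equation, so this specific form is an assumption). x is the
    barrier thickness; alpha, beta, gamma are fitted per material and kVp."""
    return ((1.0 + beta / alpha) * math.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

# at x = 0 the model reduces to unit transmission for any parameter set
t0 = archer_transmission(0.0, alpha=2.0, beta=15.0, gamma=0.5)
```

At large x the model approaches a single exponential with slope alpha, which is what makes it convenient for tabulating shielding requirements.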
EGS-Ray, a program for the visualization of Monte-Carlo calculations in the radiation physics
International Nuclear Information System (INIS)
Kleinschmidt, C.
2001-01-01
A Windows program is introduced which allows relatively easy and interactive access to Monte Carlo techniques in clinical radiation physics. Furthermore, it serves as a visualization tool for the methodology and the results of Monte Carlo simulations. The program requires only little effort to formulate and calculate a Monte Carlo problem. The Monte Carlo module of the program is based on the well-known EGS4/PRESTA code. The didactic features of the program are presented using several examples common to the routine of the clinical radiation physicist. (orig.)
SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments
Energy Technology Data Exchange (ETDEWEB)
Venencia, C; Garrigo, E; Cardenas, J; Castro Pena, P [Instituto de Radioterapia - Fundacion Marie Curie, Cordoba (Argentina)
2014-06-01
Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on prostate SBRT treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with an HDMLC was used. Treatment plans were created using 9 fields with iPlan v4.5 (BrainLAB) and the dynamic IMRT modality. The institutional SBRT protocol delivers a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation is done by pencil beam (2 mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95% = 40 Gy and D98% > 39.2 Gy; rectum V20Gy < 50%, V32Gy < 20%, V36Gy < 10% and V40Gy < 5%; bladder V20Gy < 40% and V40Gy < 10%; femoral heads V16Gy < 5%; penile bulb V25Gy < 3 cc; urethra and the overlap region between the PTV and PRV rectum Dmax < 42 Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences in the PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, did not meet the D98% criterion (> 39.2 Gy), and should have been renormalized. Dose volume constraint differences for the rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between the PTV and PRV rectum showed a dose increase in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. The dose increase in the critical urethra region and in the overlap region with the PTV could have clinical consequences that need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because the inverse planning dose optimization uses only the pencil beam algorithm.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
International Nuclear Information System (INIS)
Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S
2014-01-01
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
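The range metrics used in this study (R90, R50, and the R80-R20 distal falloff width) can be extracted from a one-dimensional depth-dose curve by linear interpolation on the distal edge. A minimal sketch with made-up data, not patient data:

```python
def distal_range(depths, doses, level):
    """Depth (same units as `depths`) at which the distal falloff crosses
    `level` (fraction of the maximum dose), found by linear interpolation
    beyond the dose maximum. A minimal sketch of how R90/R50 and the
    R80-R20 falloff width can be read off a 1-D depth-dose curve."""
    d_max = max(doses)
    i_max = doses.index(d_max)
    target = level * d_max
    for i in range(i_max, len(doses) - 1):
        if doses[i] >= target >= doses[i + 1]:
            frac = (doses[i] - target) / (doses[i] - doses[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    raise ValueError("level not crossed on the distal edge")

# illustrative SOBP-like curve (made-up numbers)
z = [0, 1, 2, 3, 4, 5, 6, 7]
d = [70, 80, 100, 100, 90, 50, 10, 0]
r90 = distal_range(z, d, 0.90)
r80_r20 = distal_range(z, d, 0.20) - distal_range(z, d, 0.80)
```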
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Energy Technology Data Exchange (ETDEWEB)
Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca [Measurement Science and Standards, National Research Council Canada, 1200 Montreal Road, Ottawa, Ontario K1A 0R6 (Canada); Rogers, D. W. O., E-mail: drogers@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, 1125 ColonelBy Drive, Ottawa, Ontario K1S 5B6 (Canada)
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers' effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R₅₀ converted from I₅₀ (calculated using ion chamber simulations in phantom) to R₅₀ calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k_Q, as a function of R₅₀. The optimal shift of cylindrical chambers is found to be less than the 0.5 r_cav recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r_cav. Values of k_ecal are calculated and compared to those from the TG-51 protocol, and differences are explained using accurate individual correction factors for a subset of the ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R₅₀ = 7.5 cm (k′_Q) are provided. These
International Nuclear Information System (INIS)
Bourva, L.C.A.; Croft, S.
1999-01-01
The general-purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP{sup TM}, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse passive neutron coincidence count data acquired using pulse-train deconvolution electronics based on the shift-register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents a new evaluation technique for the estimation of gate utilisation factors. It uses the die-away profile of a neutron coincidence chamber, generated either by MCNP{sup TM} or by other means, to simulate the neutron detection arrival-time pattern originating from independent spontaneous fission events. A shift-register simulation algorithm, embedded in the MCF code, then calculates the coincidence counts scored within the electronics gate. The gate utilisation factor is then deduced by dividing the coincidence counts so obtained by those obtained in the same Monte Carlo run for an ideal detection system with a coincidence gate utilisation factor equal to unity. The MCF code has been benchmarked against analytical results calculated for both single- and double-exponential die-away profiles. These results are presented along with the development of the closed-form algebraic expressions for the two cases. Results of this validity check showed very good agreement. On this
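For a single-exponential die-away profile, the gate fraction has a closed form against which a shift-register-style Monte Carlo estimate can be benchmarked, much as MCF was benchmarked here. The sketch below is illustrative only (the die-away time, predelay and gate width are made-up values, and the sampling is a simplification of a real pulse-train simulation):

```python
import math
import random

def gate_fraction_analytic(tau, predelay, gate):
    """Closed-form gate utilisation factor for a single-exponential
    die-away profile with time constant tau: the fraction of detection
    times falling inside the gate [predelay, predelay + gate)."""
    return math.exp(-predelay / tau) * (1.0 - math.exp(-gate / tau))

def gate_fraction_mc(tau, predelay, gate, n=200_000, seed=1):
    """Monte Carlo estimate: sample exponential detection times and
    count those landing inside the coincidence gate."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if predelay <= rng.expovariate(1.0 / tau) < predelay + gate)
    return hits / n

# Illustrative values (microseconds): tau = 50, predelay = 4.5, gate = 64
f_exact = gate_fraction_analytic(50.0, 4.5, 64.0)
f_mc = gate_fraction_mc(50.0, 4.5, 64.0)
```

With enough sampled detection times, the Monte Carlo estimate converges on the closed-form value, which is the kind of validity check the abstract reports.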
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.
2007-01-01
The purpose of the present work is to develop an efficient solution method for the calculation of the neutron importance function in fissionable assemblies for all criticality conditions, based on Monte Carlo calculations. The neutron importance function has an important role in perturbation theory and reactor dynamic calculations. Usually this function can be determined by calculating the adjoint flux while solving the adjoint-weighted transport equation based on deterministic methods. However, in complex geometries these calculations are very complicated. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked with ANISN code calculations in 1- and 2-group modes for simple geometries. The correctness of these results has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor
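The physical picture used here — a neutron's importance as its expected contribution to the future neutron population — can be illustrated with a zero-dimensional branching-process toy. All numbers below are hypothetical; a real calculation tracks position and energy with a transport code such as MCNP:

```python
import random

def importance_estimate(k, n_hist=20_000, seed=7):
    """Toy Monte Carlo estimate of neutron importance as the expected
    total number of neutrons (the starting one plus all descendants)
    produced by one source neutron, in a point model where each neutron
    yields 2 offspring with probability k/2 and 0 otherwise (mean k).
    For a subcritical system (k < 1) the exact answer is 1 / (1 - k)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_hist):
        pop, count = 1, 0
        while pop > 0 and count < 100_000:  # cap guards against rare long chains
            count += pop
            pop = 2 * sum(1 for _ in range(pop) if rng.random() < k / 2.0)
        total += count
    return total / n_hist

est = importance_estimate(0.5)  # exact answer is 1 / (1 - 0.5) = 2
```

In a spatial calculation the same tallying is done per starting position and energy, giving the importance function rather than a single number.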
Energy Technology Data Exchange (ETDEWEB)
Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, de l' Universite Laval, CHUQ, Pavillon L' Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d' Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Medical Physics Unit, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)
2010-10-15
Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: {sup 125}I, {sup 103}Pd, {sup 131}Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D{sub w,m} as opposed to dose to a small mass of medium in medium D{sub m,m}. Methods: Mean adipose, mammary gland, and breast tissues (as uniform mixture of the aforementioned tissues) are investigated as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using {sup 125}I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D{sub 90} values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using {sup 103}Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D{sub 90} values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results: Results from simulation (1) show that variations
International Nuclear Information System (INIS)
Caon, Martin
2013-01-01
The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the tissues included. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5 %, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6 %), CT abdomen (by 9.5 %), CT chest + abdomen + pelvis (by 6 %), CT chest + abdomen (by 9.6 %), CT chest (by 10.1 %) and cardiac CT (by 11.5 %). These values, along with published values of effective dose from CT that were calculated for both sets of tissue weighting factors, were used to determine single values of the ratio of ICRP103 to ICRP60 calculated effective dose for seven CT examinations. The following ICRP103:ICRP60 values are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.
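The suggested conversion ratios can be applied directly as a lookup table; a minimal sketch (the examination keys are our own labels, not from the paper):

```python
# Suggested ICRP103:ICRP60 effective-dose ratios from the abstract.
RATIO_103_60 = {
    "pelvis": 0.75,
    "abdomen": 1.00,
    "abdomen+pelvis": 1.00,
    "chest+abdomen+pelvis": 1.00,
    "chest+abdomen": 1.15,
    "chest": 1.15,
    "cardiac": 1.25,
}

def convert_icrp60_to_icrp103(e60_msv, exam):
    """Rescale an ICRP60-based effective dose (mSv) to its ICRP103
    estimate using the examination-specific ratio."""
    return e60_msv * RATIO_103_60[exam]
```

For example, a 10 mSv ICRP60 effective dose for a pelvis CT corresponds to 7.5 mSv under ICRP103.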
Energy Technology Data Exchange (ETDEWEB)
Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)
2017-07-01
The Monte Carlo method for radiation transport has been adapted for medical physics applications and, with the development of more efficient computer simulation techniques, has received growing attention in clinical treatment planning. In linear accelerator modelling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, obtaining precise results requires detailed information about the accelerator head, and suppliers commonly do not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This approach has many advantages for clinical Monte Carlo applications: it is an efficient method for particle generation and can provide accuracy similar to that obtained with a phsp. This work proposes a VSM simulation using a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two open field sizes (40 x 40 cm² and 40√2 x 40√2 cm²) and two source-to-surface distances (SSD) were used: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. The current model is easy to build and test. (author)
Positron stopping in elemental systems: Monte Carlo calculations and scaling properties
International Nuclear Information System (INIS)
Ghosh, V.J.; Aers, G.C.
1995-01-01
The scaling of positron-implantation (stopping) profiles has been reported by Ghosh et al., who used the BNL Monte Carlo scheme to generate stopping profiles in semi-infinite elemental metals. A simple scaling relationship reduced the stopping profiles of positrons implanted at different energies (ranging from 1--10 keV) onto a single universal curve for that particular metal. We have confirmed that the scaling relationship also applies to the quite different Jensen and Walker Monte Carlo scheme, for more materials, and over an expanded energy range of 1--25 keV. The mean depths of the stopping profiles calculated by the two Monte Carlo schemes are found to be different, mainly due to differences in the inelastic mean free paths and the energy-loss functions. However, after scaling, the profiles generated by the two schemes can be superimposed onto a single curve which can be appropriately parametrized. The scaled profiles are found to be only weakly material dependent. The mean depths, backscattered fractions, and scaled stopping profiles are fitted to simple parametric functions, and the values of these parameters are obtained for several elements
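The scaling relationship can be illustrated with a Makhovian profile, a parametrization commonly fitted to Monte Carlo positron stopping profiles (the shape parameter m = 2 used below is an assumption for illustration, not a value from the paper): plotting z̄·P(z) against z/z̄ removes the dependence on the mean depth z̄, collapsing profiles for all implantation energies onto one curve.

```python
import math

def makhov_profile(z, zbar, m=2.0):
    """Makhovian implantation profile P(z) with mean depth zbar.
    This is a Weibull density: P(z) = (m z^(m-1) / z0^m) exp(-(z/z0)^m),
    with z0 chosen so that the mean depth equals zbar."""
    z0 = zbar / math.gamma(1.0 + 1.0 / m)
    return (m * z**(m - 1) / z0**m) * math.exp(-((z / z0) ** m))

def scaled_profile(z_over_zbar, zbar, m=2.0):
    """Scaled profile zbar * P(z) as a function of z/zbar; for a pure
    Makhov shape this is independent of zbar (the 'universal curve')."""
    z = z_over_zbar * zbar
    return zbar * makhov_profile(z, zbar, m)
```

Evaluating the scaled profile at the same z/z̄ for very different mean depths (i.e., different implantation energies) returns the same value, which is the collapse the abstract describes.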
Directory of Open Access Journals (Sweden)
Cerutti F.
2017-01-01
The role of Monte Carlo calculations in addressing machine protection and radiation protection challenges regarding accelerator design and operation is discussed, through an overview of different applications and validation examples especially referring to recent LHC measurements.
Cerutti, F.
2017-09-01
The role of Monte Carlo calculations in addressing machine protection and radiation protection challenges regarding accelerator design and operation is discussed, through an overview of different applications and validation examples especially referring to recent LHC measurements.
CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo
International Nuclear Information System (INIS)
Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.
2006-01-01
Three-dimensional neutron flux calculation by the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most standard guides for calculating the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on an appropriate calculation of the fixed neutron source supplied to the transport code, so that sufficiently accurate flux values can be obtained. The reactor core assembly configuration is based on X-Y geometry, but the problem considered here is solved in R-θ geometry, so an appropriate mapping is needed to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (source distribution calculation using Monte Carlo), a mapping technique independent of those found in the literature was developed. The mesh-overlapping method used here is based on the generation of random points, commonly known as the Monte Carlo technique. Although the randomness of this technique implies errors in the calculations, it is well known that the precision of the method increases with the number of randomly generated points used to measure an area or other quantity of interest. In the particular case of the CDFMC program, the developed technique behaves well when a considerably large number of points is used (greater than or equal to one hundred thousand), keeping the calculation errors of the order of 1%. (Author)
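The mesh-overlapping idea — estimating by random points what fraction of each R-θ cell overlaps each X-Y assembly cell, so that the rectangular source can be redistributed onto the R-θ mesh — can be sketched as follows. This is a simplified illustration of the technique, not the CDFMC implementation:

```python
import math
import random

def overlap_fraction(r_lo, r_hi, th_lo, th_hi,
                     x_lo, x_hi, y_lo, y_hi,
                     n=100_000, seed=3):
    """Monte Carlo estimate of the fraction of an (R, theta) mesh cell
    that overlaps a rectangular (x, y) cell.  Points are sampled
    uniformly over the annular sector's area: theta uniform, and r as
    the square root of a uniform variate in r**2."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        r = math.sqrt(rng.uniform(r_lo**2, r_hi**2))  # area-uniform radius
        th = rng.uniform(th_lo, th_hi)
        x, y = r * math.cos(th), r * math.sin(th)
        if x_lo <= x < x_hi and y_lo <= y < y_hi:
            inside += 1
    return inside / n

# A quarter annular sector entirely inside a large rectangle: fraction 1.
f_full = overlap_fraction(0.0, 1.0, 0.0, math.pi / 2, 0.0, 2.0, 0.0, 2.0)
# A half-disk of which the rectangle covers the x >= 0 half: fraction 0.5.
f_half = overlap_fraction(0.0, 1.0, 0.0, math.pi, 0.0, 2.0, -2.0, 2.0)
```

Multiplying each rectangular cell's source strength by such overlap fractions distributes the fixed source onto the R-θ intervals, with the statistical error shrinking as the point count grows.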
International Nuclear Information System (INIS)
Pereira, A.; Broed, R.
2002-03-01
In this report, several issues related to the probabilistic methodology for performance assessments of repositories for high-level nuclear waste and spent fuel are addressed. Random Monte Carlo sampling is used to make uncertainty analyses for the migration of four nuclides and a decay chain in the geosphere. The nuclides studied are cesium, chlorine, iodine and carbon, and radium from a decay chain. A procedure is developed to take advantage of the information contained in the hydrogeological data obtained from a three-dimensional discrete fracture model as the input data for one-dimensional transport models for use in Monte Carlo calculations. This procedure retains the original correlations between parameters representing different physical entities, namely, between the groundwater flow rate and the hydrodynamic dispersion in fractured rock, in contrast with the approach commonly used that assumes that all parameters supplied for the Monte Carlo calculations are independent of each other. A small program is developed to allow the above-mentioned procedure to be used if the available three-dimensional data are scarce for Monte Carlo calculations. The program allows random sampling of data from the 3-D data distribution in the hydrogeological calculations. The impact of correlations between the groundwater flow and the hydrodynamic dispersion on the uncertainty associated with the output distribution of the radionuclides' peak releases is studied. It is shown that for the SITE-94 data, this impact can be disregarded. A global sensitivity analysis is also performed on the peak releases of the radionuclides studied. The results of these sensitivity analyses, using several known statistical methods, show discrepancies that are attributed to the limitations of these methods. The reason for the difficulties is to be found in the complexity of the models needed for the predictions of radionuclide migration, models that deliver results covering variation of several
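The contrast between resampling whole records of the 3-D model output (preserving the flow-dispersion correlation) and the common independent-marginals approach can be sketched as below. The record format, pairs of groundwater flow rate and hydrodynamic dispersion, is a hypothetical simplification:

```python
import random

def sample_correlated(records, n, seed=11):
    """Draw (flow, dispersion) pairs by resampling whole records of the
    3-D model output, preserving the flow-dispersion correlation."""
    rng = random.Random(seed)
    return [rng.choice(records) for _ in range(n)]

def sample_independent(records, n, seed=11):
    """Naive alternative: sample each parameter from its own marginal
    distribution, destroying the correlation between them."""
    rng = random.Random(seed)
    flows = [f for f, _ in records]
    disps = [d for _, d in records]
    return [(rng.choice(flows), rng.choice(disps)) for _ in range(n)]

# Perfectly correlated toy records: dispersion = 2 * flow.
recs = [(float(f), 2.0 * f) for f in range(1, 11)]
corr = sample_correlated(recs, 100)
indep = sample_independent(recs, 200)
```

Joint resampling keeps every drawn pair physically consistent, whereas independent sampling routinely combines a flow value with a dispersion value that never co-occurred in the 3-D model.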
Simultaneous global calculation of flux and importance with forward Monte Carlo
International Nuclear Information System (INIS)
Deutsch, O.L.; Carter, L.L.
1977-01-01
A procedure is described for obtaining flux and importance globally in one Monte Carlo calculation at small to moderate incremental cost in terms of the time required to process a fixed number of particle histories. The application of this procedure and analysis of results are illustrated for a prototypical controlled thermonuclear reactor (CTR) streaming problem with coolant pipe penetrations through a concrete magnet shield. Our experience indicates that the availability of global information about both flux and importance can help to generate intuition in multidimensional shielding problems and can be of significant value during the early phase of shield design
On line CALDoseX: real time Monte Carlo calculation via Internet for dosimetry in radiodiagnostic
International Nuclear Information System (INIS)
Kramer, Richard; Cassola, Vagner Ferreira; Lira, Carlos Alberto Brayner de Oliveira; Khoury, Helen Jamil; Cavalcanti, Arthur; Lins, Rafael Dueire
2011-01-01
CALDose X 4.1 is a software package that uses the MASH and FASH phantoms. Patient dosimetry with reference phantoms is limited because the results apply only to patients with the same body mass and height as the reference phantom. In this paper, the dosimetry of patients undergoing diagnostic X-ray examinations was extended by using a series of 18 phantoms of defined gender and different body masses and heights, in order to cover the real anatomy of patients. Absorbed doses in organs and tissues can be calculated by real-time Monte Carlo dosimetry over the Internet through a dosimetric service called CALDose X online
Monte Carlo simulation of dose calculation in voxel and geometric phantoms using GEANT4 code
International Nuclear Information System (INIS)
Martins, Maximiano C.; Santos, Denison de S.; Queiroz Filho, Pedro P. de; Silva, Rosana de S. e; Begalli, Marcia
2009-01-01
Monte Carlo simulation techniques have become a valuable tool for scientific purposes. In radiation protection, many quantities are obtained by simulating particles passing through human body models, also known as phantoms, allowing the calculation of doses deposited in the organs of an individual exposed to ionizing radiation. This information is very useful from the medical viewpoint, as it is used in the planning of external beam radiotherapy and brachytherapy treatments. The goal of this work is the implementation of a voxel phantom and a geometrical phantom in the framework of the Geant4 toolkit, aiming at a future use of this code by professionals in the medical area. (author)
International Nuclear Information System (INIS)
Rojas C, E.L.; Varon T, C.F.; Pedraza N, R.
2007-01-01
Treating breast cancer at an early stage is of vital importance, and most investigations are therefore dedicated to early detection of the disease and its treatment. As a consequence of such research and clinical practice, a high-dose-rate irradiation system known as MammoSite was developed in the U.S.A. in 2002. In this work we carry out dose calculations for a simplified MammoSite system with the Monte Carlo simulation codes PENELOPE and MCNPX, varying the concentration of the contrast material used in the system. (Author)
Monte Carlo calculations of the free-molecule drag on chains of uniform spheres
International Nuclear Information System (INIS)
Dahneke, B.; Chan, P.
1980-01-01
Monte Carlo calculations of the free-molecule drag on straight chains of uniform spheres are presented. The drag on a long chain is expressed in terms of the drag on a basic chain unit (two hemispheres touching at their poles) multiplied by the number of spheres in the chain. Since there is no interaction between the basic chain units, it is argued that the results also apply as a good approximation to the drag on kinked and branched chains covering a broad range of geometries. Experimental data are cited which support this claim
Damage flux analysis. Solid state detector and Monte-Carlo calculation
International Nuclear Information System (INIS)
Genthon, J.P.; Nimal, J.C.; Vergnaud, T.
1975-09-01
The change of resistivity induced by radiation in materials is particularly suitable for the measurement of equivalent damage fluxes, when it is used at low fluence for calibration of more classical activation reactions used at high fluences. A graphite and a tungsten detector are briefly described and results obtained in a good number of European reactors are given. The polykinetic three dimensional Monte-Carlo code Tripoli is used for calculation of damage fluxes. Comparison with above measurements shows a good agreement and confirms the use of the EURATOM damaging function for graphite [fr
International Nuclear Information System (INIS)
Kim, Jung-Ha; Hill, Robin; Kuncic, Zdenka
2012-01-01
The Monte Carlo (MC) method has proven invaluable for radiation transport simulations to accurately determine radiation doses and is widely considered a reliable computational measure that can substitute a physical experiment where direct measurements are not possible or feasible. In the EGSnrc/BEAMnrc MC codes, there are several user-specified parameters and customized transport algorithms, which may affect the calculation results. In order to fully utilize the MC methods available in these codes, it is essential to understand all these options and to use them appropriately. In this study, the effects of the electron transport algorithms in EGSnrc/BEAMnrc, which are often a trade-off between calculation accuracy and efficiency, were investigated in the buildup region of a homogeneous water phantom and also in a heterogeneous phantom using the DOSRZnrc user code. The algorithms and parameters investigated include: boundary crossing algorithm (BCA), skin depth, electron step algorithm (ESA), global electron cutoff energy (ECUT) and electron production cutoff energy (AE). The variations in calculated buildup doses were found to be larger than 10% for different user-specified transport parameters. We found that using BCA = EXACT gave the best results in terms of accuracy and efficiency in calculating buildup doses using DOSRZnrc. In addition, using the ESA = PRESTA-I option was found to be the best way of reducing the total calculation time without losing accuracy in the results at high energies (few keV ∼ MeV). We also found that although choosing a higher ECUT/AE value in the beam modelling can dramatically improve computation efficiency, there is a significant trade-off in surface dose uncertainty. Our study demonstrates that a careful choice of user-specified transport parameters is required when conducting similar MC calculations. (note)
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S. A. H.; Afarideh, H.; Shahriari, M.
2007-01-01
The purpose of the present work is to develop an efficient solution method for calculating the neutron importance function in fissionable assemblies for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamic calculations. Usually this function is determined by calculating the adjoint flux through solution of the adjoint-weighted transport equation with deterministic methods; in complex geometries, however, these calculations are very difficult. In this article, considering the capabilities of the MCNP code for solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance is introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Ultimately, the efficiency of the method for complex geometries is shown by the calculation of neutron importance in the MNSR research reactor
Energy Technology Data Exchange (ETDEWEB)
Mille, M; Lee, C [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD (United States); Failla, G [Varian Medical Systems, Gig Harbor, WA (United States)
2016-06-15
Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that which is calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is user-friendly software which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective
International Nuclear Information System (INIS)
Chapin, D.L.
1976-03-01
Differences in neutron fluxes and nuclear reaction rates in a noncircular fusion reactor blanket when analyzed in cylindrical and toroidal geometry are studied using Monte Carlo. The investigation consists of three phases--a one-dimensional calculation using a circular approximation to a hexagonal shaped blanket; a two-dimensional calculation of a hexagonal blanket in an infinite cylinder; and a three-dimensional calculation of the blanket in tori of aspect ratios 3 and 5. The total blanket reaction rate in the two-dimensional model is found to be in good agreement with the circular model. The toroidal calculations reveal large variations in reaction rates at different blanket locations as compared to the hexagonal cylinder model, although the total reaction rate is nearly the same for both models. It is shown that the local perturbations in the toroidal blanket are due mainly to volumetric effects, and can be predicted by modifying the results of the infinite cylinder calculation by simple volume factors dependent on the blanket location and the torus major radius
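The volumetric effect described — local reaction rates in the toroidal blanket deviating from the infinite-cylinder result according to blanket location and major radius — follows from Pappus's theorem: a cell's volume in the torus scales with the distance of its centroid from the torus axis. The form below is our own sketch of that geometric factor, not an expression taken from the report:

```python
import math

def toroidal_volume_factor(r, theta, major_radius):
    """Ratio of the volume of a small blanket cell in a torus to that of
    the same cell in an infinite straight cylinder.  The cell centroid
    sits at minor radius r and poloidal angle theta (theta = 0 on the
    outboard midplane); by Pappus, volume scales with its distance
    R + r*cos(theta) from the torus axis, normalized by R."""
    return 1.0 + (r / major_radius) * math.cos(theta)
```

For an aspect ratio of 3 (major radius three times the cell's minor radius), an outboard-midplane cell holds 4/3 the cylinder volume and the corresponding inboard cell only 2/3, which is the kind of local perturbation the abstract attributes to volumetric effects.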
Spread-out Bragg peak and monitor units calculation with the Monte Carlo Code MCNPX
International Nuclear Information System (INIS)
Herault, J.; Iborra, N.; Serrano, B.; Chauvel, P.
2007-01-01
The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the protontherapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation accurately reproduced the experiment, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation
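Monitor-unit determination from pre-established simulation data reduces, in schematic form, to dividing the prescribed dose by the simulated dose per MU for the clinical configuration. The factorization below is illustrative only; a clinical MU formula carries additional correction factors:

```python
def monitor_units(prescribed_dose_gy, dose_per_mu_ref_gy, output_factor):
    """Schematic monitor-unit calculation from pre-established (e.g.
    Monte Carlo) beam data: the dose per MU under reference conditions
    times the field's relative output factor gives the dose per MU for
    the clinical configuration."""
    return prescribed_dose_gy / (dose_per_mu_ref_gy * output_factor)
```

For example, with a (hypothetical) reference dose per MU of 0.01 Gy and a unit output factor, delivering 2 Gy requires 200 MU.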
Calculations of electron fluence correction factors using the Monte Carlo code PENELOPE
International Nuclear Information System (INIS)
Siegbahn, E A; Nilsson, B; Fernandez-Varea, J M; Andreo, P
2003-01-01
In electron-beam dosimetry, plastic phantom materials may be used instead of water for the determination of absorbed dose to water. A correction factor φ{sup water}{sub plastic} is then needed for converting the electron fluence in the plastic phantom to the fluence at an equivalent depth in water. The recommended values for this factor given by AAPM TG-25 (1991 Med. Phys. 18 73-109) and the IAEA protocols TRS-381 (1997) and TRS-398 (2000) disagree, in particular at large depths. Calculations of the electron fluence have been done, using the Monte Carlo code PENELOPE, in semi-infinite phantoms of water and common plastic materials (PMMA, clear polystyrene, A-150, polyethylene, Plastic Water{sup TM} and Solid Water{sup TM} (WT1)). The simulations have been carried out for monoenergetic electron beams of 6, 10 and 20 MeV, as well as for a realistic clinical beam. The simulated fluence correction factors differ from the values in the AAPM and IAEA recommendations by up to 2%, and are in better agreement with factors obtained by Ding et al (1997 Med. Phys. 24 161-76) using EGS4. Our Monte Carlo calculations are also in good accordance with φ{sup water}{sub plastic} values measured using an almost perturbation-free ion chamber. The important interdependence between depth- and fluence-scaling corrections for plastic phantoms is discussed. Discrepancies between the measured and the recommended values of φ{sup water}{sub plastic} may then be explained by considering the different depth-scaling rules used
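The interdependence between depth scaling and fluence scaling can be sketched in two steps: first map the measurement depth in plastic to a water-equivalent depth, then convert the fluence (or dose) with the correction factor at that depth. The numeric values in the comments are illustrative assumptions; the protocol tables should be consulted for real factors:

```python
def equivalent_water_depth(z_plastic_cm, c_pl):
    """Depth scaling in the TRS-398 style: the measurement depth in
    plastic times the material's depth-scaling factor c_pl (tabulated
    per material in the protocol) gives the water-equivalent depth."""
    return z_plastic_cm * c_pl

def dose_to_water(dose_in_plastic, fluence_factor):
    """Fluence scaling: convert the dose determined in the plastic
    phantom to dose to water at the equivalent depth using the electron
    fluence correction factor (phi water/plastic in the text)."""
    return dose_in_plastic * fluence_factor
```

Because the fluence correction factor is itself depth dependent, a different depth-scaling rule shifts which fluence factor applies, which is how the abstract explains the discrepancies between measured and recommended values.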
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods
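The trade-off described — a higher cost per history outweighed by a lower variance — is exactly what the figure of merit captures. A minimal sketch with made-up numbers for a direct-functional run and a VVR-like run:

```python
def figure_of_merit(rel_error, cpu_time_s):
    """Monte Carlo figure of merit FOM = 1 / (R**2 * T), where R is the
    relative error of the tally and T the computing time.  Since R**2 * T
    is roughly constant for a given method, the FOM compares schemes
    whose cost per history differs."""
    return 1.0 / (rel_error**2 * cpu_time_s)

fom_direct = figure_of_merit(0.02, 100.0)  # direct functional
fom_vvr = figure_of_merit(0.01, 150.0)     # costlier per history, lower R
```

Here halving the relative error at 1.5 times the cost raises the FOM from 25 to about 67, mirroring the paper's observation that the variance reduction usually outweighs the extra tallying cost.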
Calculations for the intermediate-spectrum cells of Zebra 8 using the MONK Monte-Carlo Code
International Nuclear Information System (INIS)
Hanlon, D.; Franklin, B.M.; Stevenson, J.M.
1987-10-01
The Monte-Carlo Code MONK 6A and its associated point-energy cross-section data have been used to analyse seven zero-leakage, plate-geometry cells from the ZEBRA 8 assemblies. The convergence of the calculations was such that the uncertainties in k-infinity and the more important reaction-rate ratios were generally less than the experimental uncertainties. The MONK 6A predictions have been compared with experiment and with predictions from the MURAL collision-probability code, which uses FGL5 data adjusted on the basis of ZEBRA 8 and other integral experiments. The poor predictions from the MONK calculations, with errors of up to 10% in k-infinity, are attributed to deficiencies in the database for intermediate to fast spectrum systems. (author)
Dosimetric investigation of LDR brachytherapy ¹⁹²Ir wires by Monte Carlo and TPS calculations.
Bozkurt, Ahmet; Acun, Hediye; Kemikler, Gonul
2013-01-01
The aim of this study was to investigate the dose rate distribution around ¹⁹²Ir wires used as radioactive sources in low-dose-rate brachytherapy applications. Monte Carlo modeling of a 0.3-mm diameter source and its surrounding water medium was performed for five different wire lengths (1-5 cm) using the MCNP software package. The computed dose rates per unit of air kerma at distances from 0.1 up to 10 cm away from the source were first verified with literature data sets. Then, the simulation results were compared with the calculations from the CMS XiO commercial treatment planning system. The study results were found to be in concordance with the treatment planning system calculations except for the shorter wires at close distances.
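Why a finite wire deviates from a point source at close range can be sketched with the TG-43 line-source geometry function, a standard brachytherapy formalism used here purely for illustration; it is not claimed to be the computation performed in this study:

```python
import math

def g_line(r_cm, theta_rad, length_cm):
    """TG-43 line-source geometry function G_L(r, theta) = beta / (L r sin(theta)),
    where beta is the angle the wire subtends at the calculation point."""
    y = r_cm * math.sin(theta_rad)   # distance off the wire axis
    z = r_cm * math.cos(theta_rad)   # distance along the wire axis
    beta = math.atan2(z + length_cm / 2, y) - math.atan2(z - length_cm / 2, y)
    return beta / (length_cm * y)

# On the transverse axis of a 5 cm wire: close in, the finite length
# suppresses the dose relative to a 1/r^2 point source; far out,
# the ratio G_L * r^2 approaches 1 (point-source behavior).
for r in (0.5, 1.0, 5.0, 10.0):
    print(r, g_line(r, math.pi / 2, 5.0) * r ** 2)
```

The ratio printed in the loop grows monotonically toward 1 with distance, mirroring the abstract's finding that discrepancies concentrate at close distances for the shorter-wire/longer-wire geometries.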
Monte Carlo calculations for doses in organs and tissues to oral radiography
International Nuclear Information System (INIS)
Sampaio, E.V.M.
1985-01-01
Using the MIRD 5 phantom and the Monte Carlo technique, organ doses in patients undergoing dental x-ray examination were calculated taking into account the different x-ray beam geometries and the various possible positions of the x-ray source with regard to the head of the patient. It was necessary to introduce in the original computer program a new source description specific for dental examinations. To obtain a realistic evaluation of organ doses during dental examination it was also necessary to introduce a new region in the phantom head which characterizes the teeth and salivary glands. The attenuation of the x-ray beam by the lead shield of the radiographic film was also introduced in the calculation. (author)
International Nuclear Information System (INIS)
Levitan, Iu.L.; Sobol, I.M.; Khlopov, M.Iu.; Chechetkin, V.M.
1982-01-01
The variation of the hard part of the neutrino emission spectra of collapsing degenerate stellar cores with matter having a small optical depth to neutrinos is analyzed. The interaction of neutrinos with the degenerate matter is determined by processes of neutrino scattering on nuclei (without a change in neutrino energy) and neutrino scattering on degenerate electrons, in which the neutrino energy can only decrease. The neutrino emission spectrum of a collapsing stellar core is calculated by the Monte Carlo method for a central density of 10¹³ g/cm³ in the initial stage of the onset of opacity and for a central density of 6 × 10¹³ g/cm³ in the stage of deep collapse. In the latter case the calculation of the spectrum without allowance for effects of neutrino degeneration in the central part of the collapsing stellar core corresponds to the maximum possible suppression of the hard part of the neutrino emission spectrum.
Monte-Carlo calculation of irradiation dose content beyond shielding of high-energy accelerators
International Nuclear Information System (INIS)
Mokhov, N.V.; Frolov, V.V.
1975-01-01
The MARS programme, designed for calculating three-dimensional internuclear cascades in accelerator shielding by the Monte Carlo method, is described. The variance-reduction methods used and a system of semi-empirical formulas made it possible to surpass the performance of existing programmes. By combining results from the MARS and HAMLET programmes, the dose fields for homogeneous and heterogeneous shielding were evaluated. Results are presented for the calculated absorbed and equivalent dose behind a barrier irradiated by a proton beam with energies E₀ = 1-1000 GeV. The dependence of the high- and low-energy neutron, proton, pion, kaon, muon and γ-quantum dose on the initial energy and on the thickness, material and composition of the shielding is investigated.
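The thickness dependence such calculations quantify is, to leading order, exponential. A point-kernel sketch follows; the attenuation length and density are textbook-style illustrative values for concrete, not MARS results:

```python
import math

def dose_behind_shield(d0, thickness_cm, atten_length_g_cm2, density_g_cm3):
    """Equilibrium high-energy cascade dose falls off roughly as
    exp(-x / lambda), where the effective attenuation length lambda
    depends on shield material and beam energy."""
    lam_cm = atten_length_g_cm2 / density_g_cm3
    return d0 * math.exp(-thickness_cm / lam_cm)

# Illustrative numbers for ordinary concrete (~120 g/cm^2, 2.35 g/cm^3)
for t_cm in (0, 100, 200, 400):
    print(t_cm, dose_behind_shield(1.0, t_cm, atten_length_g_cm2=120.0,
                                   density_g_cm3=2.35))
```

A full cascade code like MARS replaces this single attenuation length with particle-by-particle transport, which is why the abstract studies the dose dependence separately for neutrons, protons, pions, kaons, muons and photons.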
Monte Carlo calculations of light-ion sputtering as a function of the incident angle
International Nuclear Information System (INIS)
Haggmark, L.G.; Biersack, J.P.
1980-01-01
The sputtering of metal surfaces by light ions has been studied as a function of the incident angle using an extension of the TRIM Monte Carlo computer program. Sputtering yields were calculated at both normal and oblique angles of incidence for H, D, T, and ⁴He impinging on Ni, Mo, and Au targets with energies ≤ 10 keV. Direct comparisons are made with the most recent experimental and theoretical results. There is generally good agreement with the experimental data although our calculated maximum in the yield usually occurs at a smaller incident angle, measured from the surface normal. The enhancement of the yield at large incident angles over that at normal incidence is observed to be a complex function of the incident ion's energy and mass and the target's atomic weight and surface binding energy. (orig.)
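The reported angular behavior, a rise above the normal-incidence yield up to a maximum and then a fall-off toward grazing incidence, is often fitted with a Yamamura-type formula. A sketch with illustrative parameters; f and theta_opt are assumed fitting constants, not values from this paper:

```python
import math

def yield_ratio(theta_deg, f=2.0, theta_opt_deg=70.0):
    """Yamamura-type angular dependence of the sputtering yield:
    Y(theta)/Y(0) = cos(theta)^-f * exp(f * cos(theta_opt) * (1 - 1/cos(theta))),
    which equals 1 at normal incidence and peaks at theta_opt."""
    th = math.radians(theta_deg)
    c_opt = math.cos(math.radians(theta_opt_deg))
    return math.cos(th) ** (-f) * math.exp(f * c_opt * (1.0 - 1.0 / math.cos(th)))

for th in (0, 30, 60, 70, 80):
    print(th, yield_ratio(th))  # enhancement peaks near theta_opt
```

Shifting theta_opt and f per ion/target pair is how such fits encode the dependence on ion energy and mass and on target surface binding energy noted in the abstract.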
International Nuclear Information System (INIS)
Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick
2012-01-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm³ regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of “generic” tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
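The DRF defined above is straightforward to compute. A minimal sketch; the peripheral dose values are invented so the example reproduces the reported 6-MV mean of 0.97:

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF = mean dose at the 6 cardinal peripheral points / central dose."""
    if len(peripheral_doses) != 6:
        raise ValueError("expected doses for the 6 cardinal directions")
    return sum(peripheral_doses) / 6.0 / central_dose

# Hypothetical relative doses at the 6 cardinal peripheral points (6 MV)
periphery = [0.98, 0.98, 0.97, 0.97, 0.96, 0.96]
drf = dose_reduction_factor(periphery, central_dose=1.0)
print(round(drf, 2))  # 0.97
```

Normalizing to the central dose, the point the TPS predicts reliably, is what makes the factor usable clinically: multiplying a planned peripheral dose by the DRF estimates the likely delivered value.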
Energy Technology Data Exchange (ETDEWEB)
Taylor, Michael, E-mail: michael.taylor@rmit.edu.au [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia)
2012-04-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations are undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm{sup 3} regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
International Nuclear Information System (INIS)
Di Salvio, A; Bedwani, S; Carrier, J
2014-01-01
Purpose: To develop a new segmentation technique using dual energy CT (DECT) to overcome limitations related to segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual source system provided the data for our study. A Monte Carlo analysis compares dose distributions simulated by DOSXYZnrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials from which density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors on calculated dose distribution. This method could be further developed to optimize the tissue characterization based on anatomic site.
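The DECT mapping from two CT numbers to (ED, EAN) can be illustrated with a deliberately simplified calibration: fit a linear model through known phantom inserts, then evaluate it per voxel. The insert HU and tissue values below are invented, and a real stoichiometric calibration is considerably more elaborate than this linear fit:

```python
import numpy as np

# Hypothetical calibration inserts: HU at low kVp, HU at high kVp, ED, EAN
inserts = np.array([
    # HU_low, HU_high,  ED,   EAN
    [  -90.0,   -70.0, 0.95,  6.2],   # adipose-like
    [    0.0,     0.0, 1.00,  7.4],   # water
    [ 1200.0,   800.0, 1.69, 12.5],   # cortical-bone-like
])

def fit_linear_map(hu_low, hu_high, target):
    """Least-squares coefficients for target ~ c0 + c1*HU_low + c2*HU_high."""
    A = np.column_stack([np.ones_like(hu_low), hu_low, hu_high])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

c_ed = fit_linear_map(inserts[:, 0], inserts[:, 1], inserts[:, 2])
c_ean = fit_linear_map(inserts[:, 0], inserts[:, 1], inserts[:, 3])

def voxel_ed_ean(hu_low, hu_high):
    """Per-voxel (ED, EAN) from the pair of CT numbers."""
    basis = np.array([1.0, hu_low, hu_high])
    return float(basis @ c_ed), float(basis @ c_ean)

print(voxel_ed_ean(0.0, 0.0))  # recovers water: (~1.0, ~7.4)
```

Having (ED, EAN) per voxel, rather than ED alone from a single-energy HU curve, is what lets the method assign a basis material with matching interaction behavior and then scale its density, as described in the abstract.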
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Energy Technology Data Exchange (ETDEWEB)
Di Salvio, A; Bedwani, S; Carrier, J [CHUM - Notre-Dame, Montreal, QC (Canada)
2014-06-15
Purpose: To develop a new segmentation technique using dual energy CT (DECT) to overcome limitations related to segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual source system provided the data for our study. A Monte Carlo analysis compares dose distribution simulated by dosxyz-nrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials from which density can be adapted to imitate interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors on calculated dose distribution. This method could be further developed to optimize the tissue characterization based on anatomic site.
Energy Technology Data Exchange (ETDEWEB)
Faught, A [UT MD Anderson Cancer Center, Houston, TX (United States); University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, TX (United States); Davidson, S [University of Texas Medical Branch of Galveston, Galveston, TX (United States); Kry, S; Ibbott, G; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States); Fontenot, J [Mary Bird Perkins Cancer Center, Baton Rouge, LA (United States); Etzel, C [Consortium of Rheumatology Researchers of North America (CORRONA), Inc., Southborough, MA (United States)
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6MV and 10MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10MV therapeutic x-ray beams was developed. Energy spectra of two photon sources corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10×10 cm² field. Off-axis effects were modeled by a 3rd degree polynomial used to describe the off-axis half-value layer as a function of off-axis angle and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm² field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm² to 30×30 cm² to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10MV models, respectively. Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a
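The gamma criterion quoted above (e.g. 2%/2 mm) can be sketched in 1D. This is a simple global-normalization implementation written for illustration; clinical tools interpolate between points and work on 2D/3D dose grids:

```python
import numpy as np

def gamma_pass_rate(ref, evl, positions_mm, dose_tol=0.02, dist_tol_mm=2.0):
    """Global 1D gamma analysis: for each reference point, minimize the
    combined dose-difference/distance metric over all evaluated points;
    a point passes when its minimum gamma is <= 1."""
    ref = np.asarray(ref, float)
    evl = np.asarray(evl, float)
    x = np.asarray(positions_mm, float)
    norm = ref.max()  # global normalization dose
    gammas = []
    for xi, di in zip(x, ref):
        dd = (evl - di) / (dose_tol * norm)   # dose axis, in tolerance units
        dx = (x - xi) / dist_tol_mm           # distance axis, in tolerance units
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(0, 100, 101)   # positions in mm, 1 mm grid
pdd = np.exp(-x / 150.0)       # toy depth-dose curve
print(gamma_pass_rate(pdd, pdd, x))         # identical curves: 100.0
print(gamma_pass_rate(pdd, pdd * 1.05, x))  # uniform 5% error: low pass rate
```

A uniform 5% dose error fails a 2%/2 mm test almost everywhere because neither shifting in dose nor in position can bring the combined metric under tolerance, which is why pass rates of 99%+ in the abstract indicate close agreement.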
Energy Technology Data Exchange (ETDEWEB)
Palau, J M [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against JEF2.2 previous file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U{sup 235}, U{sup 238}, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)
International Nuclear Information System (INIS)
Palau, J.M.
2005-01-01
This paper presents how Monte-Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)
The denoising of Monte Carlo dose distributions using convolution superposition calculations
International Nuclear Information System (INIS)
El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O
2007-01-01
Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporate scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method (DPM) MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)
NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations
El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.
2007-09-01
Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
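The frequency-splitting idea above can be sketched in 1D with complementary Butterworth-style filters. The cutoff and order are arbitrary illustrative choices, and the clinical method operates on 3D dose grids:

```python
import numpy as np

def frequency_split(mc, cs, cutoff=0.1, order=4):
    """Combine the low-frequency content of a (noisy) MC dose profile
    with the high-frequency content of a (smooth) CS profile, using
    complementary Butterworth-style filters with H_lo + H_hi = 1."""
    f = np.abs(np.fft.fftfreq(len(mc)))                 # normalized frequencies
    h_lo = 1.0 / (1.0 + (f / cutoff) ** (2 * order))    # low-pass for MC
    h_hi = 1.0 - h_lo                                   # complement for CS
    combined = np.fft.fft(mc) * h_lo + np.fft.fft(cs) * h_hi
    return np.real(np.fft.ifft(combined))

rng = np.random.default_rng(1)
true_dose = np.exp(-np.linspace(0, 4, 256))   # toy smooth depth-dose curve
mc = true_dose + rng.normal(0, 0.05, 256)     # noisy MC estimate
cs = true_dose                                # smooth CS surrogate
denoised = frequency_split(mc, cs)
print(np.std(denoised - true_dose) < np.std(mc - true_dose))  # True
```

Only the low-frequency part of the MC noise survives the split, so the error of the combined profile drops while the MC-derived bulk dose levels are retained, which is the trade the abstract describes.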
Memory-efficient calculations of adjoint-weighted tallies by the Monte Carlo Wielandt method
International Nuclear Information System (INIS)
Choi, Sung Hoon; Shim, Hyung Jin
2016-01-01
Highlights: • The MC Wielandt method is applied to reduce memory for the adjoint estimation. • The adjoint-weighted kinetics parameters are estimated in the MC Wielandt calculations. • The MC S/U analyses are conducted in the MC Wielandt calculations. - Abstract: The current Monte Carlo (MC) adjoint-weighted tally techniques based on the iterated fission probability (IFP) concept require a memory amount which is proportional to the numbers of the adjoint-weighted tallies and histories per cycle to store history-wise tally estimates during the convergence of the adjoint flux. In particular, the conventional MC adjoint-weighted perturbation (AWP) calculations for nuclear data sensitivity and uncertainty (S/U) analysis suffer from huge memory consumption to realize the IFP concept. In order to reduce the memory requirement drastically, we present a new adjoint estimation method in which the memory usage is irrelevant to the number of histories per cycle, by applying the IFP concept to the MC Wielandt calculations. The new algorithms for the adjoint-weighted kinetics parameter estimation and the AWP calculations in the MC Wielandt method are implemented in the Seoul National University MC code McCARD, and their validity is demonstrated in critical facility problems. From the comparison of the nuclear data S/U analyses, it is demonstrated that the memory amounts to store the sensitivity estimates in the proposed method become negligibly small.
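The Wielandt idea itself, shifting the eigenvalue problem to improve the dominance ratio and hence convergence, can be seen in a small deterministic analogue. This is a toy 2×2 fission-matrix power iteration, not the MC implementation in McCARD:

```python
import numpy as np

def power_iteration(A, shift=None, iters=200):
    """Power iteration for the dominant eigenvalue of A. With a Wielandt
    shift k_e greater than k_eff, iterate on M = (I - A/k_e)^-1 A, whose
    eigenvalue spectrum is spread out (larger dominance ratio), then map
    the converged eigenvalue back to k_eff."""
    n = A.shape[0]
    M = A if shift is None else np.linalg.inv(np.eye(n) - A / shift) @ A
    x = np.ones(n)
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    lam = x @ (M @ x)                    # Rayleigh quotient at convergence
    if shift is not None:
        lam = lam / (1.0 + lam / shift)  # invert lam = k / (1 - k/k_e)
    return lam

A = np.array([[0.9, 0.3],
              [0.2, 0.8]])              # toy fission matrix, k_eff = 1.1
k_direct = power_iteration(A)
k_shifted = power_iteration(A, shift=1.5)
print(k_direct, k_shifted)              # both converge to 1.1
```

For this matrix the unshifted dominance ratio is 0.6/1.1 ≈ 0.55, while the shifted iteration's is about 0.24, so the shifted scheme converges in far fewer iterations; in the MC setting this shortened convergence is what the paper exploits to keep the IFP bookkeeping, and thus the memory footprint, small.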
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-01-01
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Optimization of path length stretching in Monte Carlo calculations for non-leakage problems
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands)
2005-07-01
Path length stretching (or exponential biasing) is a well-known variance reduction technique in Monte Carlo calculations. It can be especially useful in shielding problems where particles have to penetrate a lot of material before being tallied. Several authors have sought to optimize the path length stretching parameter for detection of the leakage of neutrons from a slab. There the adjoint function behaves as a single exponential function and can well be used to determine the stretching parameter. In this paper optimization is sought for a detector embedded in the system, which changes the adjoint function in the detector drastically. From the literature it is known that the combination of path length stretching and angular biasing can result in appreciable variance reduction. However, angular biasing is not generally available in general-purpose Monte Carlo codes and therefore we restrict ourselves to the application of pure path length stretching and finding optimum parameters for that. Nonetheless, the starting point for our research is the zero-variance scheme. In order to study the solution in detail the simplified monoenergetic two-direction model is adopted, which allows analytical solutions and can still be used in a Monte Carlo simulation. Knowing the zero-variance solution analytically, it is shown how optimum path length stretching parameters can be derived from it. It results in path length shrinking in the detector. Results for the variance in the detector response are shown in comparison with other patterns for the stretching parameter. The effect of anisotropic scattering on the path length stretching parameter is taken into account. (author)
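A minimal sketch of path length stretching for the classic slab-transmission case, using exponential biasing with the textbook weight correction; the slab and stretching parameters are illustrative and are not the optimized values derived in the paper:

```python
import math
import random

def transmission(sigma_t, thickness, n_histories, stretch, seed=1):
    """Estimate the uncollided transmission exp(-sigma_t * T) by sampling
    path lengths from a biased exponential with sigma_b = sigma_t*(1 - stretch)
    and scoring the weight w = (true pdf)/(biased pdf) when the slab is crossed."""
    rng = random.Random(seed)
    sigma_b = sigma_t * (1.0 - stretch)   # 0 <= stretch < 1 stretches paths
    total = total_sq = 0.0
    for _ in range(n_histories):
        s = -math.log(1.0 - rng.random()) / sigma_b   # biased free path
        if s > thickness:                 # particle crosses without colliding
            w = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)
            total += w
            total_sq += w * w
    mean = total / n_histories
    variance = total_sq / n_histories - mean * mean
    return mean, variance

exact = math.exp(-5.0)                    # sigma_t = 1, slab of 5 mean free paths
for stretch in (0.0, 0.5, 0.8):
    mean, var = transmission(1.0, 5.0, 100_000, stretch)
    print(stretch, mean, var)             # mean stays ~exp(-5); variance falls
```

The estimator remains unbiased for any stretching parameter because the weight exactly compensates the biased sampling density; only the variance changes, and it is this variance that the paper's zero-variance analysis optimizes (including the counterintuitive path length shrinking inside the detector).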
International Nuclear Information System (INIS)
Bahreyni Toossi, M.T.; Hashemi, S.M.; Momen Nezhad, M.
2008-01-01
In recent decades, cancer has been one of the main and ever-increasing causes of death in developed countries. Among the techniques used to improve the accuracy of radiotherapy is Monte Carlo simulation, whose high accuracy has been one of the main reasons for its widespread application. In this study, the MCNP-4C code was employed to simulate the electron mode of the Neptun 10 PC linac, and dosimetric quantities for conventional fields were both measured and calculated. Although the Neptun 10 PC linac is no longer licensed for installation in European and some other countries, nearly 10 of them have been installed in different centers around the country and are in operation. Therefore, in this circumstance, to improve the accuracy of treatment planning, Monte Carlo simulation of the Neptun 10 PC was recognized as a necessity. Simulated and measured values of depth dose curves and off-axis dose distributions were obtained for 6, 8 and 10 MeV electrons applied to four different field sizes: 6 × 6 cm², 10 × 10 cm², 15 × 15 cm² and 20 × 20 cm². The measurements were carried out with a Wellhofer-Scanditronix dose scanning system, a semiconductor detector and an ionization chamber. The results of this study have revealed that the values of the two main dosimetric quantities, depth dose curves and off-axis dose distributions, acquired by MCNP-4C simulation and the corresponding values achieved by direct measurements are in very good agreement (within 1% to 2% difference). In general, the very good consistency of simulated and measured results is good proof that the goal of this work has been accomplished. In other words, where measurements of some parameters are not practically achievable, MCNP-4C simulation can be implemented confidently. (author)
International Nuclear Information System (INIS)
Serikov, A.; Fischer, U.; Grosse, D.; Leichtle, D.; Majerle, M.
2011-01-01
The Monte Carlo (MC) method is the most suitable computational technique of radiation transport for shielding applications in fusion neutronics. This paper shares the results of the long-term experience of the fusion neutronics group at Karlsruhe Institute of Technology (KIT) in radiation shielding calculations with the MCNP5 code for the ITER fusion reactor, with emphasis on the use of several ITER project-driven computer programs developed at KIT. Two of them, McCad and R2S, have proven the most useful in radiation shielding analyses. The McCad graphical tool performs automatic conversion of MCNP models from the underlying CAD (CATIA) data files, while the R2S activation interface couples MCNP radiation transport with FISPACT activation calculations, allowing estimation of nuclear responses such as dose rate and nuclear heating after ITER reactor shutdown. The cell-based R2S scheme was applied in shutdown photon dose analysis for the design of the In-Vessel Viewing System (IVVS) and the Glow Discharge Cleaning (GDC) unit in ITER. The mesh-based R2S feature newly developed at KIT was successfully tested on shutdown dose rate calculations for the upper port in the Neutral Beam (NB) cell of ITER. The merits of the McCad graphical program have been broadly acknowledged by neutronics analysts, and its continuous improvement at KIT has made its operation stable and more convenient through a Graphical User Interface. Detailed 3D ITER neutronic modeling with the MCNP Monte Carlo method requires considerable computational resources, inevitably leading to parallel calculations on clusters. Performance assessments of MCNP5 parallel runs on the JUROPA/HPC-FF supercomputer cluster made it possible to find the optimal number of processors for ITER-type runs. (author)
Rodrigues, Anna; Sawkey, Daren; Yin, Fang-Fang; Wu, Qiuwen
2015-05-01
To develop a framework for accurate electron Monte Carlo dose calculation. In this study, comprehensive validations of vendor-provided electron beam phase space files for Varian TrueBeam linacs against measurement data are presented. In this framework, the Monte Carlo generated phase space files were provided by the vendor and used as input to the downstream plan-specific simulations including jaws, electron applicators, and water phantom, computed in the EGSnrc environment. The phase space files were generated based on open-field commissioning data. A subset of electron energies of 6, 9, 12, 16, and 20 MeV and open and collimated field sizes of 3 × 3, 4 × 4, 5 × 5, 6 × 6, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 cm² were evaluated. Measurements acquired with a CC13 cylindrical ionization chamber and an electron diode detector were compared with simulations from this framework for a water phantom geometry. The evaluation metrics include percent depth dose, orthogonal and diagonal profiles at depths R100, R50, Rp, and Rp+, for standard and extended source-to-surface distances (SSD), as well as cone and cut-out output factors. Agreement between measurement and Monte Carlo for the percent depth dose and orthogonal profiles was generally within 2% or 1 mm. The largest discrepancies were observed within 5 mm of the phantom surface. Differences in field size, penumbra, and flatness for the orthogonal profiles at depths R100, R50, and Rp were within 1 mm, 1 mm, and 2%, respectively. Orthogonal profiles at SSDs of 100 and 120 cm showed the same level of agreement. Cone and cut-out output factors agreed well, with maximum differences within 2.5% for 6 MeV and 1% for all other energies. Cone output factors at extended SSDs of 105, 110, 115, and 120 cm exhibited similar levels of agreement. We have presented a Monte Carlo simulation framework for electron beam dose calculations for Varian TrueBeam linacs. Electron beam energies of 6 to 20 MeV for open and collimated
International Nuclear Information System (INIS)
Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.
2009-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of applying the method to enable the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
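The variance-reduction principle behind CADIS-type methods, biasing the sampling toward important regions and compensating with statistical weights, can be illustrated in one dimension. The sketch below is a generic importance-sampling toy with invented numbers, not the FW-CADIS implementation: it estimates deep-penetration transmission through a purely absorbing slab by stretching the sampled flight distance and carrying the likelihood-ratio weight.

```python
import math, random

def transmit_analog(sigma, L, n, rng):
    # Analog: sample a free flight x ~ Exp(sigma); score 1 if the particle
    # crosses the slab of thickness L without colliding. Almost never scores
    # for a thick slab, so the variance is terrible.
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma) > L)
    return hits / n

def transmit_biased(sigma, L, n, rng, sigma_b):
    # Importance-sampled: draw from a stretched exponential Exp(sigma_b)
    # (sigma_b < sigma pushes samples toward deep penetration, as an
    # adjoint-informed biasing would) and carry the likelihood-ratio weight.
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > L:
            # weight = true pdf / biased pdf at the sampled point
            total += (sigma * math.exp(-sigma * x)) / (sigma_b * math.exp(-sigma_b * x))
    return total / n

rng = random.Random(42)
sigma, L = 1.0, 10.0            # 10 mean free paths: analog rarely scores
exact = math.exp(-sigma * L)    # ~4.5e-5
analog = transmit_analog(sigma, L, 20000, rng)
est = transmit_biased(sigma, L, 20000, rng, sigma_b=0.1)
print(exact, analog, est)
```

Both estimators are unbiased, but the biased one converges with a few percent relative error at sample counts where the analog estimate is still essentially zero or one lucky count.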
SU-F-T-371: Development of a Linac Monte Carlo Model to Calculate Surface Dose
Energy Technology Data Exchange (ETDEWEB)
Prajapati, S; Yan, Y; Gifford, K [UT MD Anderson Cancer Center, Houston, TX (United States)
2016-06-15
Purpose: To generate and validate a linac Monte Carlo (MC) model for surface dose prediction. Methods: BEAMnrc V4-2.4.0 was used to model 6 and 18 MV photon beams for a commercially available linac. DOSXYZnrc V4-2.4.0 calculated 3D dose distributions in water. Percent depth dose (PDD) and beam profiles were extracted for comparison to measured data. Surface dose and dose at depths in the buildup region were measured with radiochromic film at 100 cm SSD for 4 × 4 cm² and 10 × 10 cm² collimator settings, for open and MLC-collimated fields. For the 6 MV beam, films were placed at depths ranging from 0.015 cm to 2 cm, and for 18 MV, from 0.015 cm to 3.5 cm, in Solid Water™. Films were calibrated for both photon energies at their respective dmax. PDDs and profiles were extracted from the film and compared to the MC data, and the MC model was adjusted to match the measured PDDs and profiles. Results: For the 6 MV beam, the mean error (ME) in PDD between film and MC was 1.9% for open fields and 2.4% for MLC fields. For the 18 MV beam, the ME in PDD was 2% for open fields and 3.5% for MLC fields. For the 6 MV beam, the average root mean square (RMS) deviation over the central 80% of the beam profile was 1.5% for open fields and 1.6% for MLC fields. For the 18 MV beam, the maximum RMS was 3% for open fields and 3.1% for MLC fields. Conclusion: The MC model of the linac agreed to within 4% of film measurements for depths ranging from the surface to dmax, and can therefore predict surface dose for clinical applications. Future work will focus on adjusting the linac MC model to reduce RMS error and improve accuracy.
Microscopic calculation of level densities: the shell model Monte Carlo approach
International Nuclear Information System (INIS)
Alhassid, Yoram
2012-01-01
The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analyses of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities, where levels are counted without including their spin degeneracies. A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.
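The distinction drawn above between state and level densities can be made concrete: the state density counts each level (2J+1) times, so the level density follows once the spin distribution is known. The sketch below, with purely illustrative numbers, removes the magnetic degeneracy from a total state density using a spin-cutoff distribution; the spin-cutoff form and all parameter values are assumptions, not taken from the SMMC work.

```python
import math

def spin_distribution(J_values, sigma_c):
    # Spin-cutoff model: fraction of *levels* with spin J,
    # P(J) proportional to (2J+1) * exp(-(J+1/2)^2 / (2 sigma_c^2))
    w = [(2 * J + 1) * math.exp(-(J + 0.5) ** 2 / (2 * sigma_c ** 2))
         for J in J_values]
    s = sum(w)
    return [x / s for x in w]

# hypothetical illustration: total state density at some excitation energy
rho_state = 1.0e6          # states per MeV (spin degeneracies included)
sigma_c = 4.0              # assumed spin-cutoff parameter
Js = list(range(0, 30))    # even-even nucleus, integer spins
P = spin_distribution(Js, sigma_c)

# rho_state = rho_level * <2J+1>, so the level density (each level counted
# once) is the state density divided by the mean degeneracy per level
mean_degeneracy = sum(p * (2 * J + 1) for p, J in zip(P, Js))
rho_level = rho_state / mean_degeneracy
print(rho_level)
```

The ratio rho_state / rho_level is just ⟨2J+1⟩ over the level population, which is why a spin projection method is all that is needed to convert one density into the other.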
Energy Technology Data Exchange (ETDEWEB)
Kuijper, J.C.; Oppe, J.; Klein Meulekamp, R.; Koning, H. [NRG - Fuels, Actinides and Isotopes group, Petten (Netherlands)
2005-07-01
Some years ago a methodology was developed at NRG for the calculation of 'density-to-density' and 'one-group cross section-to-density' sensitivity matrices and covariance matrices for final nuclide densities, for burnup schemes consisting of multiple sets of flux/spectrum and burnup calculations. The applicability of the methodology was then demonstrated by calculations of BR3 MOX pin irradiation experiments employing multi-group cross section uncertainty data from the EAF4 data library. A recent development is the extension of this methodology to enable its application in combination with the OCTOPUS-MCNP-FISPACT/ORIGEN Monte Carlo burnup scheme, which required some extensions to the sensitivity matrix calculation tool CASEMATE. The extended methodology was applied to the 'HTR Plutonium Cell Burnup Benchmark' to calculate the uncertainties (covariances) in the final densities, insofar as these uncertainties are caused by uncertainties in cross sections. Up to 600 MWd/kg these uncertainties are larger than the differences between the code systems. However, it should be kept in mind that the calculated uncertainties are based on EAF4 uncertainty data; it is not clear beforehand what a proper set of associated (MCNP) cross sections and covariances would yield in terms of final uncertainties in calculated densities. This will be investigated, by the same formalism, once these data become available. It should be noted that the studies performed to date are mainly concerned with the influence of uncertainties in cross sections. The influence of uncertainties in the decay constants, although included in the formalism, is not considered further, and the influence of other uncertainties (such as geometrical modelling approximations) has been left out of consideration for the time being. (authors)
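The propagation of cross-section covariances to final-density covariances through sensitivity matrices follows the standard sandwich rule, cov(N) = S cov(σ) Sᵀ. A minimal numpy sketch with hypothetical 2×2 matrices (invented numbers, not CASEMATE or EAF4 data):

```python
import numpy as np

# hypothetical sensitivity of two final nuclide densities to two
# one-group cross sections: S[i, j] = d(ln N_i) / d(ln sigma_j)
S = np.array([[0.8, -0.2],
              [0.1,  0.6]])

# hypothetical relative covariance matrix of the cross sections
# (diagonal: 5% and 10% relative standard deviations, weak correlation)
cov_sigma = np.array([[0.05**2, 0.001],
                      [0.001,   0.10**2]])

# sandwich rule: relative covariance of the final densities
cov_N = S @ cov_sigma @ S.T
rel_std_N = np.sqrt(np.diag(cov_N))
print(rel_std_N)  # relative 1-sigma uncertainties of the two densities
```

For a multi-step burnup scheme, the chain rule composes the per-step sensitivity matrices before the sandwich is applied, which is essentially what a tool of this kind automates.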
Standard deviation of local tallies in global Monte Carlo calculation of nuclear reactor core
International Nuclear Information System (INIS)
Ueki, Taro
2010-01-01
Time series methodology has been studied to assess the feasibility of statistical error estimation in the continuous-space, continuous-energy Monte Carlo calculation of a three-dimensional whole reactor core. The noise propagation was examined, and the fluctuation of track length tallies for local fission rate and power has been formally shown to be represented by an autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. Therefore, ARMA(p,p-1) fitting was applied to estimate the real standard deviation of the power of fuel assemblies at particular heights. Numerical results indicate that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into distributed versions of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method with a batch size larger than 100 and smaller than 200 cycles for a 1,100 MWe pressurized water reactor. (author)
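The batch method used above as the reference can be sketched as follows: grouping autocorrelated cycle tallies into batches makes the batch means nearly independent, so their scatter yields a realistic standard deviation of the mean where the naive cycle-wise estimate is badly optimistic. The AR(1) surrogate series and all parameters below are illustrative assumptions, not reactor data.

```python
import random, statistics

def batch_std_of_mean(x, batch_size):
    # Split the cycle-wise tallies into batches; the sample standard
    # deviation of the batch means estimates the true error of the mean
    # even when successive cycles are correlated.
    means = [statistics.fmean(x[i:i + batch_size])
             for i in range(0, len(x) - batch_size + 1, batch_size)]
    return statistics.stdev(means) / len(means) ** 0.5

rng = random.Random(0)
# AR(1) surrogate for an autocorrelated fission-rate tally
rho, n = 0.8, 20000
x, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + rng.gauss(0.0, 1.0)
    x.append(prev)

naive = statistics.stdev(x) / n ** 0.5   # ignores correlation: too small
batched = batch_std_of_mean(x, 100)      # batch size of ~100 cycles
print(naive, batched)
```

For this AR(1) process the true error of the mean exceeds the naive estimate by a factor of about sqrt((1+rho)/(1-rho)) = 3, which the batched estimate recovers; an ARMA(3,2) fit plays the same role without having to choose a batch size.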
Criticality calculation in TRIGA MARK II PUSPATI Reactor using Monte Carlo code
International Nuclear Information System (INIS)
Rafhayudi Jamro; Redzuwan Yahaya; Abdul Aziz Mohamed; Eid Abdel-Munem; Megat Harun Al-Rashid; Julia Abdul Karim; Ikki Kurniawan; Hafizal Yazid; Azraf Azman; Shukri Mohd
2008-01-01
A Monte Carlo simulation of the Malaysian nuclear reactor has been performed using the MCNP Version 5 code. The purpose of the work is the determination of the multiplication factor (keff) for the TRIGA Mark II research reactor in Malaysia based on the Monte Carlo method. A complete model of the TRIGA Mark II PUSPATI Reactor (RTP) was constructed, with the core modeled as closely as possible to the real core, and keff was calculated for two cases: control rods fully withdrawn and fully inserted. With the control fuel rods fully inserted, the keff value of 0.98370±0.00054 obtained from MCNP5 indicates that the RTP reactor was in a subcritical condition. With the control fuel rods fully withdrawn, the keff value of 1.10773±0.00083 indicates that the RTP reactor was in a supercritical condition. (Author)
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
Fission yield calculation using toy model based on Monte Carlo simulation
International Nuclear Information System (INIS)
Jubaidah; Kurniadi, Rizal
2015-01-01
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments, and these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analysed with a Monte Carlo simulation. The results show that varying σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability, while varying the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
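The double-Gaussian picture can be sampled directly. The sketch below draws fragment masses from two Gaussian humps and tabulates the light-fragment average; all parameter values (A, μ_L, μ_R, σ_L, σ_R) are hypothetical choices for illustration, not the paper's fitted values.

```python
import random

# hypothetical toy-model parameters for a fissioning system of A nucleons
A = 236
mu_L, mu_R = 96.0, 140.0        # means of the light/heavy Gaussian humps
sigma_L, sigma_R = 5.0, 5.0     # their standard deviations
R_c = (mu_L + mu_R) / 2.0       # scission point where the curves cross

def sample_fragment(rng):
    # Pick one of the two Gaussian humps with equal probability and draw
    # a fragment mass from it; the partner fragment carries the rest of A.
    if rng.random() < 0.5:
        m = rng.gauss(mu_L, sigma_L)
    else:
        m = rng.gauss(mu_R, sigma_R)
    return m, A - m

rng = random.Random(1)
light = [min(pair) for pair in (sample_fragment(rng) for _ in range(10000))]
mean_light = sum(light) / len(light)
print(mean_light)   # near mu_L for an asymmetric split
```

Shifting μ or widening σ in this sampler moves the light-fragment average and broadens the yield distribution, which is the qualitative behaviour the abstract reports.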
Study of the IMRT interplay effect using a 4DCT Monte Carlo dose calculation.
Jensen, Michael D; Abdellatif, Ady; Chen, Jeff; Wong, Eugene
2012-04-21
Respiratory motion may lead to dose errors when treating thoracic and abdominal tumours with radiotherapy. The interplay between complex multileaf collimator patterns and patient respiratory motion could result in unintuitive dose changes. We have developed a treatment reconstruction simulation computer code that accounts for interplay effects by combining multileaf collimator controller log files, respiratory trace log files, 4DCT images and a Monte Carlo dose calculator. Two three-dimensional (3D) IMRT step-and-shoot plans, a concave target and integrated boost were delivered to a 1D rigid motion phantom. Three sets of experiments were performed with 100%, 50% and 25% duty cycle gating. The log files were collected, and five simulation types were performed on each data set: continuous isocentre shift, discrete isocentre shift, 4DCT, 4DCT delivery average and 4DCT plan average. Analysis was performed using 3D gamma analysis with passing criteria of 2%, 2 mm. The simulation framework was able to demonstrate that a single fraction of the integrated boost plan was more sensitive to interplay effects than the concave target. Gating was shown to reduce the interplay effects. We have developed a 4DCT Monte Carlo simulation method that accounts for IMRT interplay effects with respiratory motion by utilizing delivery log files.
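The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement criterion into a single pass/fail metric. A minimal 1D version of the gamma index at 2%/2 mm might look like the following brute-force sketch (a generic illustration, not the authors' 3D implementation):

```python
def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.02, dist_tol=2.0):
    # 1D gamma index: for each reference point, minimise the combined
    # dose-difference / distance-to-agreement metric over the evaluated
    # distribution. dose_tol is relative to the reference maximum,
    # dist_tol is in the same units as the positions (mm here).
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = min(
            ((ed - rd) / (dose_tol * d_max)) ** 2
            + ((ep - rp) / dist_tol) ** 2
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(best ** 0.5)
    return gammas

# identical distributions pass everywhere (gamma == 0)
pos = [0.5 * i for i in range(21)]            # mm grid
dose = [1.0 - 0.02 * i for i in range(21)]    # arbitrary sloping profile
g = gamma_index(pos, dose, pos, dose)
pass_rate = sum(1 for v in g if v <= 1.0) / len(g)
print(pass_rate)
```

A point passes when its gamma is at most 1, and the pass rate over all points is the percentage figure quoted in gamma-analysis comparisons.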
Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation
International Nuclear Information System (INIS)
Razaie Rayeni Nejad, M. R.
2012-01-01
CR-39 detectors are widely used for measuring Radon and its progeny in air. In this paper, the possibility of using CR-39 for direct measurement of Radon and progeny in water is investigated using Monte Carlo simulation. Assuming random positions and emission angles for the alpha particles emitted by Radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable depth of chemical etching of CR-39 in air and water were calculated. In this simulation, a range of data was obtained from the SRIM2008 software. The calibration factor of CR-39 in water was calculated as 6.6 (kBq.d/m³)/(track/cm²), which corresponds to the EPA standard level of Radon concentration in water (10-11 kBq/m³). Replacing the CR-39 with skin, the volume affected by Radon and progeny was determined to be 2.51 mm³ per m² of skin area, and the annual dose conversion factor for Radon and progeny was calculated to be between 8.8-58.8 nSv/(Bq.h/m³). Using CR-39 for Radon measurement in water can thus be beneficial.
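One geometric ingredient of such a calibration, the registration efficiency, can be sketched with a simple Monte Carlo: alphas emitted at random depths with isotropic directions count only if they reach the detector surface within their range and within a cutoff angle of the surface normal (a simplified stand-in for the CR-39 etching criterion). The range and cutoff angle below are placeholders, not the SRIM-derived values used in the paper.

```python
import math, random

def detection_efficiency(n, alpha_range=35.0, cutoff_angle_deg=35.0, rng=None):
    # Emission points uniform in depth within one alpha range of the
    # surface; directions isotropic over 4*pi. An alpha is scored if it
    # reaches the surface (path length <= its range) with its direction
    # within cutoff_angle_deg of the surface normal.
    rng = rng or random.Random(7)
    cos_cut = math.cos(math.radians(cutoff_angle_deg))
    hits = 0
    for _ in range(n):
        depth = rng.uniform(0.0, alpha_range)   # micrometres, hypothetical
        cos_t = rng.uniform(-1.0, 1.0)          # isotropic: cos uniform
        if cos_t > 0 and depth / cos_t <= alpha_range and cos_t >= cos_cut:
            hits += 1
    return hits / n

eff = detection_efficiency(100000)
print(eff)
```

Dividing a measured track density by an efficiency of this kind (folded with the source activity and exposure time) is what produces a calibration factor in (kBq.d/m³)/(track/cm²).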
Criticality Analysis of TCA Critical Lattices with MCNP-4C Monte Carlo Calculation
International Nuclear Information System (INIS)
Zuhair
2002-01-01
The use of uranium-plutonium mixed oxide (MOX) fuel in electricity-generating light water reactors (PWR, BWR) is being planned in Japan. Therefore, accuracy evaluations of neutronic analysis codes for MOX cores have been carried out by many scientists and reactor physicists, and benchmark evaluations for the TCA have been done using various calculation methods. Monte Carlo has become the most reliable method for predicting the criticality of various reactor types, and in this analysis the MCNP-4C code was chosen because of its various strengths. Overall, the MCNP-4C calculation for the TCA core with 38 MOX critical lattice configurations gave results with high accuracy. The JENDL-3.2 library showed results significantly closer to those of ENDF/B-V. The keff values calculated with the ENDF/B-VI library were underestimated, while the ENDF/B-V library gave the best estimation. It can be concluded that MCNP-4C calculation, especially with the ENDF/B-V and JENDL-3.2 libraries, is a sound choice for the core design of nuclear power plants utilizing MOX fuel
Monte Carlo calculation of energy deposition and ionization yield for high energy protons
International Nuclear Information System (INIS)
Wilson, W.E.; McDonald, J.C.; Coyne, J.J.; Paretzke, H.G.
1985-01-01
Recent calculations of event size spectra for neutrons use a continuous slowing down approximation model for the energy losses experienced by secondary charged particles (protons and alphas) and thus do not allow for straggling effects. Discrepancies between the calculations and experimental measurements are thought to be, in part, due to the neglect of straggling. A tractable way of including stochastics in radiation transport calculations is via the Monte Carlo method and a number of efforts directed toward simulating positive ion track structure have been initiated employing this technique. Recent results obtained with our updated and extended MOCA code for charged particle track structure are presented here. Major emphasis has been on calculating energy deposition and ionization yield spectra for recoil proton crossers since they are the most prevalent event type at high energies (>99% at 14 MeV) for small volumes. Neutron event-size spectra can be obtained from them by numerical summing and folding techniques. Data for ionization yield spectra are presented for simulated recoil protons up to 20 MeV in sites of diameters 2-1000 nm
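The straggling the abstract attributes to the CSDA's neglect of stochastics can be seen in a toy compound-Poisson model: sampling a random number of collisions with random energy transfers reproduces the mean of the CSDA result but adds the spread that a continuous-slowing-down treatment cannot. All numbers below are illustrative, not MOCA cross sections.

```python
import math, random

def poisson(lam, rng):
    # Knuth's multiplication method, adequate for modest lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def csda_deposit(n_mean, loss_mean):
    # Continuous slowing down approximation: every proton deposits
    # exactly the mean energy -> zero straggling by construction
    return n_mean * loss_mean

def sampled_deposit(n_mean, loss_mean, rng):
    # Stochastic track: Poisson number of collisions, each with an
    # exponentially distributed energy transfer -> straggled total
    n = poisson(n_mean, rng)
    return sum(rng.expovariate(1.0 / loss_mean) for _ in range(n))

rng = random.Random(3)
n_mean, loss_mean = 20.0, 50.0   # hypothetical: ~20 collisions of ~50 eV
deposits = [sampled_deposit(n_mean, loss_mean, rng) for _ in range(5000)]
mean_dep = sum(deposits) / len(deposits)
spread = (sum((d - mean_dep) ** 2 for d in deposits) / len(deposits)) ** 0.5
print(csda_deposit(n_mean, loss_mean), mean_dep, spread)
```

The sampled mean matches the CSDA value, but the event-by-event distribution has a substantial width; in small sites this width is exactly what event-size spectra measure and what CSDA-based calculations miss.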
International Nuclear Information System (INIS)
Jabbari, N.; Hashemi-Malayeri, B.; Farajollahi, A. R.; Kazemnejad, A.
2007-01-01
In radiotherapy with electron beams, scattered radiation from an electron applicator influences the dose distribution in the patient. The contribution of this radiation to the patient dose is significant, even in modern accelerators. In most of radiotherapy treatment planning systems, this component is not explicitly included. In addition, the scattered radiation produced by applicators varies based on the applicator design as well as the field size and distance from the applicators. The aim of this study was to calculate the amount of scattered dose contribution from applicators. We also tried to provide an extensive set of calculated data that could be used as input or benchmark data for advanced treatment planning systems that use Monte Carlo algorithms for dose distribution calculations. Electron beams produced by a NEPTUN 10PC medical linac were modeled using the BEAMnrc system. Central axis depth dose curves of the electron beams were measured and calculated, with and without the applicators in place, for different field sizes and energies. The scattered radiation from the applicators was determined by subtracting the central axis depth dose curves obtained without the applicators from that with the applicator. The results of this study indicated that the scattered radiation from the electron applicators of the NEPTUN 10PC is significant and cannot be neglected in advanced treatment planning systems. Furthermore, our results showed that the scattered radiation depends on the field size and decreases almost linearly with depth. (author)
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has become known that it can solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, and therefore a projection operator is used to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energy in the resulting basis is evaluated, with basis states selectively adopted. The symmetry is discussed, and a method of decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces was devised. The calculation process is illustrated with the example of 50Mn nuclei. For the level structure of 48Cr, whose exact energies are known, the calculation reproduces the absolute eigenenergies to within 200 keV. 56Ni is a self-conjugate nucleus with Z=N=28; the results of shell model calculations of the structure of the 56Ni nucleus using nuclear-model interactions are reported. (K.I.)
Application of a Monte Carlo linac model in routine verifications of dose calculations
International Nuclear Information System (INIS)
Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.
2015-01-01
The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code, taking as reference the optimal beam parameter values (energy and FWHM) obtained previously. Dose deposition calculations were done in water phantoms for the complex geometries typically used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between leaves were analyzed from the calculations in water. Simulations were likewise performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc against the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All results showed close agreement with measurements, falling within tolerance limits. These results open the possibility of using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)
Cross sections needed for investigations into track phenomena and Monte-Carlo calculations
International Nuclear Information System (INIS)
Paretzke, H.G.
1983-01-01
Investigations into basic radiation action mechanisms as well as into applied radiation transport problems (e.g. electron microscopy) greatly benefit from detailed computer simulations of charged particle track structures in matter. The first and in fact most important and most difficult step in any such calculation is the derivation of reliable cross sections for the most relevant interaction processes in the material(s) under consideration. The second step in radiation transport calculations is the testing of results or intermediate results for quantitative or qualitative consistency with other experimental or theoretical information (e.g. yields, backscatter factors). This paper discusses the types of the most important collision cross sections for studies on track phenomena by detailed Monte-Carlo calculations, the necessary accuracy of such data and various means of consistency checks of calculated results. This will be done mainly with examples taken from radiation physics as applied to dosimetric and biological problems (i.e. to gaseous and condensed targets). 12 references, 8 figures
Energy Technology Data Exchange (ETDEWEB)
Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw
2003-12-01
A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air at a reference air density using the EGS4 Monte Carlo code; these kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems, and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others, but required only a small fraction of the computation time of the Monte Carlo calculations. A scheme for applying the IFCK method to a variety of source collimation geometries is also presented in this study.
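The integral-first-collision-kernel idea can be sketched numerically for a vertically collimated beam: integrate the first-collision probability density along the beam line and attach a single-scatter point kernel from each collision site to the detector. The attenuation coefficient and geometry below are rough placeholders (no buildup, isotropic scatter), not the EGS4-computed kernels of the paper.

```python
import math

def ifck_dose(det_x, det_y, sigma=0.0065, n=2000, s_max=2000.0):
    # Integral over first-collision sites along a vertical beam: a photon
    # first collides at height s with probability density sigma*exp(-sigma*s);
    # from there an isotropic single-scatter contribution reaches the
    # detector with 1/(4*pi*r^2) spreading and air attenuation exp(-sigma*r).
    # sigma is a placeholder air attenuation coefficient in 1/m.
    ds = s_max / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds                       # scatter height, m
        r = math.hypot(det_x, det_y - s)         # scatter point -> detector
        collide = sigma * math.exp(-sigma * s)   # first-collision density
        total += collide * math.exp(-sigma * r) / (4 * math.pi * r ** 2) * ds
    return total

# relative skyshine signal at a ground-level detector 100 m from the beam
print(ifck_dose(det_x=100.0, det_y=0.0))
```

Because the kernel is evaluated by a one-dimensional quadrature rather than by transporting particles, the cost is a tiny fraction of a Monte Carlo run, which is the trade the IFCK method exploits.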
International Nuclear Information System (INIS)
Kramer, R.; Zankl, M.; Williams, G.; Drexler, G.
1982-12-01
With the aid of a Monte Carlo program, the average dose received by single organs, organ groups, and larger or smaller parts of the body from an irradiation defined by the irradiation geometry and photon energy can be determined. The phantom, in connection with the Monte Carlo program, can thus be used for several purposes, for example: - calculation of dose from occupational exposures - calculation of dose from diagnostic procedures - calculation of dose from radiotherapy procedures. (orig.)
One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity
Energy Technology Data Exchange (ETDEWEB)
Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry, E-mail: zzhong@anl.gov, E-mail: alby@anl.gov, E-mail: gohar@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, IL (United States)
2011-07-01
The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and the reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probabilities of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate the reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integrals of the reaction rates induced by the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors, because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have
One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity
International Nuclear Information System (INIS)
Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry
2011-01-01
The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and the reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probabilities of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate the reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integrals of the reaction rates induced by the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors, because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have been
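The area-ratio (Sjostrand-type) estimate described above can be sketched on a synthetic pulse response: the delayed-neutron plateau is estimated from the tail of the time histogram, the prompt area is the remainder, and the reactivity in dollars follows from their ratio. The pulse shape and numbers below are made up for illustration, not the paper's data.

```python
import numpy as np

def area_ratio_reactivity(counts, dt, tail_start):
    """Area-ratio estimate of reactivity in dollars from the detector
    response to one source pulse.  'tail_start' indexes the region where
    the prompt burst has died away, leaving the delayed plateau."""
    delayed_level = counts[tail_start:].mean()        # flat delayed background
    total_area = counts.sum() * dt
    delayed_area = delayed_level * counts.size * dt
    prompt_area = total_area - delayed_area
    return -prompt_area / delayed_area                # rho($) = -Ap/Ad

# Synthetic pulse: decaying prompt exponential on a constant delayed plateau.
dt = 1e-4
t = np.arange(501) * dt
counts = 1.0e4 * np.exp(-t / 0.002) + 50.0
rho = area_ratio_reactivity(counts, dt, tail_start=400)
print(rho)   # negative: the pulsed system is subcritical
```

The one-run method of the paper obtains both areas from a single MCNPX simulation by tagging prompt and delayed contributions; the fragment above only shows the final area arithmetic.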
International Nuclear Information System (INIS)
Sato, Satoshi
2003-09-01
In a tokamak-type DT nuclear fusion reactor, there are various types of slits and ducts in the blanket and the vacuum vessel. The helium production at the rewelding locations of the blanket and the vacuum vessel, and the nuclear properties of the superconducting TF coil, e.g. the nuclear heating rate in the coil winding pack, are enhanced by radiation streaming through the slits and ducts; these are critical concerns in the shielding design. The decay gamma-ray dose rate around the ducts penetrating the blanket and the vacuum vessel is also enhanced by radiation streaming through the ducts, which is likewise a critical concern from the viewpoint of human access to the cryostat during maintenance. Evaluating these nuclear properties with good accuracy requires three-dimensional Monte Carlo calculations, which in turn require long calculation times. Therefore, the development of an effective, simple design evaluation method for radiation streaming is substantially important. This study aims to establish a systematic evaluation method for the nuclear properties of the blanket, the vacuum vessel and the toroidal field (TF) coil, taking into account radiation streaming through various types of slits and ducts, based on three-dimensional Monte Carlo calculations with the MCNP code, and for the decay gamma-ray dose rates around the penetrating ducts. The present thesis describes three topics in five chapters, as follows: 1) In Chapter 2, the results calculated by the Monte Carlo code MCNP are compared with those of the Sn code DOT3.5 for radiation streaming in a tokamak-type fusion reactor, to validate the results of the Sn calculations. From this comparison, the uncertainties of the Sn results arising from the ray effect and from the approximation of the geometry are investigated, to determine whether two-dimensional Sn calculations can be applied instead of Monte Carlo calculations. Through the study, it can be concluded that the
Propagation of Nuclear Data Uncertainties in Integral Measurements by Monte-Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Noguere, G.; Bernard, D.; De Saint-Jean, C. [CEA Cadarache, 13 - Saint Paul lez Durance (France)
2006-07-01
Full text of the publication follows: The generation of multi-group cross sections together with relevant uncertainties is fundamental to assessing the quality of integral data. The key pieces of information needed to propagate the microscopic experimental uncertainties to macroscopic reactor calculations are (1) the experimental covariance matrices, (2) the correlations between the parameters of the model and (3) the covariance matrices for the multi-group cross sections. The propagation of microscopic errors by the Monte Carlo technique was applied to determine the accuracy of the integral trends provided by the OSMOSE experiment carried out in the MINERVE reactor of CEA Cadarache. The technique consists of coupling resonance shape analysis and deterministic codes. The integral trend and its accuracy obtained for the {sup 237}Np(n,{gamma}) reaction will be presented. (author)
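The Monte Carlo propagation step can be sketched in a few lines: parameters are sampled from the multivariate normal distribution defined by their covariance matrix, and the model response is evaluated for each sample. The two-parameter model and covariance below are purely illustrative, not OSMOSE data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical integral response depending on two resonance parameters.
def integral_response(p):
    return 3.0 * p[0] + 0.5 * p[1] ** 2

p_best = np.array([1.0, 2.0])           # best-fit parameter values
cov = np.array([[0.01, 0.002],          # experimental covariance matrix,
                [0.002, 0.04]])         # including parameter correlations

# Monte Carlo propagation: sample parameters, evaluate the model per sample.
samples = rng.multivariate_normal(p_best, cov, size=20000)
responses = np.array([integral_response(p) for p in samples])

print(responses.mean(), responses.std())   # integral trend and its accuracy
```

Note that the off-diagonal covariance terms (item (2) in the abstract) are carried automatically by sampling from the full matrix rather than from independent one-dimensional distributions.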
DEMONR, Monte-Carlo Shielding Calculation for Neutron Flux and Neutron Spectra, Teaching Program
International Nuclear Information System (INIS)
Courtney, J. C.
1987-01-01
1 - Description of problem or function: DEMONR treats the behavior of neutrons in a slab shield. It is frequently used as a teaching tool. 2 - Method of solution: An unbiased Monte Carlo code calculates the number, energy, and direction of neutrons that penetrate or are reflected from a shield. 3 - Restrictions on the complexity of the problem: Only one shield may be used in each problem. The shield material may be a single element or a homogeneous mixture of elements with a single effective atomic weight. Only elastic scattering and neutron capture processes are allowed. The source is a point located on one face of the slab. It provides a cosine distribution of current. Monoenergetic or fission spectrum neutrons may be selected
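A one-group toy version of such a teaching slab code can be sketched in Python (the original is a standalone unbiased Monte Carlo program; the cross sections and slab thickness below are made up): a cosine-law current enters one face, and each history ends in reflection, transmission, or capture, with only elastic scattering and capture allowed, as in DEMONR.

```python
import math, random

random.seed(0)

SIGMA_T, SIGMA_S = 1.0, 0.7   # total and scattering macroscopic XS (1/cm), illustrative
THICKNESS = 3.0               # slab thickness (cm), illustrative

def history():
    x, mu = 0.0, math.sqrt(random.random())   # cosine-distributed entering current
    while True:
        x += mu * (-math.log(random.random()) / SIGMA_T)   # sampled free flight
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if random.random() > SIGMA_S / SIGMA_T:            # collision: capture?
            return "captured"
        mu = 2.0 * random.random() - 1.0                   # isotropic elastic scatter

N = 20000
tally = {"reflected": 0, "transmitted": 0, "captured": 0}
for _ in range(N):
    tally[history()] += 1
print({k: v / N for k, v in tally.items()})
```

As in the teaching code, the three tallied fractions must sum to one, which makes a convenient consistency check for students.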
Estimation of Adjoint-Weighted Kinetics Parameters in Monte Carlo Wielandt Calculations
International Nuclear Information System (INIS)
Choi, Sung Hoon; Shim, Hyung Jin
2013-01-01
The effective delayed neutron fraction, β_eff, and the prompt neutron generation time, Λ, in the point kinetics equations are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics-parameter estimation methods using the self-consistent adjoint flux calculated in the MC forward simulations have been developed and successfully applied to research reactor analyses. However, these adjoint estimation methods, which are based on a cycle-by-cycle genealogical table, require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation in which the pedigree of a single history is utilized by applying the MC Wielandt method. The effectiveness of the new method is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and the Godiva critical facility
Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport
International Nuclear Information System (INIS)
Seidel, R.
1995-01-01
The versatile MCNP-3B Monte Carlo code, written in FORTRAN 77 for simulation of the radiation transport of neutral particles, has had essential parts vectorized and parallelized without compromising its versatility. The vectorization does not depend on a specific computer. Several sample tasks were selected in order to test the vectorized MCNP-3B code against the scalar MCNP-3B code. The samples are representative of the 3-D calculations to be performed for simulation of radiation transport in neutron and reactor physics: (1) 4π neutron detector; (2) high-energy calorimeter; (3) PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)). (orig./HP) [de]
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were applied to obtain theoretical efficiency curves. Since discrepancies were found between the experimental and calculated data when the manufacturer's detector parameters were used, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.
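A common semi-empirical approach of the kind mentioned above fits the logarithm of the full-energy-peak efficiency as a polynomial in the logarithm of energy; the sketch below uses invented efficiency values, not the paper's measurements.

```python
import numpy as np

# Illustrative full-energy-peak efficiencies at calibration energies (keV).
E = np.array([59.5, 81.0, 122.1, 344.3, 661.7, 1173.2, 1332.5, 1408.0])
eff = np.array([0.012, 0.020, 0.028, 0.015, 0.009, 0.006, 0.0055, 0.0052])

# Semi-empirical form: ln(eff) as a cubic polynomial in ln(E).
coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)
fit = lambda energy: np.exp(np.polyval(coeffs, np.log(energy)))

# Relative deviation (%) of the fitted curve at the calibration points.
dev = 100.0 * (fit(E) - eff) / eff
print(dev.round(1))
```

In practice such a fit interpolates the efficiency between calibration lines; the Monte Carlo models in the paper serve the complementary role of extrapolating to geometries that were not measured.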
Monte Carlo dose calculations for BNCT treatment of diffuse human lung tumours
International Nuclear Information System (INIS)
Altieri, S.; Bortolussi, S.; Bruschi, P.
2006-01-01
In order to test the possibility of applying BNCT to diffuse lung tumours, dose distribution calculations were made. The simulations were performed with the Monte Carlo code MCNP.4c2, using the male computational phantom Adam, version 07/94. Volumes of interest were voxelized for the tally requests, and results were obtained for tissues with and without boron. Different collimated neutron sources were tested in order to establish the proper energies, as well as single and multiple beams, to maximize neutron-flux uniformity inside the target organs. Flux and dose distributions are reported. The use of two opposite collimated epithermal neutron beams ensures good dose homogeneity inside the lungs, with a substantially lower radiation dose delivered to the surrounding structures. (author)
International Nuclear Information System (INIS)
Wang, Ruihong; Yang, Shulin; Pei, Lucheng
2011-01-01
The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that treats the emission point as a sampling station is presented. Its main advantage is choosing the most suitable number of samples from the emission-point station so as to minimize the total cost of the random-walk process. Further, a related importance sampling method is also derived. Its main principle is to define an importance function for the response due to the particle state and to make the number of emission-particle samples proportional to this importance function. The numerical results show that the adaptive method with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the related importance sampling method gives satisfactory results as well. (author)
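The general principle of importance sampling for deep penetration can be shown on a toy problem (not the paper's method): estimating the probability that a single free flight crosses a thick shield, exp(-Σd), which analog sampling almost never scores. Flights are drawn from a stretched exponential that favors deep penetration, and each score is corrected by the ratio of the true to the biased density. All numbers are illustrative.

```python
import math, random

random.seed(2)

SIGMA, D = 1.0, 20.0       # analog answer: exp(-20) ~ 2e-9, hopeless for analog MC
SIGMA_BIASED = 0.05        # stretched distribution concentrates samples at depth

def importance_sampled_estimate(n):
    score = 0.0
    for _ in range(n):
        s = -math.log(random.random()) / SIGMA_BIASED     # biased flight length
        # weight = true pdf / biased pdf, evaluated at the sampled point
        w = (SIGMA * math.exp(-SIGMA * s)) / (SIGMA_BIASED * math.exp(-SIGMA_BIASED * s))
        if s > D:
            score += w
    return score / n

est = importance_sampled_estimate(200000)
print(est, math.exp(-SIGMA * D))   # unbiased estimate vs analytic exp(-20)
```

Because samples land where the importance is high and the weights restore unbiasedness, the estimator reaches percent-level accuracy on a probability of order 1e-9, which no analog run of this size could resolve.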
MORSE-C, Neutron Transport, Gamma Transport for Criticality Calculation by Monte-Carlo Method
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: MORSE-C is a Monte Carlo code that solves the multiple-energy-group form of the Boltzmann transport equation in order to obtain the eigenvalue (multiplication) when fissionable materials are present. Cross sections for up to 100 energy groups may be employed. The angular scattering is treated by the usual Legendre expansion, as used in the discrete ordinates codes. Up-scattering may be specified. The geometry is defined by relationships to general first- or second-degree surfaces. Array units may be specified. Output includes, besides the usual values of input quantities, plots of the geometry, calculated volumes and masses, and graphs of results to assist the user in determining the correctness of the problem's solution
Monte Carlo Calculated Effective Dose to Teenage Girls from Computed Tomography Examinations
International Nuclear Information System (INIS)
Caon, M.; Bibbo, G.; Pattison, J.
2000-01-01
Effective doses to paediatric patients from CT are not common in the literature. This article reports some effective doses to teenage girls from CT examinations. The voxel computational model ADELAIDE, representative of a 14-year-old girl, was scaled in size by ±5% to also represent 11-12-year-old and 16-year-old girls. The EGS4 Monte Carlo code was used to calculate the effective doses from chest, abdomen and whole-torso CT examinations for the three versions of ADELAIDE using a 120 kV spectrum. For the whole-torso CT examination, in order of increasing model size, the effective doses were 9.0, 8.2 and 7.8 mSv per 100 mA·s. Data are presented that allow the estimation of effective dose from CT examinations of the torso for girls between the ages of 11 and 16. (author)
International Nuclear Information System (INIS)
Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.
1995-01-01
This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
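The core idea of scoring an array of perturbed configurations from one set of histories can be shown on a stripped-down case (not the authors' implementation): uncollided transmission through a slab whose density is varied. Flights are sampled at a reference attenuation coefficient, and each perturbed density is scored by reweighting the same transmitted history with the ratio of survival probabilities. The coefficients and geometry below are invented.

```python
import math, random

random.seed(3)

MU_REF = 0.2                   # attenuation coefficient at reference density (1/cm)
D = 10.0                       # slab thickness (cm)
PERTURBED = [0.9, 1.0, 1.1]    # density ratios relative to the reference

def transmission(n):
    """Score uncollided transmission for all perturbed densities at once,
    using a single set of flights sampled at the reference density."""
    scores = [0.0] * len(PERTURBED)
    for _ in range(n):
        s = -math.log(random.random()) / MU_REF       # reference free flight
        if s > D:                                     # history crosses the slab
            for i, ratio in enumerate(PERTURBED):
                mu = MU_REF * ratio
                # correlated-sampling weight: perturbed / reference survival
                scores[i] += math.exp(-(mu - MU_REF) * D)
    return [sc / n for sc in scores]

result = transmission(100000)
print(result)   # compare with exp(-mu*D) for each perturbed density
```

Because all three results share the same random histories, their ratios are far less noisy than the absolute values, mirroring the paper's observation that relative count rates are determined more accurately than absolute ones.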
International Nuclear Information System (INIS)
Yang, Bo; Qiu, Rui; Li, JunLi; Lu, Wei; Wu, Zhen; Li, Chunyan
2017-01-01
When a strong laser beam irradiates a solid target, a hot plasma is produced and high-energy electrons are usually generated (the so-called “hot electrons”). These energetic electrons subsequently generate hard X-rays in the solid target through the bremsstrahlung process. To date, only limited studies have been conducted on this laser-induced radiological protection issue. In this study, extensive literature reviews on the physics and properties of hot electrons have been conducted. On the basis of this information, the photon dose generated by the interaction between hot electrons and a solid target was simulated with the Monte Carlo code FLUKA. With some reasonable assumptions, the calculated dose can be regarded as the upper boundary of the experimental results over laser intensities ranging from 10^19 to 10^21 W/cm^2. Furthermore, an equation to estimate the photon dose generated from ultraintense laser–solid interactions based on the normalized laser intensity is derived. The shielding effects of common materials including concrete and lead were also studied for the laser-driven X-ray source. The dose transmission curves and tenth-value layers (TVLs) in concrete and lead were calculated through Monte Carlo simulations. These results could be used to perform a preliminary and fast radiation safety assessment for the X-rays generated from ultraintense laser–solid interactions. - Highlights: • The laser-driven X-ray ionizing radiation source was analyzed in this study. • An equation to estimate the photon dose based on the laser intensity is given. • The shielding effects of concrete and lead were studied for this new X-ray source. • The aim of this study is to analyze and mitigate the laser-driven X-ray hazard.
Fix, Michael K; Cygler, Joanna; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J; Manser, Peter
2013-05-07
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively, an edge source of electrons, a transmission source of photons and a line source of electrons and photons representing the particles from the scrapers or inserts and head scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code of the secondary particles is improved. The macro MC dose calculations are validated with corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, a general agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
Monte Carlo calculations of resonance radiative transfer through a semi-infinite atmosphere
International Nuclear Information System (INIS)
Slater, G.; Salpeter, E.E.; Wasserman, I.
1982-01-01
The results of Monte Carlo calculations of radiative transfer through a semi-infinite plane-parallel atmosphere of resonant scatterers are presented. With a photon source at optical depth τ_ES, we model the semi-infinite geometry by embedding a perfectly reflecting mirror at depth τ_MS + τ_ES. Although some quantities characterizing the emergent photons diverge as τ_MS → ∞, the mean number of scatters, N_ES, and path length, L_ES, accumulated between the source and the edge of the atmosphere converge. Accurate results for N_ES, L_ES, X_pk (the most probable frequency shift of the escaping photons), and τ_LAST (the mean optical depth at which they last scatter) are obtained by choosing τ_MS = 4τ_ES. Approximate analytic calculations of N_ES, L_ES, N (the mean total number of scatters undergone by escaping photons), L (their mean total path length), and their mean (absolute) frequency shift are presented for a symmetric slab with ατ_ES >> 1 and τ_MS >> τ_ES. Analogous calculations for an asymmetric slab are discussed. Analytic fitting formulae for N_ES, L_ES, X_pk, and τ_LAST are given
Monte Carlo method for dose calculation due to oral X-rays
International Nuclear Information System (INIS)
Loureiro, Eduardo Cesar de Miranda
1998-06-01
The increasing utilization of oral X-rays, especially in youngsters and children, calls for the assessment of equivalent doses in their organs and tissues. With this purpose, a Monte Carlo code was adapted to simulate an X-ray source irradiating MIRD-5-type phantoms of different ages (10, 15 and 40 years old) to calculate the conversion coefficients which transform the exposure at the skin into equivalent doses in several organs and tissues of interest. In order to check the computer program, simulations were performed for adult patients using both the original code (ADAM.FOR, developed at the GSF, Germany) and the adapted program (MCDRO.PAS); good agreement between the results of the two codes was observed. Irradiations of the incisor, canine and molar teeth were simulated. The conversion factors were calculated for the following organs and tissues: thyroid, active bone marrow (head and whole body), bone (facial skeleton, cranium and whole body), skin (head and whole body) and the lens of the eye. Based on the results obtained, the younger the patient and the larger the field area, the higher the dose in the assessed organs and tissues. Variation of the source-skin distance does not change the conversion coefficients. On the other hand, an increase in the voltage applied to the X-ray tube increases the calculated conversion coefficients. (author)
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Polunovskiy, Eduard; Loughlin, Michael J.; Grove, Robert E.; Sawan, Mohamed E.
2016-01-01
Highlights: • Assess the detailed distribution of the nuclear heating among the components of the ITER toroidal field coils. • Utilize the FW-CADIS method to dramatically accelerate the calculation of detailed nuclear analysis. • Compare the efficiency and reliability of the FW-CADIS method and the MCNP weight window generator. - Abstract: Because the superconductivity of the ITER toroidal field coils (TFC) must be protected against local overheating, detailed spatial distribution of the TFC nuclear heating is needed to assess the acceptability of the designs of the blanket, vacuum vessel (VV), and VV thermal shield. Accurate Monte Carlo calculations of the distributions of the TFC nuclear heating are challenged by the small volumes of the tally segmentations and by the thick layers of shielding provided by the blanket and VV. To speed up the MCNP calculation of the nuclear heating distribution in different segments of the coil casing, ground insulation, and winding packs of the ITER TFC, the ITER Organization (IO) used the MCNP weight window generator (WWG). The maximum relative uncertainty of the tallies in this calculation was 82.7%. In this work, this MCNP calculation was repeated using variance reduction parameters generated by the Oak Ridge National Laboratory AutomateD VAriaNce reducTion Generator (ADVANTG) code and both MCNP calculations were compared in terms of computational efficiency and reliability. Even though the ADVANTG MCNP calculation used less than one-sixth of the computational resources of the IO calculation, the relative uncertainties of all the tallies in the ADVANTG MCNP calculation were less than 6.1%. The nuclear heating results of the two calculations were significantly different by factors between 1.5 and 2.3 in some of the segments of the furthest winding pack turn from the plasma neutron source. Even though the nuclear heating in this turn may not affect the ITER design because it is much smaller than the nuclear heating in the
Energy Technology Data Exchange (ETDEWEB)
Ibrahim, Ahmad M., E-mail: ibrahimam@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Polunovskiy, Eduard; Loughlin, Michael J. [ITER Organization, Route de Vinon Sur Verdon, 13067 St. Paul Lez Durance (France); Grove, Robert E. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Sawan, Mohamed E. [University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)
2016-11-01
Highlights: • Assess the detailed distribution of the nuclear heating among the components of the ITER toroidal field coils. • Utilize the FW-CADIS method to dramatically accelerate the calculation of detailed nuclear analysis. • Compare the efficiency and reliability of the FW-CADIS method and the MCNP weight window generator. - Abstract: Because the superconductivity of the ITER toroidal field coils (TFC) must be protected against local overheating, detailed spatial distribution of the TFC nuclear heating is needed to assess the acceptability of the designs of the blanket, vacuum vessel (VV), and VV thermal shield. Accurate Monte Carlo calculations of the distributions of the TFC nuclear heating are challenged by the small volumes of the tally segmentations and by the thick layers of shielding provided by the blanket and VV. To speed up the MCNP calculation of the nuclear heating distribution in different segments of the coil casing, ground insulation, and winding packs of the ITER TFC, the ITER Organization (IO) used the MCNP weight window generator (WWG). The maximum relative uncertainty of the tallies in this calculation was 82.7%. In this work, this MCNP calculation was repeated using variance reduction parameters generated by the Oak Ridge National Laboratory AutomateD VAriaNce reducTion Generator (ADVANTG) code and both MCNP calculations were compared in terms of computational efficiency and reliability. Even though the ADVANTG MCNP calculation used less than one-sixth of the computational resources of the IO calculation, the relative uncertainties of all the tallies in the ADVANTG MCNP calculation were less than 6.1%. The nuclear heating results of the two calculations were significantly different by factors between 1.5 and 2.3 in some of the segments of the furthest winding pack turn from the plasma neutron source. Even though the nuclear heating in this turn may not affect the ITER design because it is much smaller than the nuclear heating in the
International Nuclear Information System (INIS)
Pan, Yuxi; Qiu, Rui; Ge, Chaoyong; Xie, Wenzhang; Li, Junli; Gao, Linfeng; Zheng, Junzheng
2014-01-01
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations. (paper)
International Nuclear Information System (INIS)
Meireles, Ramiro Conceicao
2016-01-01
The shielding calculation methodology for radiotherapy services adopted in Brazil and in several other countries is that described in Publication 151 of the National Council on Radiation Protection and Measurements (NCRP 151). This methodology, however, employs several approximations that can affect both the construction cost and the radiological safety of the facility. Although the methodology is well established through widespread use, some of the parameters it employs have not undergone a detailed assessment of the impact of the various approximations adopted. In this work the MCNP5 Monte Carlo code was used to evaluate these approximations. TVL values were obtained for photons in conventional concrete (2.35 g/cm³) at energies of 6, 10 and 25 MeV, first considering an isotropic radiation source impinging perpendicularly on the barriers, and subsequently a shielded lead head emitting a beam shaped as a truncated pyramid. Primary barrier safety margins were assessed taking into account the head shielding emitting a pyramid-shaped photon beam at energies of 6, 10, 15 and 18 MeV. A study was conducted of the attenuation provided by the patient's body at energies of 6, 10, 15 and 18 MeV, leading to new attenuation factors. Experimental measurements were performed in a real radiotherapy room in order to map the leakage radiation emitted by the accelerator head shielding; the results were used in the Monte Carlo simulation and to validate the entire study. The study results indicate that the TVL values given by NCRP (2005) show discrepancies in comparison with the values obtained by simulation and that some barriers may be calculated with insufficient thickness. Furthermore, the simulation results show that the additional safety margins considered when calculating the width of the primary
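The NCRP 151 barrier calculation whose parameters this work re-examines reduces, in its simplest form, to computing how many tenth-value layers achieve a required transmission factor. A minimal sketch (the TVL values below are illustrative placeholders, not the report's tabulated data):

```python
import math

def barrier_thickness(B, tvl1, tvl_e):
    """Thickness achieving transmission factor B, from the first
    TVL (tvl1) and the equilibrium TVL (tvl_e), following the
    NCRP 151 scheme: t = TVL1 + (n - 1) * TVLe, n = log10(1/B)."""
    n = math.log10(1.0 / B)  # number of tenth-value layers required
    return tvl1 + (n - 1.0) * tvl_e

# Illustrative values for concrete at 6 MV (placeholders, not NCRP data)
B = 1e-4  # required transmission factor for the barrier
print(round(barrier_thickness(B, tvl1=37.0, tvl_e=33.0), 1))  # cm
```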
Monte Carlo sampling on technical parameters in criticality and burn-up calculations
International Nuclear Information System (INIS)
Kirsch, M.; Hannstein, V.; Kilger, R.
2011-01-01
The increase in computing power over recent years allows for the introduction of Monte Carlo sampling techniques for sensitivity and uncertainty analyses in criticality safety and burn-up calculations. With these techniques it is possible to assess the influence of a variation of the input parameters, within their measured or estimated uncertainties, on the final value of a calculation. The probabilistic result of a statistical analysis can thus complement the traditional method of determining both the nominal (best estimate) and the bounding case of the neutron multiplication factor (k eff ) in criticality safety analyses, e.g. by calculating the uncertainty of k eff or tolerance limits. Furthermore, the sampling method provides a possibility to derive sensitivity information, i.e. it allows determining which of the uncertain input parameters contribute the most to the uncertainty of the system. The application of Monte Carlo sampling methods has become common practice in both industry and research institutes. Within this approach, two main paths are currently under investigation: the variation of nuclear data used in a calculation and the variation of technical parameters such as manufacturing tolerances. This contribution concentrates on the latter case. The newly developed SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is introduced. It defines an interface to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA, which provides the necessary statistical methods for sampling-based analyses. The interfaced codes are programs that are used to simulate aspects of the nuclear fuel cycle, such as the criticality safety analysis sequence CSAS5 of the SCALE code system, developed by Oak Ridge National Laboratory, or the GRS burn-up system OREST. In the following, first the implementation of the SUnCISTT will be presented, then, results of its application in an exemplary evaluation of the neutron
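The sampling approach described above can be illustrated with a toy model: draw each uncertain technical parameter from its tolerance distribution, run the (here, mock) criticality calculation once per sample, and take a one-sided upper tolerance limit. The 59-sample size comes from Wilks' formula for a 95%/95% one-sided limit; the k_eff response below is a stand-in, not a real criticality code:

```python
import random

def mock_keff(fuel_density, enrichment):
    """Stand-in for a criticality code: a smooth response surface."""
    return 0.90 + 0.02 * (fuel_density - 10.4) + 0.015 * (enrichment - 4.0)

random.seed(1)
N = 59  # Wilks: 59 samples give a 95%/95% one-sided upper tolerance limit
samples = []
for _ in range(N):
    rho = random.gauss(10.4, 0.05)    # fuel density within its tolerance
    enr = random.uniform(3.95, 4.05)  # enrichment within its tolerance
    samples.append(mock_keff(rho, enr))

upper_limit = max(samples)  # the maximum of 59 runs is the 95/95 bound
print(round(upper_limit, 5))
```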
International Nuclear Information System (INIS)
Berki, T.
2003-01-01
The signal of ex-core detectors depends not only on the total power of a reactor but also on the power distribution. The spatial weighting function establishes the correspondence between the power distribution and the detector signal. The weighting function is independent of the power distribution. The weighting function is used for detector-response analyses, for example in the case of rod-drop experiments. (1) The paper describes the calculation and analysis of the weighting function of a VVER-440. The three-dimensional Monte Carlo code MCNP is used for the evaluation. Results from forward and adjoint calculations are compared. The effect of the change in the concentration of boric acid is also investigated. The evaluation of the spatial weighting function is a fixed-source neutron transport problem, which can be solved much faster by adjoint calculation, although forward calculations provide more detailed results. It is shown that the effect of boric acid upon the weighting function is negligible. (author)
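The role of the spatial weighting function can be sketched in a few lines: the ex-core detector signal is the nodal power distribution weighted and summed, and the same weights apply to any power shape. The weights and power profiles below are made-up illustrative numbers, not VVER-440 values:

```python
def detector_signal(power, weight):
    """Ex-core detector response: sum of nodal power times the
    (power-distribution-independent) spatial weighting function."""
    assert len(power) == len(weight)
    return sum(p * w for p, w in zip(power, weight))

# Hypothetical 5-node radial power profile and weights (nodes nearest
# the detector weigh most).
weight  = [0.50, 0.25, 0.12, 0.08, 0.05]
uniform = [1.0, 1.0, 1.0, 1.0, 1.0]
tilted  = [1.4, 1.2, 1.0, 0.8, 0.6]   # same total power, tilted shape

# Equal total power, different signals: the detector sees the shape too.
print(detector_signal(uniform, weight), detector_signal(tilted, weight))
```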
International Nuclear Information System (INIS)
Barreras Caballero, A. A.; Hernandez Garcia, J.J.; Alfonso Laguardia, R.
2009-01-01
Beam quality correction factors, k_Q and k_Q,Qo, were determined directly, rather than via the product (s_w,air p)_Q, for three PinPoint-type cylindrical ionization chambers in divergent monoenergetic photon beams over a wide energy range (4-20 MV). The calculation method used dispenses with the approximations of the classical procedure, in which the stopping-power ratios and the chamber perturbation factors are treated as independent. A detailed description of the geometry and materials of the chambers was supplied by the manufacturer and used as input for the PENELOPE-2006 Monte Carlo system, through a user code that includes correlated sampling, forced interactions and particle splitting. A Co-60 photon beam was used as the reference beam for calculating the beam quality correction factors. Since no data exist for the PTW 31014, 31015 and 31016 chambers in TRS-398, the results could not be compared with data calculated or determined experimentally by other authors. (author)
Mairani, A; Kraemer, M; Sommerer, F; Parodi, K; Scholz, M; Cerutti, F; Ferrari, A; Fasso, A
2010-01-01
Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum für Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed C-12 ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-d...
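The RBE-weighted dose the abstract refers to can be sketched from the linear-quadratic ingredients the LEM provides: given ion (α_ion, β_ion) and photon reference (α_x, β_x) parameters for a voxel, the RBE at dose D follows from equating survival levels. The parameter values below are illustrative assumptions, not LEM outputs:

```python
import math

def rbe(dose_ion, a_ion, b_ion, a_x, b_x):
    """RBE at a given ion dose: the photon dose producing the same
    LQ survival, divided by the ion dose.
    Survival S = exp(-a*D - b*D^2); solve a_x*Dx + b_x*Dx^2 = effect."""
    effect = a_ion * dose_ion + b_ion * dose_ion**2
    # positive root of b_x*Dx^2 + a_x*Dx - effect = 0
    d_x = (-a_x + math.sqrt(a_x**2 + 4.0 * b_x * effect)) / (2.0 * b_x)
    return d_x / dose_ion

# Illustrative LQ parameters (Gy^-1, Gy^-2), not values from the LEM
d = 2.0                                  # absorbed dose, Gy
r = rbe(d, a_ion=0.9, b_ion=0.05, a_x=0.18, b_x=0.028)
print(round(r, 3), round(r * d, 3))      # RBE and RBE-weighted dose
```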
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
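A geometry optimization of the kind described, steepest descent driven by computed forces, can be sketched as follows; the harmonic bond stands in for the AFQMC Hellmann-Feynman plus Pulay force evaluation, and the step size and threshold are arbitrary choices:

```python
def forces(coords, k=1.0, r0=1.1):
    """Stand-in force field: a harmonic bond between two atoms on a
    line (replace with AFQMC Hellmann-Feynman + Pulay forces)."""
    r = coords[1] - coords[0]
    f = k * (r - r0)              # pulls the pair toward separation r0
    return [f, -f]

def steepest_descent(coords, step=0.4, tol=1e-6, max_iter=200):
    """Move each coordinate along its force until forces vanish."""
    for _ in range(max_iter):
        f = forces(coords)
        if max(abs(fi) for fi in f) < tol:
            break
        coords = [x + step * fi for x, fi in zip(coords, f)]
    return coords

opt = steepest_descent([0.0, 1.5])
print(round(opt[1] - opt[0], 6))  # converges to the equilibrium length
```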
SU-F-T-575: Verification of a Monte-Carlo Small Field SRS/SBRT Dose Calculation System
International Nuclear Information System (INIS)
Sudhyadhom, A; McGuinness, C; Descovich, M
2016-01-01
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match the beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom against a Ray-Tracing calculation on a single-beam, collimator-by-collimator basis. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung-simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2 mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung-simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.
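The gamma analysis used for these comparisons combines a dose-difference and a distance-to-agreement criterion. A minimal 1D implementation (global dose normalization, brute-force search over the reference curve) under assumed criteria and made-up profiles:

```python
def gamma_1d(x_eval, d_eval, x_ref, d_ref, dose_crit, dist_crit):
    """1D gamma index: for each evaluated point, minimize over the
    reference curve the combined dose-difference/distance metric."""
    d_max = max(d_ref)  # global normalization dose
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        g2 = min(((xr - xe) / dist_crit) ** 2 +
                 ((dr - de) / (dose_crit * d_max)) ** 2
                 for xr, dr in zip(x_ref, d_ref))
        gammas.append(g2 ** 0.5)
    return gammas

# Hypothetical measured vs. calculated profiles on a 1 mm grid
x = [i * 1.0 for i in range(11)]             # positions, mm
ref = [100 - 2 * abs(i - 5) for i in x]      # calculated profile
meas = [v * 1.02 for v in ref]               # measurement reading 2% hot
g = gamma_1d(x, meas, x, ref, dose_crit=0.03, dist_crit=3.0)
passing = sum(gi <= 1.0 for gi in g) / len(g)
print(passing)  # a uniform 2% offset passes a 3%/3 mm criterion
```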
Energy Technology Data Exchange (ETDEWEB)
Adamson, Justus; Newton, Joseph; Yang Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Chemistry, Rider University, Lawrenceville, New Jersey 08648 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)
2012-07-15
10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect on dose to Point A, bladder, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria = 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo, and measurement for left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket with a minimum of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T and O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and a decrease in D(2cc) to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Conclusions: Differences between dummy and afterloading bucket position in the ovoids are minor compared to effects from asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions as are common about brachytherapy sources and applicators.
Adamson, Justus; Newton, Joseph; Yang, Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana
2012-07-01
, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria = 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo, and measurement for left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket with a minimum of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T&O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and a decrease in D(2cc) to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Differences between dummy and afterloading bucket position in the ovoids are minor compared to effects from asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions as are common about brachytherapy sources and applicators.
Monte Carlo calculation of the cross-section of single event upset induced by 14 MeV neutrons
International Nuclear Information System (INIS)
Li, H.; Deng, J.Y.; Chang, D.M.
2005-01-01
High-density static random access memory may experience single event upsets (SEU) in neutron environments. We present a new method to calculate the SEU cross-section. Our method is based on explicit generation and transport of the secondary reaction products and detailed accounting for energy loss by ionization. Instead of simulating the behavior of the circuit, we use the Monte Carlo method to simulate the process of energy deposition in sensitive volumes. Thus, we do not need to know details about the circuit; we only need a reasonable guess for the size of the sensitive volumes. In the Monte Carlo simulation, the cross-section of SEU induced by 14 MeV neutrons is calculated. We can see that the Monte Carlo simulation not only provides a new method to calculate the SEU cross-section, but also gives a detailed description of the random process of the SEU.
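The sensitive-volume approach described can be sketched as a toy Monte Carlo: sample an energy deposit per reaction event, count events whose deposit exceeds the critical energy, and convert the upset fraction into a cross-section. All physics below (reaction probability, deposit distribution, critical energy, cell area) is made up for illustration, not taken from the paper:

```python
import random

random.seed(7)
N_NEUTRONS = 100_000
REACTION_PROB = 0.01   # chance a neutron reacts near a sensitive volume
E_CRIT = 0.5           # critical energy deposit for an upset (MeV), assumed
AREA_PER_BIT = 1e-8    # geometric cross-section per memory cell (cm^2), assumed

upsets = 0
for _ in range(N_NEUTRONS):
    if random.random() > REACTION_PROB:
        continue
    # Energy deposited in the sensitive volume by secondary products:
    # a crude exponential stand-in for the real transport calculation.
    e_dep = random.expovariate(1.0 / 0.3)   # mean deposit 0.3 MeV
    if e_dep > E_CRIT:
        upsets += 1

sigma_seu = upsets / N_NEUTRONS * AREA_PER_BIT  # cm^2 per bit
print(upsets, sigma_seu > 0)
```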
Enger, Shirin A; Munck af Rosenschöld, Per; Rezaei, Arash; Lundqvist, Hans
2006-02-01
GEANT4 is a Monte Carlo code originally implemented for high-energy physics applications and is well known for particle transport at high energies. The capacity of GEANT4 to simulate neutron transport in the thermal energy region is not equally well known. The aim of this article is to compare MCNP, a code commonly used in low-energy neutron transport calculations, and GEANT4 with experimental results, and to select the suitable code for gadolinium neutron capture applications. To account for thermal neutron scattering from chemically bound atoms [S(alpha,beta)] in biological materials, a comparison of thermal neutron fluence in a tissue-like poly(methylmethacrylate) phantom is made with MCNP4B, GEANT4 6.0 patch1, and measurements from the neutron capture therapy (NCT) facility at Studsvik, Sweden. The fluence measurements agreed with the MCNP results calculated with S(alpha,beta). The location of the thermal neutron peak calculated with MCNP without S(alpha,beta), and with GEANT4, is shifted by about 0.5 cm towards a shallower depth and is 25%-30% lower in amplitude. The dose distribution from the gadolinium neutron capture reaction was then simulated by MCNP and compared with measured data. The simulations made by MCNP agree well with experimental results. As long as thermal neutron scattering from chemically bound atoms is not included in GEANT4, it is not suitable for NCT applications.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher-dimensional spaces efficiently. PMID:26543580
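The idea of building a proposal from stored single-model MCMC samples can be sketched without the kD-tree machinery: perturb a randomly chosen stored sample, so intermodel jumps land where the target model's posterior has mass. The kD-tree in the paper makes the density evaluation efficient; here a brute-force Gaussian kernel density stands in, with arbitrary toy numbers:

```python
import math, random

def kde_logpdf(x, samples, h):
    """Log of a Gaussian kernel density estimate built from stored
    single-model MCMC samples (brute force; the paper uses a kD-tree)."""
    terms = [math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples]
    norm = len(samples) * h * math.sqrt(2 * math.pi)
    return math.log(sum(terms) / norm)

def propose(samples, h):
    """Intermodel jump proposal: perturb a random stored sample."""
    return random.choice(samples) + random.gauss(0.0, h)

random.seed(3)
# Stored samples approximating a single-model posterior centered at 2.0
stored = [random.gauss(2.0, 0.5) for _ in range(2000)]
h = 0.2
x_new = propose(stored, h)
# The proposal density is high near the posterior mode, low in the tail,
# which is what makes the intermodel jump likely to be accepted.
print(kde_logpdf(2.0, stored, h) > kde_logpdf(5.0, stored, h))
```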
A new Monte Carlo method for neutron noise calculations in the frequency domain
International Nuclear Information System (INIS)
Rouchon, Amélie; Zoia, Andrea; Sanchez, Richard
2017-01-01
Neutron noise equations, which are obtained by assuming small perturbations of macroscopic cross sections around a steady-state neutron field and by subsequently taking the Fourier transform in the frequency domain, have usually been solved by analytical techniques or by resorting to diffusion theory. A stochastic approach has recently been proposed in the literature, using particles with complex-valued weights and applying a weight cancellation technique. We develop a new Monte Carlo algorithm that solves the transport neutron noise equations in the frequency domain. The stochastic method presented here relies on a modified collision operator and does not need any weight cancellation technique. In this paper, both Monte Carlo methods are compared with deterministic methods (diffusion in a slab geometry and transport in a simplified rod model) for several noise frequencies and for isotropic and anisotropic noise sources. Our stochastic method shows better performance in the frequency region of interest and is easier to implement because it relies upon the conventional algorithm for fixed-source problems.
Energy Technology Data Exchange (ETDEWEB)
Bock, M.; Wagner, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Garching (Germany). Forschungszentrum
2012-11-01
In recent years, the availability of computing resources has increased enormously. There are two ways to take advantage of this increase in analyses of the nuclear fuel cycle, such as burn-up calculations or criticality safety calculations. The first is to improve the accuracy of the models that are analyzed. For burn-up calculations this means that the goal of modelling and calculating the burn-up of a full reactor core is getting more and more within reach. The second way to utilize the resources is to run state-of-the-art programs with simplified models several times, but with varied input parameters. This second way opens up the applicability of Monte Carlo-based uncertainty and sensitivity assessment to fields of research that rely heavily on either high CPU usage or high memory consumption. In the context of the nuclear fuel cycle, applications that belong to these types of demanding analyses are again burn-up and criticality safety calculations. The assessment of uncertainties in burn-up analyses can complement traditional analysis techniques such as best-estimate or bounding-case analyses and can support the safety analysis in future design decisions, e.g. by analyzing the uncertainty of the decay heat power of the nuclear inventory stored in the spent fuel pool of a nuclear power plant. This contribution concentrates on the uncertainty analysis in burn-up calculations of PWR fuel assemblies. The uncertainties in the results arise from the variation of the input parameters. In this case, the focus is on the one hand on the variation of manufacturing tolerances that are present in the different production stages of the fuel assemblies. On the other hand, uncertainties that describe the conditions during reactor operation are taken into account. They also affect the results of burn-up calculations. In order to perform uncertainty analyses in burn-up calculations, GRS has improved the capabilities of its general
Program MCU for Monte-Carlo calculations of neutron-physical characteristics of nuclear reactors
International Nuclear Information System (INIS)
Abagyan, L.P.; Alekseev, N.I.; Bryzgalov, V.I.; Glushkov, A.E.; Gomin, E.A.; Gurevich, M.I.; Kalugin, M.A.; Majorov, L.V.; Marin, S.V.; Yhdkevich, M.S.
1994-01-01
A description of the MCU data modification is presented. The calculation results of the MCU-2 and MCU-3 codes are compared for critical assemblies of different reactor types. The full list of critical assembly calculation results obtained with all MCU code versions is given. 32 refs.; 32 tabs
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
Monte Carlo code Serpent calculation of the parameters of the stationary nuclear fission wave
Directory of Open Access Journals (Sweden)
V. M. Khotyayintsev
2017-12-01
In this work, propagation of the stationary nuclear fission wave was simulated for a series of fixed power values using the Monte Carlo code Serpent. The wave moved in the axial direction in a 5 m long cylindrical core of a fast reactor with pure 238U raw fuel. The stationary wave mode arises some time after the wave ignition and lasts sufficiently long to determine k_eff with high enough accuracy. The velocity characteristic of the reactor was determined as the dependence of the wave velocity on the neutron multiplication factor. As we have recently shown within a one-group diffusion description, the velocity characteristic is two-valued due to the effect of concentration mechanisms, while thermal feedback affects it only quantitatively. The shape and parameters of the velocity characteristic critically affect the feasibility of the reactor design, since stationary wave solutions of the lower branch are unstable and do not correspond to any real waves in a self-regulated reactor like CANDLE. In this work calculations were performed without taking into account thermal feedback. They confirm that the theoretical dependence correctly describes the shape of the velocity characteristic calculated using the results of the Serpent modeling.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
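The unilateral CVA the abstract discusses can be sketched as CVA = (1−R) Σ_k disc(t_k) · EE(t_k) · ΔPD(t_k). The sketch below uses a Vasicek-style short rate as a stand-in for Hull-White, a flat hazard rate, and a linear swap-value proxy in place of the least-squares regression step, so all parameter values and the exposure model are illustrative assumptions:

```python
import math, random

random.seed(11)
# Vasicek-style short-rate paths (stand-in for the Hull-White model)
N_PATHS, N_STEPS, T = 5000, 20, 5.0
dt = T / N_STEPS
a, sigma, r0, theta = 0.1, 0.01, 0.02, 0.03

paths = []
for _ in range(N_PATHS):
    r, path = r0, [r0]
    for _ in range(N_STEPS):
        r += a * (theta - r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        path.append(r)
    paths.append(path)

# Expected positive exposure per date, using a linear payer-swap value
# proxy V = notional * (r - fixed) * remaining_life in place of the
# paper's least-squares regression of continuation values.
notional, fixed = 100.0, 0.025
ee = []
for k in range(1, N_STEPS + 1):
    remaining = T - k * dt
    exposures = [max(notional * (p[k] - fixed) * remaining, 0.0) for p in paths]
    ee.append(sum(exposures) / N_PATHS)

# CVA = (1 - R) * sum of discounted EE times marginal default probability
recovery, hazard = 0.4, 0.02   # flat hazard rate, assumed
cva = 0.0
for k, e in enumerate(ee, start=1):
    t = k * dt
    pd_slice = math.exp(-hazard * (t - dt)) - math.exp(-hazard * t)
    cva += (1 - recovery) * math.exp(-r0 * t) * e * pd_slice
print(cva > 0)
```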
International Nuclear Information System (INIS)
Caon, M.
2010-01-01
Medical imaging provides two-dimensional pictures of the human internal anatomy from which a three-dimensional model of organs and tissues, suitable for calculation of dose from radiation, may be constructed. Diagnostic CT provides the greatest exposure to radiation per examination and the frequency of CT examination is high. Estimates of dose from diagnostic radiography are still determined from data derived from geometric models (rather than anatomical models), models scaled from adult bodies (rather than bodies of children) and CT scanner hardware that is no longer used. The aim of anatomical modelling is to produce a mathematical representation of internal anatomy that has organs of realistic size, shape and positioning. The organs and tissues are represented by a great many cuboidal volumes (voxels). The conversion of medical images to voxels is called segmentation, and on completion every pixel in an image is assigned to a tissue or organ. Segmentation is time consuming. An image processing package is used to identify organ boundaries in each image. Thirty to forty tomographic voxel models of anatomy have been reported in the literature. Each model is of an individual, or a composite from several individuals. Images of children are particularly scarce, so there remains a need for more paediatric anatomical models. I am working on segmenting 'William', a set of 368 PET-CT images from head to toe of a seven-year-old boy. William will be used for Monte Carlo calculations of dose from CT examination using a simulated modern CT scanner.
Recent R and D around the Monte-Carlo code Tripoli-4 for criticality calculation
International Nuclear Information System (INIS)
Hugot, F.X.; Lee, Y.K.; Malvagi, F.
2008-01-01
TRIPOLI-4 [1] is the fourth generation of the TRIPOLI family of Monte Carlo codes developed by CEA since the 1960s. It simulates the 3D transport of neutrons, photons, electrons and positrons, as well as coupled neutron-photon propagation and electron-photon cascade showers. The code addresses radiation protection and shielding problems, as well as criticality and reactor physics problems, through both critical and subcritical neutronics calculations. It uses full pointwise as well as multigroup cross-sections. The code has been validated through several hundred benchmarks as well as measurement campaigns. It is used as a reference tool by CEA as well as its industrial and institutional partners, and in the NURESIM [2] European project. Section 2 reviews its main features, with emphasis on the latest developments. Section 3 presents some recent R and D for criticality calculations: fission matrix, eigenvalue and eigenvector computations are described, and corrections to the standard deviation estimator in the case of correlations between generation steps are detailed. Section 4 presents some preliminary results obtained with the new mesh tally feature. The last section presents the interest of using XML-format output files. (authors)
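The fission matrix computations mentioned reduce, once the matrix has been tallied, to an eigenvalue problem: the dominant eigenvalue of the region-to-region fission matrix is k_eff and its eigenvector is the stationary fission source distribution. A power-iteration sketch on a made-up 3-region matrix (not TRIPOLI-4 output):

```python
def power_iteration(F, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of a fission matrix F, where F[i][j] is the
    expected number of fission neutrons produced in region i per
    fission neutron born in region j."""
    n = len(F)
    s = [1.0 / n] * n          # initial fission source guess
    k = 1.0
    for _ in range(max_iter):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k_new = sum(new)       # valid since s is normalized to sum 1
        s_new = [v / k_new for v in new]
        if abs(k_new - k) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Hypothetical 3-region fission matrix
F = [[0.60, 0.20, 0.05],
     [0.20, 0.55, 0.20],
     [0.05, 0.20, 0.60]]
k_eff, source = power_iteration(F)
print(round(k_eff, 4))
```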
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
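The primary-source construction described, attenuating bremsstrahlung photons by their path length through the flattening filter, can be sketched as follows; the conical filter geometry and attenuation coefficient are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def filter_path_length(angle, cone_height=2.0, edge_tan=0.25):
    """Chord length (cm) through a simple conical flattening filter for
    a ray leaving the target at `angle` (radians) off-axis. Crude
    geometry: thickness tapers linearly to zero at the filter edge."""
    edge = math.atan(edge_tan)
    if angle >= edge:
        return 0.0
    return cone_height * (1.0 - angle / edge) / math.cos(angle)

def primary_weight(angle, mu=0.5):
    """Attenuation of a primary photon (mu in cm^-1, assumed)."""
    return math.exp(-mu * filter_path_length(angle))

def scatter_weight(angle, mu=0.5):
    """Secondary-source weight: the decrement of the primaries, so the
    two sources are coupled through the same filter interaction."""
    return 1.0 - primary_weight(angle, mu)

# Central-axis photons see the thickest filter and are attenuated most,
# which is exactly what flattens the lateral profile.
print(primary_weight(0.0) < primary_weight(0.2))
```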
Energy Technology Data Exchange (ETDEWEB)
Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O. [Ottawa Carleton Institute of Physics, Carleton University, Campus Ottawa, Ottawa, Ontario K1S 5B6 (Canada)
2009-05-15
In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl or p_cav p_dis) for ion chambers by Monte Carlo calculation. By 'direct' we mean that the stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a ⁶⁰Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values upon the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes a cavity size in the Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty in the calculated P_repl values. For phantom materials of similar effective atomic number to air, such as water and graphite, this systematic uncertainty is at most 0.2% for most commonly used chambers in either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need to do a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.
International Nuclear Information System (INIS)
Daures, J.; Gouriou, J.; Bordy, J.M.
2010-01-01
The authors report calculations performed using the MCNP and PENELOPE codes to determine the Hp(3)/K{sub air} conversion coefficient, which allows the Hp(3) dose equivalent to be determined from the measured value of the air kerma. They describe the definition of the phantom, a cylinder 20 cm in diameter and 20 cm high, which is considered representative of a head. Calculations are performed for an energy range corresponding to interventional radiology and cardiology (20 keV-110 keV). The results obtained with the two codes are compared.
International Nuclear Information System (INIS)
Mazurier, J.
1999-01-01
This thesis was performed in the framework of establishing the national reference for absorbed dose to water in high-energy photon beams delivered by the SATURNE-43 medical accelerator of the BNM-LPRI (French acronym for the National Bureau of Metrology and Primary Standard Laboratory for Ionising Radiation). The aim of this work was to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In a first step, the developed user codes made it possible to study the influence of the different components of the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the exit surface of the accelerator head, and then used to calculate energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI graphite and Fricke dosimeters by means of a correlated sampling method, starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental results and with calculations performed with the EGS4 Monte Carlo code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)
Energy Technology Data Exchange (ETDEWEB)
Mairani, A [University of Pavia, Department of Nuclear and Theoretical Physics, and INFN, via Bassi 6, 27100 Pavia (Italy); Brons, S; Parodi, K [Heidelberg Ion Beam Therapy Center and Department of Radiation Oncology, Im Neuenheimer Feld 450, 69120 Heidelberg (Germany); Cerutti, F; Ferrari, A; Sommerer, F [CERN, 1211 Geneva 23 (Switzerland); Fasso, A [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Kraemer, M; Scholz, M, E-mail: Andrea.Mairani@mi.infn.i [GSI Biophysik, Planck-Str. 1, D-64291 Darmstadt (Germany)
2010-08-07
Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fuer Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed {sup 12}C ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data on CHO (Chinese hamster ovary) cell survival and with the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of the absorbed physical dose profiles, which can be explained by differences between the laterally integrated depth-dose distributions in water used as basic input data in TRiP98 and those recalculated by FLUKA. On the other hand, taking into account the differences in the physical beam modeling, the FLUKA-based biological calculations of the CHO cell survival profiles are found to be in good agreement with the experimental data as well as with the TRiP98 predictions. The developed approach, which combines the MC transport/interaction capability with the same biological model as in the treatment planning system (TPS), will be used at HIT to support the validation/improvement of both dose and RBE-weighted dose calculations performed by the analytical TPS.
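Per voxel, coupling a transport code to an LQ-based biological model reduces to mixed-field bookkeeping over particle contributions. The following sketch uses the Zaider-Rossi mixing rule with made-up photon alpha/beta values; it illustrates the idea, not the actual FLUKA/LEM implementation:

```python
import math

def rbe_weighted_dose(contributions, alpha_x=0.18, beta_x=0.028):
    """Toy mixed-field calculation for one voxel: accumulate alpha*d and
    sqrt(beta)*d over all particle contributions (alpha, beta from an
    LEM-style table; values here are hypothetical), then convert the
    total effect into the photon-equivalent (RBE-weighted) dose."""
    sum_alpha_d = sum(a * d for a, _, d in contributions)
    sum_sqrt_beta_d = sum(math.sqrt(b) * d for _, b, d in contributions)
    effect = sum_alpha_d + sum_sqrt_beta_d ** 2        # -ln(survival)
    # Invert the photon LQ curve: alpha_x*D + beta_x*D^2 = effect.
    return (-alpha_x + math.sqrt(alpha_x ** 2 + 4 * beta_x * effect)) / (2 * beta_x)
```

With a single contribution whose alpha and beta equal the photon values, the RBE-weighted dose reduces to the physical dose, which serves as a sanity check of the inversion.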
Energy Technology Data Exchange (ETDEWEB)
Sommer, Fabian
2017-05-15
Within the project ''evaluation and feasibility of a validation of computational codes for criticality and burnup calculations for use in systems with boiling water reactor fuel'', the burnup code HELIOS was used for the calculation of inventories; its fast routines allow Monte Carlo based sensitivity and uncertainty analyses. The neutron multiplication factors from the HELIOS-based calculations were compared with TRITON results.
Monte Carlo calculations on a parallel computer using MORSE-C.G
International Nuclear Information System (INIS)
Wood, J.
1995-01-01
The general purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel transputer-based computing system with MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with those from the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors increases is investigated. (author)
Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
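A minimal sketch of the banding idea: particles are bucketed by energy band and each band is processed in turn, so only that band's slice of the cross-section table is hot in cache. The band count, energy grid, particle records and toy physics below are all assumptions for illustration, not the paper's implementation:

```python
N_BANDS = 8          # number of energy bands (hypothetical tuning knob)
E_MAX = 20.0e6       # top of the energy grid in eV (illustrative)

def band_of(energy):
    """Map a particle's energy to its band index (uniform bands here;
    logarithmic banding would work the same way)."""
    return min(int(energy / E_MAX * N_BANDS), N_BANDS - 1)

def transport_banded(particles, lookup_xs, process_event):
    """Process particles band by band, so only one band's slice of the
    cross-section table needs to be resident in cache at a time."""
    bands = [[] for _ in range(N_BANDS)]
    for p in particles:
        bands[band_of(p["E"])].append(p)
    while any(bands):
        # Sweep high to low: downscattered particles are picked up later
        # in the same sweep; same-band survivors wait for the next sweep.
        for b in range(N_BANDS - 1, -1, -1):
            current, bands[b] = bands[b], []
            for p in current:
                xs = lookup_xs(p["E"])    # touches only band b's table slice
                process_event(p, xs)      # may lower p["E"] or kill p
                if p["alive"]:
                    bands[band_of(p["E"])].append(p)
```

The temporal reuse comes from the inner loop: every lookup in `current` hits the same band of the table, instead of particles jumping across the whole energy grid in arbitrary order.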
Monte Carlo electron-transport calculations for clinical beams using energy grouping
Energy Technology Data Exchange (ETDEWEB)
Teng, S P; Anderson, D W; Lindstrom, D G
1986-01-01
A Monte Carlo program has been utilized to study the penetration of broad electron beams into a water phantom. The MORSE-E code, originally developed for neutron and photon transport, was chosen for adaptation to electrons because of its versatility. The electron energy degradation model employed logarithmic spacing of the electron energy groups and included the effects of elastic scattering, inelastic moderate-energy-loss processes, and inelastic large-energy-loss (catastrophic) processes. Energy straggling was modeled from group to group using the Moeller cross section for energy loss, and Goudsmit-Saunderson theory was used to describe angular deflections. The resulting energy- and electron-deposition distributions in depth were obtained at 10 and 20 MeV and are compared with ETRAN results and broad-beam experimental data from clinical accelerators.
Energy Technology Data Exchange (ETDEWEB)
Shulenburger, Luke [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattsson, Thomas Kjell Rene [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Desjarlais, Michael Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Motivated by the disagreement between recent diffusion Monte Carlo calculations of the phase transition pressure between the ambient and beta-Sn phases of silicon and experiments, we present a study of the HCP to BCC phase transition in beryllium. This lighter element provides an opportunity for directly testing many of the approximations required for calculations on silicon and may suggest a path towards increasing the practical accuracy of diffusion Monte Carlo calculations of solids in general. We demonstrate that the single largest approximation in these calculations is the pseudopotential approximation and after removing this we find excellent agreement with experiment for the ambient HCP phase and results similar to careful calculations using density functional theory for the phase transition pressure.
Holmes, Jesse Curtis
established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
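The covariance-generation loop described above can be sketched as follows. The perturbation model (independent relative Gaussian noise on each DOS point) and the stand-in derived quantity are assumptions for illustration, not the LEAPR-coupled procedure itself:

```python
import numpy as np

def dos_covariance(reference_dos, derived_quantity, n_samples=500,
                   rel_sigma=0.05, seed=42):
    """Propagate phonon-DOS uncertainty by Monte Carlo: perturb the
    reference spectrum, renormalise it, evaluate a derived quantity
    (a stand-in for the LEAPR S(alpha, beta) evaluation), and form
    the sample covariance matrix of the outputs."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_samples):
        g = reference_dos * (1.0 + rel_sigma * rng.standard_normal(reference_dos.size))
        g = np.clip(g, 0.0, None)   # a density of states cannot go negative
        g /= g.sum()                # keep the spectrum normalised
        outputs.append(derived_quantity(g))
    return np.cov(np.asarray(outputs), rowvar=False)
```

In the real procedure the perturbations would reflect the physics-model uncertainty in the DOS and `derived_quantity` would be the full S(alpha, beta) table, so the output covariance can be propagated to integrated cross sections and secondary distributions as described above.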
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are now widespread in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for the calculation of both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluations of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including its energy selection system. The simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of the target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and by comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
International Nuclear Information System (INIS)
Vieira, Jose Wilson
2004-07-01
The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiation, internal or external, with the objective of calculating the equivalent dose in organs and tissues for the occupational, medical or environmental purposes of radiation protection. This study presents the methodology used to build the new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of algorithms for one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin; and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some radiation protection results, in the form of conversion coefficients between equivalent dose (or effective dose) and air kerma free-in-air for external photon irradiation, are presented and discussed. Comparing these results with similar data from other human phantoms, it is possible to conclude that the MAX/EGS4 coupling is satisfactory for the calculation of equivalent dose in radiation protection. (author)
International Nuclear Information System (INIS)
Zankl, M.; Panzer, W.; Drexler, G.
1991-11-01
Computed tomography (CT) is a technique which offers a high diagnostic capability; however, the dose to the patient is high compared to conventional radiography. This report provides a catalogue of organ doses resulting from CT examinations. The organ doses were calculated for the type of CT scanners most commonly used in the FRG and for three different radiation qualities. For the dose calculations, the patients were represented by the adult mathematical phantoms Adam and Eva. The radiation transport in the body was simulated using a Monte Carlo method. The doses were calculated as conversion factors of mean organ doses per air kerma free in air on the axis of rotation. Mean organ dose conversion factors are given per organ and per single CT slice of 1 cm width. The mean dose to an organ resulting from a particular CT examination can be estimated by summing up the contribution to the organ dose from each relevant slice. In order to facilitate the selection of the appropriate slices, a table is given which relates the mathematical phantoms' coordinates to certain anatomical landmarks in the human body. (orig.)
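The per-slice summation rule described above is simple enough to sketch directly; the conversion-factor table and kerma value below are made-up illustrative numbers, not the report's data:

```python
def organ_dose(conversion_factors, slice_indices, air_kerma):
    """Estimate the mean organ dose for a CT examination: sum the
    per-slice organ-dose conversion factors (organ dose per unit air
    kerma free in air on the axis of rotation) over the irradiated
    slices, then scale by the measured air kerma."""
    return air_kerma * sum(conversion_factors[s] for s in slice_indices)
```

In practice the slice indices would be chosen via the anatomical-landmark table that relates the mathematical phantoms' coordinates to the scanned body region.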
Towards real-time photon Monte Carlo dose calculation in the cloud
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in the clinic and in research is hindered by the long computational runtimes of established software. Fast MC software solutions are currently available that utilise accelerators such as graphics processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and a liver case to 1% statistical uncertainty. The computation runtimes include the transport of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative to the currently used GPU or cluster solutions for near real-time, accurate dose calculations.
Chan, EuJin; Lydon, Jenny; Kron, Tomas
2015-03-07
This study aims to investigate the effects of oblique incidence, small field size and inhomogeneous media on the electron dose distribution, and to compare calculated (Elekta/CMS XiO) and measured results. All comparisons were done in terms of absolute dose. A new measuring method was developed for high resolution, absolute dose measurement of non-standard beams using Gafchromic® EBT3 film. A portable U-shaped holder was designed and constructed to hold EBT3 films vertically in a reproducible setup submerged in a water phantom. The experimental film method was verified with ionisation chamber measurements and agreed to within 2% or 1 mm. Agreement between XiO electron Monte Carlo (eMC) and EBT3 was within 2% or 2 mm for most standard fields and 3% or 3 mm for the non-standard fields. Larger differences were seen in the build-up region where XiO eMC overestimates dose by up to 10% for obliquely incident fields and underestimates the dose for small circular fields by up to 5% when compared to measurement. Calculations with inhomogeneous media mimicking ribs, lung and skull tissue placed at the side of the film in water agreed with measurement to within 3% or 3 mm. Gafchromic film in water proved to be a convenient high spatial resolution method to verify dose distributions from electrons in non-standard conditions including irradiation in inhomogeneous media.
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. This paper discusses alternative computing solutions: not only the enhancement of computer power, or the 'biasing' used for the relative acceleration of these codes (in the case of photons), but also more efficient methods (artificial neural networks, case-based reasoning, and other computer science techniques) that have long been used successfully in other scientific and industrial applications beyond radiation protection and medical dosimetry. (authors)
International Nuclear Information System (INIS)
Zazula, J.M.
1984-01-01
This work concerns the calculation of a neutron response caused by a neutron field perturbed by materials surrounding the source or the detector. The solution is obtained by coupling a Monte Carlo radiation transport computation for the perturbed region with a discrete ordinates transport computation for the unperturbed system. (author). 62 refs
International Nuclear Information System (INIS)
Chen, Y.; Fischer, U.
2005-01-01
Shielding calculations for advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated by their complex geometries and their large dimensions, including bulk shields several meters thick. While the complexity of the geometry can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-Discrete Ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT, and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions; satisfactory agreement was obtained between the two approaches. The program system has been applied to the complicated shielding problem of the accelerator-based IFMIF neutron source. This successful application demonstrates that the coupling scheme, as implemented in the program system, is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)
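The mapping step at the heart of such an interface can be sketched as binning MC particle crossings of the coupling surface into the group/angle structure of the deterministic code. This is a toy stand-in for the actual MCNP-TORT interface program, with hypothetical bin edges:

```python
import numpy as np

def map_mc_to_sn(crossings, group_edges, mu_edges):
    """Convert MC surface crossings (energy, direction cosine, weight)
    into a discrete-ordinates boundary source: a (groups x angular-bins)
    table of accumulated weights."""
    src = np.zeros((len(group_edges) - 1, len(mu_edges) - 1))
    for energy, mu, weight in crossings:
        g = np.searchsorted(group_edges, energy) - 1
        m = np.searchsorted(mu_edges, mu) - 1
        if 0 <= g < src.shape[0] and 0 <= m < src.shape[1]:
            src[g, m] += weight
    return src
```

The resulting table would then be normalised and fed to the discrete ordinates sweep as a fixed boundary source on the coupling surface.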
International Nuclear Information System (INIS)
Mesbahi, A.; Nejad, F.S.
2007-01-01
The purpose of this study was to investigate the dosimetric effect of various hip prostheses on pelvic lateral fields treated by a 9-MV photon beam, using Monte Carlo (MC) and effective path-length (EPL) methods. The head of the Neptun 10 pc linac was simulated using the MCNP4C MC code. The accuracy of the MC model was evaluated against measured dosimetric data, including depth dose values and dose profiles in a water phantom. The Alfard treatment planning system (TPS) was used for the EPL calculations. A virtual water phantom of 30 x 30 x 30 cm{sup 3}, containing a 4 x 4 x 4 cm{sup 3} cube of various metals centered at 12 cm depth, was used for the MC and EPL calculations. Various materials including titanium, Co-Cr-Mo, and steel alloys were used as hip prostheses. Our results showed significant attenuation of the absorbed dose at points beyond and inside the prostheses. Attenuations of 32%, 54% and 55% were seen for titanium, Co-Cr-Mo, and steel alloys, respectively, at a distance of 5 cm from the prosthesis. A considerable dose increase (up to 18%) was found at the water-prosthesis interface due to back-scattered electrons in the MC calculations. The results of the EPL calculations for the titanium implant were comparable to the MC calculations. This method, however, was not able to predict the interface effect or to calculate accurately the absorbed dose in the presence of the Co-Cr-Mo and steel prostheses. The dose perturbation effect of hip prostheses is significant and cannot be predicted accurately by the EPL method for Co-Cr-Mo or steel prostheses. The use of MC-based TPSs is recommended for treatments requiring fields passing through hip prostheses. (author)
International Nuclear Information System (INIS)
Martin, William R.; Brown, Forrest B.
2001-01-01
We present an alternative Monte Carlo method for solving the coupled equations of radiation transport and material energy. This method is based on incorporating the analytical solution to the material energy equation directly into the Monte Carlo simulation for the radiation intensity. This method, which we call the Analytical Monte Carlo (AMC) method, differs from the well known Implicit Monte Carlo (IMC) method of Fleck and Cummings because there is no discretization of the material energy equation since it is solved as a by-product of the Monte Carlo simulation of the transport equation. Our method also differs from the method recently proposed by Ahrens and Larsen since they use Monte Carlo to solve both equations, while we are solving only the radiation transport equation with Monte Carlo, albeit with effective sources and cross sections to represent the emission sources. Our method bears some similarity to a method developed and implemented by Carter and Forest nearly three decades ago, but there are substantive differences. We have implemented our method in a simple zero-dimensional Monte Carlo code to test the feasibility of the method, and the preliminary results are very promising, justifying further extension to more realistic geometries. (authors)
Energy Technology Data Exchange (ETDEWEB)
Guenay, Mehtap [Malatya Univ. (Turkey). Physics Department
2015-03-15
In this study, salt-heavy metal mixtures consisting of 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% UO{sub 2}, 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% NpO{sub 2}, and 93-85% Li{sub 20}Sn{sub 80} + 5% SFG-PuO{sub 2} and 2-10% UCO were used as fluids. The fluids were used in the liquid first wall, blanket, and shield zones of a fusion-fission hybrid reactor system. A beryllium (Be) zone with a width of 3 cm was used for neutron multiplicity between the liquid first wall and the blanket. 9Cr2WVTa ferritic steel with the width of 4 cm was used as the structural material. The contributions of each isotope in the fluids to the nuclear parameters, such as tritium breeding ratio (TBR), energy multiplication factor (M), and heat deposition rate, of the fusion-fission hybrid reactor were calculated in the liquid first wall, blanket, and shield zones. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and nuclear data library ENDF/B-VII.0.
Monte Carlo calculations with dynamical fermions by a local stochastic process
International Nuclear Information System (INIS)
Rossi, P.; Zwanziger, D.
1984-01-01
We develop and test numerically a Monte Carlo method for fermions on a lattice which accounts for the effect of the fermionic determinant to arbitrary accuracy. It is tested numerically in a 4-dimensional model with SU(2) color group and scalar fermionic quarks interacting with gluons. Computer time grows linearly with the volume of the lattice and the updating of gluons is not restricted to small jumps. The method is based on random location updating, instead of an ordered sweep, in which quarks are updated, on the average, R times more frequently than gluons. It is proven that the error in R is only of order 1/R instead of 1/R{sup 1/2} as one might naively expect. Quarks are represented by pseudofermionic variables in M pseudoflavors (which requires M times more memory for each physical fermionic degree of freedom) with an error in M of order 1/M. The method is tested by calculating the self-energy of an external quark, a quantity which would be infinite in the absence of dynamical or sea quarks. For the quantities measured, the dependence on R{sup -1} is linear for R >= 8, and, within our statistical uncertainty, M = 2 is already asymptotic. (orig.)
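The weighted random-location updating can be sketched as follows; the update callbacks are placeholders and the site counts are illustrative, not the paper's lattice:

```python
import random

def random_location_update(n_quark, n_gluon, R, n_steps,
                           update_quark, update_gluon, rng=None):
    """Random-location updating: at each step pick a single variable at
    random, weighting quark sites so that they are updated on average
    R times more often than gluon sites (instead of an ordered sweep)."""
    rng = rng or random.Random()
    p_quark = R * n_quark / (R * n_quark + n_gluon)
    counts = {"quark": 0, "gluon": 0}
    for _ in range(n_steps):
        if rng.random() < p_quark:
            update_quark(rng.randrange(n_quark))   # one quark-site update
            counts["quark"] += 1
        else:
            update_gluon(rng.randrange(n_gluon))   # one gluon-site update
            counts["gluon"] += 1
    return counts
```

Because the choice is random rather than swept in order, the scheme stays unbiased for any R, with the residual error falling off as 1/R as stated in the abstract.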
A power spectrum approach to tally convergence in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Ueki, Taro
2017-01-01
In Monte Carlo criticality calculation, confidence interval estimation is based on the central limit theorem (CLT) for a series of tallies from generations in equilibrium. A fundamental assertion resulting from the CLT is the convergence in distribution (CID) of the interpolated standardized time series (ISTS) of tallies. In this work, a spectral analysis of the ISTS has been conducted in order to assess the convergence of tallies in terms of CID. The numerical results indicate that the power spectrum of the ISTS equals the theoretically predicted power spectrum of Brownian motion for tallies of the effective neutron multiplication factor; on the other hand, the power spectrum of the ISTS of a strongly correlated series of tallies from local powers fluctuates wildly while maintaining the spectral form of fractional Brownian motion. The latter result is evidence of a case where a series of tallies is away from CID while the spectral form supports the normality assumption on the sample mean. It is also demonstrated that one can make an unbiased estimation of the standard deviation of the sample mean well before CID occurs. (author)
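The diagnostic can be sketched by forming standardized partial sums of a tally series and inspecting their periodogram; this is a sketch of the idea (for uncorrelated tallies the spectrum should follow the ~1/f^2 shape of Brownian motion), not Ueki's exact ISTS estimator:

```python
import numpy as np

def ists_power_spectrum(tallies):
    """Build the standardized partial-sum process of a tally sequence
    (a discrete skeleton of the ISTS) and return its periodogram."""
    x = np.asarray(tallies, dtype=float)
    n = x.size
    centered = x - x.mean()
    # Standardized cumulative sums of the centered tallies.
    series = np.cumsum(centered) / (x.std(ddof=1) * np.sqrt(n))
    spec = np.abs(np.fft.rfft(series)) ** 2 / n
    freqs = np.fft.rfftfreq(n)
    return freqs[1:], spec[1:]          # drop the zero-frequency bin
```

For k-effective-like tallies in equilibrium the low-frequency power should dominate with the Brownian 1/f^2 roll-off, whereas a strongly correlated local-power series would show the wilder, fractional-Brownian behavior described above.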
Energy Technology Data Exchange (ETDEWEB)
Anusionwu, Princess [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Alpuche Aviles, Jorge E. [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Pistorius, Stephen [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Department of Radiology, University of Manitoba, Winnipeg (Canada)
2016-08-15
Objective: Commissioning of a Monte Carlo based electron dose calculation algorithm requires percentage depth doses (PDDs) and beam profiles, which can be measured with multiple detectors. Electron dosimetry is commonly performed with cylindrical chambers, but parallel plate chambers and diodes can also be used. The purpose of this study was to determine the most appropriate detector for the commissioning measurements. Methods: PDDs and beam profiles were measured for beams with energies ranging from 6 MeV to 15 MeV and field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. The detectors used included diodes and cylindrical and parallel plate ionization chambers. Beam profiles were measured in water (100 cm source-to-surface distance) and in air (95 cm source-to-detector distance). Results: PDDs for the cylindrical chambers were shallower (by 1.3 mm averaged over all energies and field sizes) than those measured with the parallel plate chambers and diodes. Surface doses measured with the diode and the cylindrical chamber were on average larger by 1.6% and 3%, respectively, than those of the parallel plate chamber. Profiles measured with a diode resulted in penumbra values smaller than those measured with the cylindrical chamber by 2 mm. Conclusion: The diode was selected as the most appropriate detector, since its PDDs agreed with those measured with parallel plate chambers (typically recommended for low energies) and it yields sharper profiles. Unlike ion chambers, the diode needs no corrections to measure PDDs, making it more convenient to use.
Kadioglu, Yelda; Santana, Juan A.; Özaydin, H. Duygu; Ersan, Fatih; Aktürk, O. Üzengi; Aktürk, Ethem; Reboredo, Fernando A.
2018-06-01
We have studied the structural stability of monolayer and bilayer arsenene (As) in the buckled (b) and washboard (w) phases with diffusion quantum Monte Carlo (DMC) and density functional theory (DFT) calculations. DMC yields cohesive energies of 2.826(2) eV/atom for monolayer b-As and 2.792(3) eV/atom for w-As. In the case of bilayer As, DMC and DFT predict that AA-stacking is the more stable form of b-As, while AB is the most stable form of w-As. The DMC layer-layer binding energies for b-As-AA and w-As-AB are 30(1) and 53(1) meV/atom, respectively. The interlayer separations were estimated with DMC at 3.521(1) Å for b-As-AA and 3.145(1) Å for w-As-AB. A comparison of DMC and DFT results shows that the van der Waals density functional method yields energetic properties of arsenene close to DMC, while the DFT + D3 method closely reproduced the geometric properties from DMC. The electronic properties of monolayer and bilayer arsenene were explored with various DFT methods. The bandgap values vary significantly with the DFT method, but the results are generally qualitatively consistent. We expect the present work to be useful for future experiments attempting to prepare multilayer arsenene and for further development of DFT methods for weakly bonded systems.
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster
International Nuclear Information System (INIS)
Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.
2005-01-01
Recent work has used a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique shares resources more fairly than traditional methods: it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. The current performance of the machine has been estimated at between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem, up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)
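The "simple approach" works because Monte Carlo histories are independent: each node runs its own batch with its own random seed and the tallies are merged at the end, with no inter-process communication. A minimal sketch of that farming pattern is below, using a toy slab-penetration model in place of MCBEND and a thread pool in place of cluster nodes (both are assumptions for illustration; a real cluster would distribute the batches across machines).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(seed, n_particles):
    """Track one independent batch of 'particles' (toy slab-penetration
    model; a real shielding code such as MCBEND runs full histories)."""
    rng = random.Random(seed)
    # toy model: a particle crosses 5 mean free paths with probability e^-5
    hits = sum(1 for _ in range(n_particles)
               if rng.random() < 2.718281828459045 ** -5)
    return hits

# Farm independent batches out to workers and merge the tallies at the
# end -- the simple approach needs no inter-process communication.
batches = [(seed, 5_000) for seed in range(16)]
with ThreadPoolExecutor(max_workers=4) as ex:
    counts = list(ex.map(lambda b: run_batch(*b), batches))

n_total = sum(n for _, n in batches)
estimate = sum(counts) / n_total
print(f"transmission estimate: {estimate:.4f} (exact e^-5 ~ 0.0067)")
```

Because batches never synchronize, a busy node simply returns its tally later, which is what makes the asynchronous scheme fair in a shared-resource setting.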
International Nuclear Information System (INIS)
Titt, U.; Newhauser, W. D.
2005-01-01
Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded and deterministic models used at those locations instead. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account, and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio between the predictions of the two approaches was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
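The hybrid strategy described above — keep a Monte Carlo tally only where it has converged, otherwise fall back on the deterministic estimate — can be sketched in a few lines. The values and the 10% cutoff below mirror the study's rejection criterion, but the dose numbers themselves are invented for illustration.

```python
def hybrid_dose_map(mc_values, mc_rel_err, deterministic_values, cutoff=0.10):
    """Accept an MC tally only where its relative uncertainty is below
    `cutoff`; elsewhere fall back on the deterministic (analytical) model."""
    result = []
    for mc, err, det in zip(mc_values, mc_rel_err, deterministic_values):
        result.append(mc if err <= cutoff else det)
    return result

doses = hybrid_dose_map(
    mc_values=[1.2, 0.9, 4.0e-3],
    mc_rel_err=[0.03, 0.08, 0.45],   # last tally never converged
    deterministic_values=[1.3, 1.1, 6.0e-3],
)
print(doses)  # third location comes from the deterministic model
```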
Reduced Variance using ADVANTG in Monte Carlo Calculations of Dose Coefficients to Stylized Phantoms
Hiller, Mauritius; Bellamy, Michael; Eckerman, Keith; Hertel, Nolan
2017-09-01
The estimation of dose coefficients for external radiation sources to the organs of phantoms becomes increasingly difficult at lower photon source energies. This study focuses on photon emitters located around the phantom. The computer time needed to calculate a result to a given precision can be lowered by several orders of magnitude using ADVANTG compared with a standard run. ADVANTG, which employs the DENOVO adjoint calculation package, enables the user to create a fully populated set of weight windows and source-biasing instructions for an MCNP calculation.
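Once a weight-window map like the one ADVANTG produces exists, the transport code applies it at each particle step: heavy particles are split, light ones are rouletted, keeping the expected weight unchanged. The following is a minimal, generic sketch of that mechanism (not MCNP's actual implementation; the window bounds and weights are invented).

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Apply a weight-window check to one particle: split heavy particles,
    roulette light ones. Returns the list of surviving particle weights;
    the expected total weight is preserved in every branch."""
    if weight > w_high:                      # split into similar-weight copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                       # Russian roulette
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                            # particle killed
    return [weight]                          # inside the window: unchanged

rng = random.Random(7)
print(apply_weight_window(5.0, 0.5, 2.0, rng))   # split into 3 copies
print(apply_weight_window(1.0, 0.5, 2.0, rng))   # inside window: unchanged
```

The point of the adjoint (DENOVO) solution is to choose `w_low`/`w_high` per region and energy so that particles headed toward the tally are multiplied and unimportant ones are rouletted away.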
3D calculation of absorbed dose for ¹³¹I-targeted radiotherapy: A Monte Carlo study
International Nuclear Information System (INIS)
Saeedzadeh, E.; Sarkar, S.; Abbaspour Tehrani-Fard, A.; Ay, M. R.; Khosravi, H. R.; Loudos, G.
2008-01-01
Various methods, such as those developed by the Medical Internal Radiation Dosimetry (MIRD) Committee of the Society of Nuclear Medicine or those employing dose point kernels, have been applied to the radiation dosimetry of ¹³¹I radionuclide therapy. However, studies have not shown a strong relationship between tumour absorbed dose and overall therapeutic response, probably due in part to inaccuracies in activity and dose estimation. In the current study, the GATE Monte Carlo code was used to perform voxel-level radiation dosimetry for organ activities measured in a ¹³¹I-treated thyroid cancer patient. This approach allows incorporation of the size, shape and composition of organs (here, in the Zubal anthropomorphic phantom) and of intra-organ and intra-tumour inhomogeneities in the activity distributions. The total activities of the tumours and their heterogeneous distributions were measured from the SPECT images to calculate the dose maps. To investigate the effect of activity distribution on dose distribution, a hypothetical homogeneous distribution of the same total activity was also considered in the tumours. The tumour mean absorbed dose rates per unit cumulated activity were 0.65 × 10⁻⁵ and 0.61 × 10⁻⁵ mGy MBq⁻¹ s⁻¹ for the uniform and non-uniform distributions, respectively, which do not differ considerably. However, the dose-volume histograms (DVHs) show that the non-uniform activity distribution decreases the absorbed dose to portions of the tumour volume. In such a case, it can be misleading to quote the mean or maximum absorbed dose, because overall response is likely limited by the tumour volume that receives low (i.e. non-cytocidal) doses. Three-dimensional radiation dosimetry, and calculation of tumour DVHs, may lead to the derivation of clinically reliable dose-response relationships and therefore may ultimately improve treatment planning as well as response assessment for radionuclide therapy.
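The key observation above — that two dose maps with the same mean can have very different DVHs — is easy to demonstrate. The sketch below builds a cumulative DVH from an array of voxel doses and compares a uniform map against a heterogeneous one with the same 20 Gy mean; the dose maps are synthetic stand-ins, not patient data.

```python
import numpy as np

def cumulative_dvh(dose, n_bins=100):
    """Cumulative dose-volume histogram: fraction of voxels receiving
    at least each dose level. `dose` is an array of voxel doses (Gy)."""
    d = np.ravel(dose)
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

rng = np.random.default_rng(0)
uniform = np.full(1000, 20.0)                               # 20 Gy everywhere
heterogeneous = rng.gamma(shape=4.0, scale=5.0, size=1000)  # mean also 20 Gy

levels_u, vf_u = cumulative_dvh(uniform)
d90_uniform = levels_u[vf_u >= 0.9][-1]   # highest dose covering 90% of volume
levels_h, vf_h = cumulative_dvh(heterogeneous)
d90_het = levels_h[vf_h >= 0.9][-1]
print(f"D90 uniform: {d90_uniform:.1f} Gy, heterogeneous: {d90_het:.1f} Gy")
```

Despite identical mean doses, the heterogeneous map's D90 drops well below 20 Gy, which is exactly why the abstract argues mean dose alone can be misleading.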
Directory of Open Access Journals (Sweden)
Lin Wang
2018-01-01
Monte Carlo simulation of light propagation in turbid media has been studied for years, and a number of software packages have been developed to handle this problem. However, it is hard to compare these simulation packages, especially for tissues with complex heterogeneous structures. Here, we first designed a group of mesh datasets generated with the Iso2Mesh software, and used them to cross-validate the accuracy and evaluate the performance of four Monte Carlo-based simulation packages: the Monte Carlo model of steady-state light transport in multi-layered tissues (MCML), the tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIMOS), the Molecular Optical Simulation Environment (MOSE), and Mesh-based Monte Carlo (MMC). The performance of each package was evaluated on the designed mesh datasets, and the merits and demerits of each package are discussed. Comparative results showed that the TIMOS package provided the best performance, proving to be a reliable, efficient, and stable MC simulation package for users.
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with the applicator and contrast medium included. A precalculated phase-space file for the ¹⁹²Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm³ versus 2-mm³ phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimates of CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations which does not require a prebuilt applicator model has been developed for HDR brachytherapy treatments that use CT-compatible applicators. Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy.
Mendenhall, Marcus H.; Weller, Robert A.
2011-01-01
In Monte Carlo particle transport codes, it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analogous Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross section change. This makes it possible t...
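The weight correction behind this non-analogous technique is a likelihood ratio: when the free path is sampled from a biased cross section σ' = bσ, each history's weight is multiplied by p(x)/p'(x) = (σ/σ') exp(−(σ − σ')x), which exactly compensates the altered beam depletion. The sketch below applies this to a generic rare-interaction estimate in Python rather than to a Geant4 `G4VDiscreteProcess` (the toy geometry and numbers are assumptions for illustration).

```python
import math
import random

def rare_event_estimate(sigma, bias, n, depth, rng):
    """Estimate the probability that a particle's first interaction occurs
    within `depth`, sampling path lengths from a biased cross section
    sigma' = bias * sigma and correcting each history's weight by the
    likelihood ratio p(x)/p'(x) (non-analogous Monte Carlo)."""
    sigma_b = bias * sigma
    score = 0.0
    for _ in range(n):
        x = -math.log(rng.random()) / sigma_b            # biased free path
        w = (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
        if x < depth:
            score += w                                   # weighted tally
    return score / n

rng = random.Random(12345)
sigma, depth = 1.0e-3, 1.0      # rare event: true P = 1 - e^-0.001 ~ 1e-3
est = rare_event_estimate(sigma, bias=100.0, n=20_000, depth=depth, rng=rng)
print(f"biased estimate {est:.2e}, exact {1 - math.exp(-sigma * depth):.2e}")
```

An analog run of the same size would score only ~20 interactions; the biased run scores ~2000 weighted ones, cutting the variance while leaving the expectation unbiased.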
International Nuclear Information System (INIS)
Aufiero, Manuele; Brovchenko, Mariya; Cammi, Antonio; Clifford, Ivor; Geoffroy, Olivier; Heuer, Daniel; Laureau, Axel; Losa, Mario; Luzzi, Lelio; Merle-Lucotte, Elsa; Ricotti, Marco E.; Rouch, Hervé
2014-01-01
Highlights: • Calculation of the effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for β_eff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (β_eff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor (MSFR) is adopted as the test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursor drift, according to the fuel velocity field. The forward and adjoint multi-group diffusion eigenvalue problems are implemented and solved with the multi-physics toolkit OpenFOAM, taking into account the convective and turbulent diffusive terms in the precursor balance. These two approaches show good agreement over the whole range of MSFR operating conditions. An analytical formula for the circulating-to-static-conditions β_eff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against the Monte Carlo and deterministic results. The effects of the in-core recirculation vortex and of turbulent diffusion are finally analysed and discussed
International Nuclear Information System (INIS)
Kelsey IV, Charles T.; Prinja, Anil K.
2011-01-01
We evaluate the Monte Carlo calculation efficiency for multigroup transport relative to continuous energy transport using the MCNPX code system to evaluate secondary neutron doses from a proton beam. We consider both fully forward simulation and application of a midway forward adjoint coupling method to the problem. Previously we developed tools for building coupled multigroup proton/neutron cross section libraries and showed consistent results for continuous energy and multigroup proton/neutron transport calculations. We observed that forward multigroup transport could be more efficient than continuous energy. Here we quantify solution efficiency differences for a secondary radiation dose problem characteristic of proton beam therapy problems. We begin by comparing figures of merit for forward multigroup and continuous energy MCNPX transport and find that multigroup is 30 times more efficient. Next we evaluate efficiency gains for coupling out-of-beam adjoint solutions with forward in-beam solutions. We use a variation of a midway forward-adjoint coupling method developed by others for neutral particle transport. Our implementation makes use of the surface source feature in MCNPX and we use spherical harmonic expansions for coupling in angle rather than solid angle binning. The adjoint out-of-beam transport for organs of concern in a phantom or patient can be coupled with numerous forward, continuous energy or multigroup, in-beam perturbations of a therapy beam line configuration. Out-of-beam dose solutions are provided without repeating out-of-beam transport. (author)
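Efficiency comparisons like the one above are made with the standard Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of a tally and T the computing time; R²T is roughly constant for a given problem and method, so the FOM ratio measures relative efficiency. A small sketch (the timing numbers are illustrative, not taken from the paper):

```python
def figure_of_merit(rel_err, minutes):
    """FOM = 1 / (R^2 * T): higher means a more efficient tally, since
    R^2 * T is approximately constant for a given problem and method."""
    return 1.0 / (rel_err ** 2 * minutes)

# Illustrative numbers: multigroup reaches the same 1% relative error
# in 1/30 of the time of continuous-energy transport.
fom_ce = figure_of_merit(rel_err=0.01, minutes=300.0)
fom_mg = figure_of_merit(rel_err=0.01, minutes=10.0)
print(f"efficiency ratio (multigroup / continuous-energy): {fom_mg / fom_ce:.0f}")
```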
Croteau, T; Bertram, A K; Patey, G N
2008-10-30
Grand canonical Monte Carlo calculations are used to determine water adsorption and structure on defect-free kaolinite surfaces as a function of relative humidity at 235 K. This information is then used to gain insight into ice nucleation on kaolinite surfaces. Results for the SPC/E and TIP5P-E water models are compared and demonstrate that the Al-surface [(001) plane] and both the protonated and unprotonated edges [(100) plane] adsorb strongly at atmospherically relevant relative humidities. Adsorption on the Al-surface exhibits properties of a first-order process with evidence of collective behavior, whereas adsorption on the edges is essentially continuous and appears dominated by strong water-lattice interactions. For the protonated and unprotonated edges no structure that matches hexagonal ice is observed. For the Al-surface some of the water molecules form hexagonal rings. However, the a₀ lattice parameter for these rings is significantly different from the corresponding constant for hexagonal ice (Ih). A misfit strain of 14.0% is calculated between the hexagonal pattern of water adsorbed on the Al-surface and the basal plane of ice Ih. Hence, the ring structures that form on the Al-surface are not expected to be good building blocks for ice nucleation, owing to the large misfit strain.
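The misfit strain quoted above is simply the relative mismatch between the two lattice constants, 100·|a_overlayer − a_substrate|/a_substrate. The sketch below assumes a basal-plane constant of 4.52 Å for ice Ih and an overlayer spacing of 5.153 Å chosen to reproduce the paper's ~14% figure; the overlayer value is a back-calculated assumption, not a number reported in the abstract.

```python
def misfit_strain(a_overlayer, a_substrate):
    """Lattice misfit between an adsorbed overlayer and a reference
    lattice, as a percentage: 100 * |a_o - a_s| / a_s."""
    return 100.0 * abs(a_overlayer - a_substrate) / a_substrate

# a_substrate = 4.52 A (basal plane of ice Ih); a_overlayer = 5.153 A is an
# assumed ring spacing chosen only to reproduce the quoted ~14% misfit.
print(f"misfit: {misfit_strain(5.153, 4.52):.1f}%")
```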
International Nuclear Information System (INIS)
Gomes B, W. O.
2015-10-01
Full text: In this study, an irradiation geometry applicable to PCXMC was developed and used to calculate effective doses in cone beam computed tomography (CBCT). Two different CBCT units for dental applications were evaluated: the Carestream CS 9000 3D and the Gendex GXCB-500 tomographs. Each protocol was first characterized by measuring the entrance surface kerma and the air kerma-area product, P_KA. Then, the technical parameters of each of the predetermined protocols and the geometric conditions were introduced into the PCXMC software to obtain the effective dose values. The calculated effective dose is within the range of 9.0 to 15.7 μSv for the CS 9000 3D and in the range of 44.5 to 89 mSv for the GXCB-500 equipment. These values were compared with dosimetric results obtained using thermoluminescent dosimeters implanted in an anthropomorphic mannequin and were considered consistent. The effective dose results are very sensitive to the irradiation geometry (beam position); this is a weakness of the software, but on the other hand it turns out to be a very useful tool for quick conclusions regarding the optimization of protocols. We conclude that the Monte Carlo simulation software PCXMC is useful in the evaluation of CBCT protocols in dental applications. (Author)
Monte Carlo calculations for the simulation of channelling experiments with V₃Si single crystals
International Nuclear Information System (INIS)
Kaufmann, R.
1978-05-01
The results of channelling investigations on single crystals of A15-type structure, such as V₃Si, are not directly comparable to analytical model calculations. Therefore the channelling process was simulated in a Monte Carlo program on the basis of the binary-collision model. The calculated values of the minimum yield, χ_min, and the critical angle, ψ_1/2, were in good agreement with the results of experiments with 2 MeV ⁴He⁺ particles. The lattice damage in the first 2000 Å below the surface after irradiation with a fluence of 6 × 10¹⁶ ⁴He⁺/cm² at 300 keV could be explained by normally distributed static displacements of the V atoms with a mean value of 0.05 Å. The transverse damage structure after irradiation with a fluence of 1.5 × 10¹⁶ ⁴He⁺/cm² at 50 keV could be simulated by a step profile of 50% displacements of the V atoms with a maximum value of 0.5 Å at the depth of the projected range. (orig.)
Botta, F; Mairani, A; Battistoni, G; Cremonesi, M; Di Dia, A; Fassò, A; Ferrari, A; Ferrari, M; Paganelli, G; Pedroli, G; Valente, M
2011-07-01
The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulation or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one chosen. FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (⁸⁹Sr, ⁹⁰Y, ¹³¹I, ¹⁵³Sm, ¹⁷⁷Lu, ¹⁸⁶Re, and ¹⁸⁸Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous-slowing-down-approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X90 for electrons and isotopes, respectively. Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone).
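The comparison metric used above — the average percentage difference between two kernels out to a fraction of R_CSDA — is straightforward to compute once both DPKs are tabulated on the same radial grid. The sketch below uses invented stand-in kernels (not FLUKA or PENELOPE data) purely to show the bookkeeping.

```python
import numpy as np

def avg_pct_diff(r, dpk_a, dpk_b, r_limit):
    """Average absolute percentage difference between two dose point
    kernels over radii up to r_limit (e.g. 0.9 * R_CSDA)."""
    mask = r <= r_limit + 1e-9       # epsilon guards float grid round-off
    diff = 100.0 * np.abs(dpk_a[mask] - dpk_b[mask]) / dpk_b[mask]
    return float(diff.mean())

# Illustrative kernels on a radial grid in units of R_CSDA (arbitrary
# dose units; these are stand-ins, not code outputs from the study).
r = np.linspace(0.05, 1.0, 20)
ref = np.exp(-3.0 * r) * (1.0 + 2.0 * r)   # stand-in reference kernel
test_k = ref * (1.0 + 0.02 * r)            # stand-in kernel, ~2% high at r=1
print(f"mean difference within 0.9 R_CSDA: {avg_pct_diff(r, test_k, ref, 0.9):.2f}%")
```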
SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields
International Nuclear Information System (INIS)
Varadhan; Way, S; Arentsen, L; Gerbi, B
2016-01-01
Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small-field-size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity linac and all available applicator sizes (6, 10, 14, 20 and 25 cone). Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80-20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm². Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom 2 scanning system and compared against eMC-calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC-calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb approximating the limiting cutout size for an electron field by the lateral scatter equilibrium condition (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.
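The rule of thumb the conclusion refers to is a one-line formula: the smallest cutout diameter preserving lateral scatter equilibrium is roughly E(MeV)/2.5 centimeters in water. Tabulating it for the energies studied shows why the finding matters — at 15 and 18 MeV the rule would forbid 3 cm cutouts that the eMC algorithm handled accurately.

```python
def min_cutout_diameter_cm(energy_mev):
    """Classic rule of thumb for the smallest electron cutout retaining
    lateral scatter equilibrium: diameter ~ E(MeV) / 2.5 cm in water."""
    return energy_mev / 2.5

for e in (6, 9, 12, 15, 18):
    print(f"{e:2d} MeV -> rule-of-thumb limit {min_cutout_diameter_cm(e):.1f} cm")
```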
International Nuclear Information System (INIS)
Ding, Aiping; Liu, Tianyu; Liang, Chao; Ji, Wei; Shephard, Mark S.; Xu, X George; Brown, Forrest B.
2011-01-01
Monte Carlo simulation is ideally suited for solving Boltzmann neutron transport equation in inhomogeneous media. However, routine applications require the computation time to be reduced to hours and even minutes in a desktop system. The interest in adopting GPUs for Monte Carlo acceleration is rapidly mounting, fueled partially by the parallelism afforded by the latest GPU technologies and the challenge to perform full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem and an eigenvalue/criticality problem were developed for CPU and GPU environments, respectively, to evaluate issues associated with computational speedup afforded by the use of GPUs. The results suggest that a speedup factor of 30 in Monte Carlo radiation transport of neutrons is within reach using the state-of-the-art GPU technologies. However, for the eigenvalue/criticality problem, the speedup was 8.5. In comparison, for a task of voxelizing unstructured mesh geometry that is more parallel in nature, the speedup of 45 was obtained. It was observed that, to date, most attempts to adopt GPUs for Monte Carlo acceleration were based on naïve implementations and have not yielded the level of anticipated gains. Successful implementation of Monte Carlo schemes for GPUs will likely require the development of an entirely new code. Given the prediction that future-generation GPU products will likely bring exponentially improved computing power and performances, innovative hardware and software solutions may make it possible to achieve full-core Monte Carlo calculation within one hour using a desktop computer system in a few years. (author)
International Nuclear Information System (INIS)
Xiang, Hong F.; Song, Jun S.; Chin, David W. H.; Cormack, Robert A.; Tishler, Roy B.; Makrigiorgos, G. Mike; Court, Laurence E.; Chin, Lee M.
2007-01-01
This work investigates the application and accuracy of the micro-MOSFET for superficial dose measurement under clinically used MV x-ray beams. The dose response of the micro-MOSFET in the build-up region and on the surface under MV x-ray beams was measured and compared to Monte Carlo calculations. First, percentage depth doses were measured with the micro-MOSFET under 6 and 10 MV beams at normal incidence onto a flat solid-water phantom. Micro-MOSFET data were compared with measurements from a parallel-plate ionization chamber and with Monte Carlo dose calculations in the build-up region. Then, percentage depth doses were measured for oblique beams at 0°-80° onto the flat solid-water phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm below the surface; measurements were compared to Monte Carlo calculations under these settings. Finally, measurements were performed with the micro-MOSFET embedded in the first 1 mm layer of bolus placed on a flat phantom and on a curved phantom of semi-cylindrical shape, and results were compared to the superficial dose calculated with Monte Carlo for a 2 mm layer extending from the surface to a depth of 2 mm. The results were: (1) Comparison of measurements with MC calculations in the build-up region showed that the micro-MOSFET has a water-equivalent thickness (WET) of 0.87 mm for the 6 MV beam and 0.99 mm for the 10 MV beam from the flat side, and a WET of 0.72 mm for the 6 MV beam and 0.76 mm for the 10 MV beam from the epoxy side. (2) For normal beam incidence, percentage depth doses agree within 3%-5% among micro-MOSFET measurements, parallel-plate ionization chamber measurements, and MC calculations. (3) For oblique incidence on the flat phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm, measurements were consistent with MC calculations within a typical uncertainty of 3%-5%. (4) For oblique incidence on the flat phantom and the curved-surface phantom, measurements with the micro-MOSFET placed at a depth of 1.0 mm agree with the MC
Kramer, R; Khoury, H J; Vieira, J W; Loureiro, E C M; Lima, V J M; Lima, F R A; Hoff, G
2004-12-07
The International Commission on Radiological Protection (ICRP) has created a task group on dose calculations, which, among other objectives, should replace the currently used mathematical MIRD phantoms by voxel phantoms. Voxel phantoms are based on digital images recorded from scanning of real persons by computed tomography or magnetic resonance imaging (MRI). Compared to the mathematical MIRD phantoms, voxel phantoms are true to the natural representations of a human body. Connected to a radiation transport code, voxel phantoms serve as virtual humans for which equivalent dose to organs and tissues from exposure to ionizing radiation can be calculated. The principal database for the construction of the FAX (Female Adult voXel) phantom consisted of 151 CT images recorded from scanning of trunk and head of a female patient, whose body weight and height were close to the corresponding data recommended by the ICRP in Publication 89. All 22 organs and tissues at risk, except for the red bone marrow and the osteogenic cells on the endosteal surface of bone ('bone surface'), have been segmented manually with a technique recently developed at the Departamento de Energia Nuclear of the UFPE in Recife, Brazil. After segmentation the volumes of the organs and tissues have been adjusted to agree with the organ and tissue masses recommended by ICRP for the Reference Adult Female in Publication 89. Comparisons have been made with the organ and tissue masses of the mathematical EVA phantom, as well as with the corresponding data for other female voxel phantoms. The three-dimensional matrix of the segmented images has eventually been connected to the EGS4 Monte Carlo code. Effective dose conversion coefficients have been calculated for exposures to photons, and compared to data determined for the mathematical MIRD-type phantoms, as well as for other voxel phantoms.
International Nuclear Information System (INIS)
Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang
2014-01-01
The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)
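When uncertainty contributions come from independent parameter sensitivities like those listed above (solution density, source positioning, bath radius, cross-sections), a standard way to combine them is in quadrature, u² = Σ (∂F/∂p · u_p)². The sketch below shows that bookkeeping with invented sensitivity values; it is generic uncertainty propagation, not necessarily the specific evaluation method the paper proposes.

```python
import math

def combined_uncertainty(terms):
    """Combine independent uncertainty contributions in quadrature:
    each term is (sensitivity dF/dp, standard uncertainty u_p)."""
    return math.sqrt(sum((s * u) ** 2 for s, u in terms))

# Illustrative contributions to a correction factor (values are made up):
terms = [
    (0.8, 0.002),   # solution density
    (0.1, 0.001),   # source positioning
    (0.5, 0.004),   # bath radius
    (1.2, 0.003),   # capture cross sections
]
print(f"combined relative uncertainty: {combined_uncertainty(terms):.4f}")
```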
International Nuclear Information System (INIS)
Jacimovic, R.; Maucec, M.; Trkov, A.
2002-01-01
In this work experimental verification of Monte Carlo neutron flux calculations in the carousel facility (CF) of the 250 kW TRIGA Mark II reactor at the Jozef Stefan Institute is presented. Simulations were carried out using the Monte Carlo radiation-transport code MCNP4B. The objective of the work was to model and verify experimentally the azimuthal variation of neutron flux in the CF for core No. 176, set up in April 2002. The 198Au activities of Al-Au(0.1%) disks irradiated in 11 channels of the CF, covering 180° around the perimeter of the core, were measured. The comparison between MCNP calculation and measurement shows relatively good agreement and demonstrates the overall accuracy with which the detailed spectral characteristics can be predicted by calculations. (author)
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.
Martinez-Rovira, I; Sempau, J; Prezado, Y
2012-05-01
Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
The iterative hopping expansion algorithm for Monte Carlo calculations with very light fermions
International Nuclear Information System (INIS)
Montvay, I.
1985-03-01
The number of numerical operations necessary for a Monte Carlo simulation with very light fermions (like u- and d-quarks in quantum chromodynamics) is estimated within the iterative hopping expansion method. (orig.)
International Nuclear Information System (INIS)
Mitrica, B.; Brancus, I.M.; Toma, G.; Bercuci, A.; Aiftimiei, C.; Wentz, J.; Rebel, H.
2004-01-01
Atmospheric muons are produced in the interactions of primary cosmic-ray particles with the Earth's atmosphere, mainly through the decay of pions and kaons generated in hadronic interactions; these in turn decay into electrons, positrons, and electron and muon neutrinos. As the penetrating component of cosmic rays, muons pass entirely through the atmosphere and can traverse even larger absorbers before interacting with material at the Earth's surface, and through the cosmogenic production of isotopes by atmospheric muons, information of astrophysical, environmental, and materials-research interest can be obtained. Up to now, mainly semi-analytical approximations have been used to calculate the muon flux for estimating the cosmogenic isotope production required for different applications. Our estimation of the atmospheric muon flux is based on the Monte Carlo simulation program CORSIKA, with which we simulate the development of extensive air showers in the atmosphere, using different models for the description of the hadronic interactions.
Monte Carlo calculation of the maximum therapeutic gain of tumor antivascular alpha therapy
Energy Technology Data Exchange (ETDEWEB)
Huang, Chen-Yu; Oborn, Bradley M.; Guatelli, Susanna; Allen, Barry J. [Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia); Illawarra Cancer Care Centre, Wollongong, New South Wales 2522, Australia and Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Centre for Experimental Radiation Oncology, St. George Clinical School, University of New South Wales, Kogarah, New South Wales 2217 (Australia)
2012-03-15
Purpose: Metastatic melanoma lesions experienced marked regression after systemic targeted alpha therapy in a phase 1 clinical trial. This unexpected response was ascribed to tumor antivascular alpha therapy (TAVAT), in which effective tumor regression is achieved by killing endothelial cells (ECs) in tumor capillaries and, thus, depriving cancer cells of nutrition and oxygen. The purpose of this paper is to quantitatively analyze the therapeutic efficacy and safety of TAVAT by building up the testing Monte Carlo microdosimetric models. Methods: Geant4 was adapted to simulate the spatial nonuniform distribution of the alpha emitter {sup 213}Bi. The intraluminal model was designed to simulate the background dose to normal tissue capillary ECs from the nontargeted activity in the blood. The perivascular model calculates the EC dose from the activity bound to the perivascular cancer cells. The key parameters are the probability of an alpha particle traversing an EC nucleus, the energy deposition, the lineal energy transfer, and the specific energy. These results were then applied to interpret the clinical trial. Cell survival rate and therapeutic gain were determined. Results: The specific energy for an alpha particle hitting an EC nucleus in the intraluminal and perivascular models is 0.35 and 0.37 Gy, respectively. As the average probability of traversal in these models is 2.7% and 1.1%, the mean specific energy per decay drops to 1.0 cGy and 0.4 cGy, which demonstrates that the source distribution has a significant impact on the dose. Using the melanoma clinical trial activity of 25 mCi, the dose to tumor EC nucleus is found to be 3.2 Gy and to a normal capillary EC nucleus to be 1.8 cGy. These data give a maximum therapeutic gain of about 180 and validate the TAVAT concept. Conclusions: TAVAT can deliver a cytotoxic dose to tumor capillaries without being toxic to normal tissue capillaries.
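The per-decay dose bookkeeping quoted above can be checked with a one-line calculation: the mean specific energy per decay is the per-hit specific energy scaled by the probability that an alpha particle traverses the endothelial-cell nucleus. This is a sketch of the arithmetic only, not of the Geant4 microdosimetric models themselves:

```python
def mean_specific_energy_per_decay(z_per_hit_gy, p_traversal):
    """Mean specific energy per decay (Gy): per-hit specific energy
    times the nucleus-traversal probability."""
    return z_per_hit_gy * p_traversal

# Values quoted in the abstract for the two source geometries.
intraluminal = mean_specific_energy_per_decay(0.35, 0.027)  # ~1.0 cGy
perivascular = mean_specific_energy_per_decay(0.37, 0.011)  # ~0.4 cGy
```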
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).
Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M
2007-02-21
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
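The pipeline above (monoenergetic absorbed fractions combined into per-radionuclide S-factors) follows the standard MIRD relation S = Σ_i Δ_i φ_i / m. A minimal single-target sketch with illustrative emission data, not the paper's actual tables:

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_factor(mean_energies_mev, absorbed_fractions, target_mass_kg):
    """MIRD-style S-factor in Gy/(Bq*s): for each emission, the mean
    energy per decay times the absorbed fraction in the target, summed
    and divided by the target mass."""
    energy_j = sum(e * MEV_TO_J * phi
                   for e, phi in zip(mean_energies_mev, absorbed_fractions))
    return energy_j / target_mass_kg

# Illustrative: a single 1 MeV emission, 50% absorbed in a 1 g organ.
s = s_factor([1.0], [0.5], 0.001)
```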
Aboulbanine, Zakaria; El Khayati, Naïma
2018-04-01
The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field-size and symmetry effects: three square fields of different sizes and an asymmetric rectangular field. Good agreement in terms of the gamma-index formalism, for 3%/3 mm and 2%/3 mm criteria, for each evaluated radiation field and photon beam was obtained within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions, using the distance-to-agreement (DTA) concept, also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. Regarding calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential
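The distance-to-agreement figure of merit used above can be illustrated with a minimal 1-D implementation. This is a sketch only; clinical DTA is evaluated in 2-D or 3-D on much finer grids:

```python
import numpy as np

def dta_1d(ref_pos, ref_dose, eval_pos, eval_dose, tol=0.01):
    """For each reference point, return the distance to the nearest
    evaluated point whose dose agrees within `tol` of the maximum
    reference dose (infinity if no point agrees)."""
    out = []
    for p, d in zip(ref_pos, ref_dose):
        match = np.abs(eval_dose - d) <= tol * ref_dose.max()
        out.append(np.abs(eval_pos[match] - p).min() if match.any() else np.inf)
    return np.array(out)
```

For identical profiles every point matches itself, so the DTA is zero everywhere; a rigid shift of the evaluated profile shows up directly as a uniform DTA.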
International Nuclear Information System (INIS)
Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun
2014-01-01
The CADIS method uses a deterministic calculation of adjoint fluxes to determine the parameters used in variance reduction; this is called a hybrid Monte Carlo method. The CADIS method, however, has a limitation in reducing the stochastic errors of all responses. The Forward-Weighted CADIS (FW-CADIS) method was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used as well. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with those of the FW-CADIS method, and it was analyzed how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors over the tally regions to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. Using the FW-CADIS method, the average of the relative errors was minimized; the MR-CADIS method, however, gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of a multi-response problem more efficiently and uniformly than the FW-CADIS method.
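As a rough sketch of the idea behind forward weighting (not the authors' implementation), the FW-CADIS adjoint source divides each detector response function by its forward-flux estimate of the response, so that responses of very different magnitudes converge with comparable relative uncertainties:

```python
import numpy as np

def fw_cadis_adjoint_source(response_xs_list, forward_flux):
    """Sum of each response cross-section scaled by 1/R_i, where
    R_i = sum(xs_i * forward_flux) is the forward estimate of response i.
    All arrays are discretized over the same spatial/energy cells."""
    src = np.zeros_like(forward_flux)
    for xs in response_xs_list:
        R = float(np.sum(xs * forward_flux))
        src += xs / R
    return src
```

With two responses whose forward estimates differ by a factor of four, the scaling equalizes their contributions to the adjoint source, which is exactly the uniform-relative-error behavior the abstract contrasts with MR-CADIS.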
Directory of Open Access Journals (Sweden)
Takahashi Wataru
2012-02-01
Full Text Available Abstract Background The purpose of this study was to compare dose distributions from three different algorithms with the x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans, for use in stereotactic radiotherapy (SRT) of small lung cancers. Methods Slow CT scan of 20 patients was performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other treatment planning systems (TPSs). Collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculating. The dose distributions and the dose-volume histograms (DVHs) were compared with each other. Results The phantom test revealed that all algorithms could reproduce the measured data within 1% except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) doses and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Conclusions Our work clearly confirms that the actual practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning. Meanwhile, CCC and SP were close to XVMC simulations and actual dose distributions obtained in lung SRT.
International Nuclear Information System (INIS)
Jinaphanh, A.
2012-01-01
Monte Carlo criticality calculation allows estimation of the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high-burnup profile, complete reactor core, ...) may induce biased estimates of k eff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k eff and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
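The transient-suppression idea above (model the stationary tail of the output series as a first-order autoregressive process, then scan for where that model starts to hold) can be sketched as follows. The stationarity criterion here is a crude stand-in for the Student-bridge test used in the thesis:

```python
import numpy as np

def ar1_fit(x):
    """Least-squares fit of x[t+1] = c + rho*x[t] + eps; returns rho and
    the residual standard deviation."""
    x0, x1 = x[:-1], x[1:]
    A = np.vstack([np.ones_like(x0), x0]).T
    (c, rho), *_ = np.linalg.lstsq(A, x1, rcond=None)
    resid = x1 - (c + rho * x0)
    return rho, resid.std(ddof=2)

def transient_end(series, window=50, rho_max=0.99):
    """Scan forward through the series (e.g. per-cycle k_eff or Shannon
    entropy) and return the first index whose following window looks
    stationary, i.e. |rho| well below 1; None if no such window exists."""
    series = np.asarray(series, dtype=float)
    for start in range(0, len(series) - window):
        rho, _ = ar1_fit(series[start:start + window])
        if abs(rho) < rho_max:
            return start
    return None
```

Cycles before the returned index would be discarded before averaging the tally.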
Energy Technology Data Exchange (ETDEWEB)
Sampaio, E V.M.
1986-12-31
Using the MIRD 5 phantom and the Monte Carlo technique, organ doses in patients undergoing external dental examination were calculated, taking into account the different x-ray beam geometries and the various possible positions of the x-ray source with respect to the head of the patient. It was necessary to introduce in the original computer program a new source description specific for dental examinations. To obtain a realistic evaluation of organ doses during dental examination, it was also necessary to introduce a new region in the phantom head which characterizes the teeth and salivary glands. The attenuation of the x-ray beam by the lead shield of the radiographic film was also introduced in the calculation. (author).
International Nuclear Information System (INIS)
Thomas, D; O’Connell, D; Lamb, J; Cao, M; Yang, Y; Agazaryan, N; Lee, P; Low, D
2015-01-01
Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte-Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte-Carlo dose calculation was performed every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte-Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold- and hot-spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte-Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and assessing the effectiveness of gated treatments.
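The deform-and-accumulate step can be sketched with a precomputed voxel mapping standing in for the motion-model deformation. This is a nearest-voxel push, ignoring the interpolation and mass-conservation subtleties of a real dose-warping implementation:

```python
import numpy as np

def accumulate(dose_frames, voxel_maps, ref_shape):
    """Sum per-time-step dose grids onto the reference grid.
    voxel_maps[t] gives, for every voxel of frame t, its flat index in
    the reference image (a stand-in for applying the deformation field
    of the motion model at that time step)."""
    total = np.zeros(ref_shape).ravel()
    for dose, vmap in zip(dose_frames, voxel_maps):
        # np.add.at handles repeated target indices correctly,
        # i.e. several moving voxels may map onto one reference voxel.
        np.add.at(total, vmap.ravel(), dose.ravel())
    return total.reshape(ref_shape)
```

With an identity mapping the result is simply the sum of the per-segment doses; tumor motion shows up as the mapping redistributing dose relative to the static plan.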
International Nuclear Information System (INIS)
Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang
2014-01-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the d max dose for those open fields tested was improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
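The commissioning step above (adjust PSL weights so that the weighted sum of pre-computed per-PSL doses matches measurement) is, at its core, a regularized least-squares problem. The sketch below replaces the paper's augmented-Lagrangian solver and symmetry constraints with a plain normal-equations solve and a simple smoothness penalty:

```python
import numpy as np

def commission_psl_weights(psl_doses, measured, smooth=1e-2):
    """Solve min_w ||D w - m||^2 + smooth * ||L w||^2, where the columns
    of D are pre-computed per-PSL dose distributions, m is the measured
    dose, and L is a first-difference operator penalizing jumps between
    neighbouring PSL weights."""
    D = np.asarray(psl_doses)          # shape: (measurement points, n_psl)
    n = D.shape[1]
    L = np.eye(n) - np.eye(n, k=1)     # first-difference smoothing operator
    A = D.T @ D + smooth * (L.T @ L)
    return np.linalg.solve(A, D.T @ np.asarray(measured))
```

With a well-conditioned dose matrix and a small smoothing term, the solver recovers the weights that generated a synthetic "measurement" exactly, which is the basic consistency check behind the simulation study in the abstract.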
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but they fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
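One RMMC hop then amounts to a table lookup plus a categorical draw from the pre-tabulated exit distribution. A minimal sketch, with a hypothetical table layout rather than the authors' data structures:

```python
import random

def rmmc_hop(ebin, response_table, rng=random):
    """Sample the electron state after traversing one local region.
    response_table[ebin] holds (exit_energy_bin, exit_cosine, probability)
    triples, tabulated beforehand from an analog Monte Carlo simulation
    of that small region."""
    outcomes = response_table[ebin]
    probs = [p for _, _, p in outcomes]
    e_out, mu_out, _ = rng.choices(outcomes, weights=probs)[0]
    return e_out, mu_out
```

Chaining such hops replaces thousands of individual Coulomb-scattering events with a single sampled region crossing, which is where the claimed speed-up over analog transport comes from.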
International Nuclear Information System (INIS)
Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O.
2011-01-01
Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire
Energy Technology Data Exchange (ETDEWEB)
Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O. [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)
2011-08-15
Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire
Energy Technology Data Exchange (ETDEWEB)
Utsunomiya, S; Kushima, N; Katsura, K; Tanabe, S; Hayakawa, T; Sakai, H; Yamada, T; Takahashi, H; Abe, E; Wada, S; Aoyama, H [Niigata University, Niigata (Japan)
2016-06-15
Purpose: To establish a simple relation of the backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy to its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate the dose enhancement, which is quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparing with basic measured data, namely PDDs and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded in a tough-water phantom irradiated by a 6-MV (nominal) x-ray beam of 10 cm × 10 cm field size from a Novalis TX (Brainlab). Ten different high-Z materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparing with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation for the relations of the BSDF and the backscatter range to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with that measured by Gafchromic EBT3 films 0 to 10 mm away from the high-Z alloy. We found simple linear relations of the BSDF and of the backscatter range to the average atomic number of the dental alloys. The latter relation is explained by the fact that the energy spectrum of the backscattered electrons strongly depends on the average atomic number. Conclusion: We found a simple relation of the backscatter dose enhancement around high-Z alloys to their average atomic number, based on Monte Carlo calculations. This work provides a simple and useful method to estimate the backscatter dose enhancement from dental alloys and the corresponding optimal thickness of a dental spacer to prevent mucositis effectively.
Energy Technology Data Exchange (ETDEWEB)
Mendenhall, Marcus H., E-mail: marcus.h.mendenhall@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States); Weller, Robert A., E-mail: robert.a.weller@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States)
2012-03-01
In Monte Carlo particle transport codes, it is often important to adjust reaction cross-sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross-section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross-section change. This makes it possible to increase the cross-section of nuclear reactions by factors exceeding 10⁴ (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful in problems that involve the computation of particle penetration deep into a target (e.g. atmospheric showers or shielding studies).
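The weight adjustment at the heart of this non-analog technique can be illustrated in a single free-flight sampler: scale the cross-section by a bias factor, then multiply the track weight by the ratio of the analog to the biased sampling densities so that weighted tallies remain unbiased. This is a minimal one-dimensional sketch of the general idea, not the Geant4 implementation described in the abstract.

```python
import math
import random


def sample_biased_collision(sigma, bias, rng, length):
    """Sample a collision distance in a slab of the given length with the
    cross-section scaled by `bias`. Returns (distance_or_None, weight),
    where the weight restores the analog expectation of weighted tallies.
    """
    sigma_b = bias * sigma                       # biased cross-section
    d = -math.log(rng.random()) / sigma_b        # biased free flight
    if d > length:
        # No collision in the slab: correct for the altered survival
        # probability, exp(-sigma*L) / exp(-sigma_b*L).
        return None, math.exp((sigma_b - sigma) * length)
    # Collision at d: ratio of analog to biased collision densities,
    # [sigma*exp(-sigma*d)] / [sigma_b*exp(-sigma_b*d)].
    weight = (sigma / sigma_b) * math.exp((sigma_b - sigma) * d)
    return d, weight
```

For a rare interaction (small `sigma`), choosing `bias >> 1` forces most histories to collide, yet the weighted collision tally still converges to the analog probability 1 − exp(−sigma·L), with far lower variance than the analog game. A bias below unity works the same way, which mirrors the deep-penetration use case mentioned in the abstract.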
DEEP code to calculate dose equivalents in human phantom for external photon exposure