APOLLO2 code self-shielding formalism
International Nuclear Information System (INIS)
This report describes the various self-shielding methods used in the APOLLO2 code for treating one resonant nucleus or a mixture of resonant nuclei. The methods are expounded in chronological order. First of all, the methods dealing with one resonant isotope are explained. Then an original method dealing directly with a resonant mixture is detailed. This new method is also convenient for one resonant nucleus and leads, in that case, to interesting improvements in the self-shielding modeling. (author)
Self-shielding models of MICROX-2 code: Review and updates
International Nuclear Information System (INIS)
Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including the generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvements to the self-shielding methods were assessed by a series of benchmark calculations against the Monte Carlo code MCNPX, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculations is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models considered in this study, respectively.
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Nuclear Engineering, Faculty of Modern Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)], E-mail: mnnasrabadi@ast.ui.ac.ir; Mohammadi, A. [Department of Physics, Payame Noor University (PNU), Kohandej, Isfahan (Iran, Islamic Republic of); Jalali, M. [Isfahan Nuclear Science and Technology Research Institute (NSTRT), Reactor and Accelerators Research and Development School, Atomic Energy Organization of Iran (Iran, Islamic Republic of)
2009-07-15
In this paper, bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, the gamma self-shielding coefficient is required. The gamma self-shielding coefficients of unknown samples were estimated both by an experimental method and by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA, where knowledge of the gamma self-shielding within the sample volume is required.
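The abstract does not give the functional form of the gamma self-shielding coefficient. As a purely illustrative sketch (the function name and the attenuation coefficient value below are assumptions, not taken from the paper), the standard slab-average transmission factor for a uniformly emitting sample captures the idea:

```python
import math

def gamma_self_shielding_slab(mu_cm: float, thickness_cm: float) -> float:
    """Mean gamma escape factor for a uniformly emitting slab of thickness t
    and linear attenuation coefficient mu: f = (1 - exp(-mu*t)) / (mu*t).
    f -> 1 for an optically thin sample, f -> 0 for a strongly absorbing one."""
    x = mu_cm * thickness_cm
    if x == 0.0:
        return 1.0
    return (1.0 - math.exp(-x)) / x

# Thin, water-like sample (illustrative mu for ~1 MeV gammas): nearly transparent
print(round(gamma_self_shielding_slab(0.086, 0.5), 3))  # -> 0.979
```

In practice, as the abstract notes, such a coefficient would be obtained experimentally or from a full 3D MCNP simulation rather than a one-dimensional formula.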
REPOSITORY LAYOUT SUPPORTING DESIGN FEATURE #13- WASTE PACKAGE SELF SHIELDING
Energy Technology Data Exchange (ETDEWEB)
J. Owen
1999-04-09
The objective of this analysis is to develop a repository layout, for Feature No. 13, that will accommodate self-shielding waste packages (WP) with an areal mass loading of 25 metric tons of uranium per acre (MTU/acre). The scope of this analysis includes determination of the number of emplacement drifts, amount of emplacement drift excavation required, and a preliminary layout for illustrative purposes.
MPACT Subgroup Self-Shielding Efficiency Improvements
Energy Technology Data Exchange (ETDEWEB)
Stimpson, Shane [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Collins, Benjamin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Clarno, Kevin T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-08-31
Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup in the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which is then used as the basis for further extensions. The next improvement covered is what is currently termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is the possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically, 16 azimuthal angles per octant are used for both the self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.
Self-shielding clumps in starburst clusters
Palouš, Jan; Ehlerová, Soňa; Tenorio-Tagle, Guillermo
2016-01-01
Young and massive star clusters above a critical mass form thermally unstable clumps that locally reduce the temperature and pressure of the hot 10^7 K cluster wind. The matter reinserted by stars, together with mass loaded in interactions with pristine gas and from evaporating circumstellar disks, accumulates on clumps that are ionized by photons produced by massive stars. We discuss whether they may become self-shielded when they reach the central part of the cluster, or even earlier, during their free fall to the cluster center. Here we explore the importance of the heating efficiency of stellar winds.
Self-shielding Electron Beam Installation for Sterilization
Institute of Scientific and Technical Information of China (English)
Linac; Laboratory
2002-01-01
China Institute of Atomic Energy (CIAE) has developed a self-shielding electron beam installation for sterilization, e.g. for treating letters contaminated with anthrax germs or spores, which has the least volume and the least
International Nuclear Information System (INIS)
Full text: One of the major problems encountered during the irradiation of large inhomogeneous samples in neutron activation analysis is the perturbation of the neutron field due to absorption and scattering of neutrons within the sample, as well as along the neutron guide in the case of prompt gamma activation analysis. The magnitude of this perturbation, expressed by the self-shielding coefficient and the flux depression, depends on several factors, including the average neutron energy, the size and shape of the sample, and the macroscopic absorption cross section of the sample. In this study, we use the Monte Carlo N-Particle (MCNP) code to simulate the variation of the neutron self-shielding coefficient and the thermal flux depression factor as a function of the macroscopic thermal absorption cross section. The simulation work was carried out using the high performance computing facility available at UTM, while the experimental work was performed at the tangential beam port of Reactor TRIGA PUSPATI, Malaysia Nuclear Agency. The neutron flux measured along the beam port is found to be in good agreement with the simulated data. Our simulation results also reveal that the total flux perturbation factor decreases as the absorption increases. This factor is close to unity for a weakly absorbing sample and tends towards zero for a strong absorber. In addition, a sample with a long mean chord length produces a smaller flux perturbation than one with a shorter mean chord length. Comparing the graphs of the self-shielding factor and the total disturbance, we conclude that the total disturbance of the thermal neutron flux in large samples is dominated by the self-shielding effect. (Author)
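The trends described above (a factor near unity for weak absorbers, falling toward zero for strong ones) match the standard chord-length approximation for a convex sample. A minimal sketch, assuming a purely absorbing sample and the Dirac mean chord length l̄ = 4V/S (the function name and sample values are illustrative, not from the study):

```python
import math

def thermal_self_shielding(sigma_a: float, mean_chord_cm: float) -> float:
    """Chord-length approximation for a convex, purely absorbing sample:
    f = (1 - exp(-Sigma_a * l)) / (Sigma_a * l), with l = 4V/S the mean chord.
    Sigma_a is the macroscopic thermal absorption cross section (cm^-1)."""
    tau = sigma_a * mean_chord_cm
    return 1.0 if tau == 0.0 else (1.0 - math.exp(-tau)) / tau

# f falls from ~1 toward 0 as the macroscopic absorption cross section grows
for sigma_a in (0.01, 0.1, 1.0, 10.0):
    print(sigma_a, round(thermal_self_shielding(sigma_a, 2.0), 3))
```

This reproduces the qualitative behaviour the MCNP simulations quantify for realistic geometries, where scattering and spectrum effects also enter.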
Self-shielding in large cross-section neutron absorbers
International Nuclear Information System (INIS)
This study deals with the effects on the neutron regime in several cases, among them a fuel bundle comprising 16 fuel rods made of sintered UO2 pellets clad in Zircaloy-4 and irradiated in the neutron trap. The variations of the average neutron flux and the effect of self-shielding were studied. Similar calculations were carried out, both theoretically and experimentally, for samples of europium oxide. Self-shielding effects were studied, and the variation of the effective multiplication factor was found as a function of mass. The isotope generation and depletion code ORIGEN was used to compute the radioactivity of fission products from irradiating uranium of different enrichments in the IRT-5000. The effects of self-shielding on the flux and on the activities were also determined. 14 tabs.; 34 figs.; 27 refs
Self-shielded electron linear accelerators designed for radiation technologies
Belugin, V. M.; Rozanov, N. E.; Pirozhenko, V. M.
2009-09-01
This paper describes self-shielded high-intensity electron linear accelerators designed for radiation technologies. The specific property of the accelerators is that they do not apply an external magnetic field; acceleration and focusing of electron beams are performed by radio-frequency fields in the accelerating structures. The main characteristics of the accelerators are high current and beam power, but also reliable operation and a long service life. To obtain these characteristics, a number of problems have been solved, including a particular optimization of the accelerator components and the application of a variety of specific means. The paper describes features of the electron beam dynamics, accelerating structure, and radio-frequency power supply. Several compact self-shielded accelerators for radiation sterilization and x-ray cargo inspection have been created. The introduced methods made it possible to obtain a high intensity of the electron beam and good performance of the accelerators.
Byoun, T. Y.; Block, R. C.; Semler, T. T.
1972-01-01
A series of average transmission and average self-indication ratio measurements were performed in order to investigate the temperature dependence of the resonance self-shielding effect in the unresolved resonance region of depleted uranium and tantalum. The measurements were carried out at 77 K, 295 K and approximately 1000 K with sample thicknesses varying from approximately 0.1 to 1.0 mean free path. The average resonance parameters as well as the temperature dependence were determined by using an analytical model which directly integrates over the resonance parameter distribution functions.
Measurement of resonance self-shielding factors of neutron capture cross section by 238U
International Nuclear Information System (INIS)
Resonance self-shielding factors f_c of the 238U neutron capture cross section in the 20-100 keV energy range are measured. The method for determining the f_c factor consists of measuring the partial transmission and the total-cross-section transmission at different 238U filter thicknesses. The f_c values in the 46.5-100 and 21.5-46.5 keV energy ranges are 0.89±0.03 and 0.81±0.04, respectively
International Nuclear Information System (INIS)
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables
International Nuclear Information System (INIS)
A calculation methodology for the flux depression, self-shielding and cadmium factors is presented, using the ANISN code, for experiments conducted at the IPEN/MB-01 Research Reactor. The correction factors were determined considering the thermal neutron flux and 197Au wires of 0.125 and 0.250 mm diameter. (author)
Energy Technology Data Exchange (ETDEWEB)
Munoz-Cobos, J.G.
1981-08-01
The Fortran IV code PAPIN has been developed to calculate cross section probability tables, Bondarenko self-shielding factors and average self-indication ratios for non-fissile isotopes, below the inelastic threshold, on the basis of the ENDF/B prescriptions for the unresolved resonance region. Monte-Carlo methods are utilized to generate ladders of resonance parameters in the unresolved resonance region, from average resonance parameters and their appropriate distribution functions. The neutron cross-sections are calculated by the single level Breit-Wigner (SLBW) formalism, with s, p and d-wave contributions. The cross section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielded factors are computed numerically as Lebesgue integrals over the cross section probability tables. The program PAPIN has been validated through extensive comparisons with several deterministic codes.
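The Lebesgue integration over a probability table that URR and PAPIN perform can be illustrated with a toy two-band table. This is a schematic of the narrow-resonance Bondarenko weighting against a background cross section σ0, not the actual PAPIN implementation (the table values below are invented for illustration):

```python
def bondarenko_factor(p, sigma_t, sigma0):
    """Bondarenko self-shielding factor from a cross-section probability
    table (p_i, sigma_t_i), using the narrow-resonance flux weight
    phi ~ 1/(sigma_t + sigma0):
        sigma_eff = <sigma/(sigma+sigma0)> / <1/(sigma+sigma0)>
        f         = sigma_eff / sigma_infinitely_dilute
    """
    num = sum(pi * si / (si + sigma0) for pi, si in zip(p, sigma_t))
    den = sum(pi / (si + sigma0) for pi, si in zip(p, sigma_t))
    sigma_eff = num / den
    sigma_inf = sum(pi * si for pi, si in zip(p, sigma_t))  # infinitely dilute average
    return sigma_eff / sigma_inf

# Toy two-band table: a background band and a high-cross-section resonance band
p = [0.9, 0.1]
sigma_t = [5.0, 500.0]
print(round(bondarenko_factor(p, sigma_t, 1e6), 3))   # -> 1.0 (infinite dilution)
print(round(bondarenko_factor(p, sigma_t, 10.0), 3))  # -> 0.121 (strong self-shielding)
```

The factor approaches unity as σ0 grows (infinite dilution) and drops well below one when the absorber dominates its own flux depression, which is exactly the behaviour tabulated by these codes.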
Formation Mechanism of Inclusion in Self-Shielded Flux Cored Arc Welds
Institute of Scientific and Technical Information of China (English)
YU Ping; LU Xiao-sheng; PAN Chuan; XUE Jin; LI Zheng-bang
2005-01-01
The formation mechanism of inclusions in welds with different aluminum contents was determined based on thermodynamic equilibrium in self-shielded flux cored arc welds. Inclusions in the welds were systematically studied by optical microscopy, scanning electron microscopy and image analysis. The results show that the average size and the contamination rate of inclusions in the low-aluminum weld are lower than those in the high-aluminum weld. Large, highly faceted AlN inclusions are more numerous in the high-aluminum weld than in the low-aluminum weld. As a result, the low-temperature impact toughness of the low-aluminum weld is higher than that of the high-aluminum weld. Finally, the thermodynamic analysis indicates that the thermodynamic results agree with the experimental data.
Study on the Processing Method for Resonance Self-shielding Calculations
Institute of Scientific and Technical Information of China (English)
(no author listed)
2011-01-01
We investigate a new approach for resonance self-shielding calculations, based on a straightforward subgroup method used in association with the method of characteristics. The subgroup method is essentially a subdivision of the cross-section range within the resonance energy range.
Self-shielding effect of a single phase liquid xenon detector for direct dark matter search
Minamino, A.; Abe, K.; Ashie, Y.; Hosaka, J.; Ishihara, K; Kobayashi, K; Koshio, Y.; Mitsuda, C.; Moriyama, S.; Nakahata, M.(University of Tokyo, Institute for Cosmic Ray Research, Kamioka Observatory, Kamioka, Japan); Nakajima, Y; Namba, T.; Ogawa, H.; Sekiya, H.; Shiozawa, M
2009-01-01
Liquid xenon is a suitable material for a dark matter search. For future large scale experiments, single phase detectors are attractive due to their simple configuration and scalability. However, in order to reduce backgrounds, they need to fully rely on liquid xenon's self-shielding property. A prototype detector was developed at Kamioka Observatory to establish vertex and energy reconstruction methods and to demonstrate the self-shielding power against gamma rays from outside of the detecto...
Hartwig, Tilman; Glover, Simon C. O.; Klessen, Ralf S.; Latif, Muhammad A.; Volonteri, Marta
2015-09-01
High-redshift quasars at z > 6 have masses up to ∼10^9 M⊙. One of the pathways to their formation includes the direct collapse of gas, forming a supermassive star, the precursor of the black hole seed. The conditions for direct collapse are more easily achievable in metal-free haloes, where atomic hydrogen cooling operates and molecular hydrogen (H2) formation is inhibited by a strong external ultraviolet (UV) flux. Above a certain value of UV flux (Jcrit), the gas in a halo collapses isothermally at ∼10^4 K and provides the conditions for supermassive star formation. However, H2 can self-shield, reducing the effect of photodissociation. So far, most numerical studies have used the local Jeans length to calculate the column densities for self-shielding. We implement an improved method for the determination of column densities in 3D simulations and analyse its effect on the value of Jcrit. This new method captures the gas geometry and velocity field and enables us to properly determine the direction-dependent self-shielding factor of H2 against photodissociating radiation. We find a value of Jcrit that is a factor of 2 smaller than with the Jeans approach (∼2000 J21 versus ∼4000 J21). The main reason for this difference is the strong directional dependence of the H2 column density. With this lower value of Jcrit, the number of haloes exposed to a flux above Jcrit is larger by more than an order of magnitude compared to previous studies. This may translate into a similar enhancement in the predicted number density of black hole seeds.
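Once a direction-dependent H2 column density is in hand, the shielding factor itself is usually evaluated with a fitting formula. A commonly used one is the Draine & Bertoldi (1996) fit, sketched below; the paper above may use a different or modified fit, so treat this as a representative example (the default Doppler parameter is an illustrative assumption):

```python
import math

def h2_self_shielding(N_H2_cm2: float, b_kms: float = 3.0) -> float:
    """Draine & Bertoldi (1996) fit for the H2 self-shielding factor against
    Lyman-Werner photodissociating radiation:
        x  = N_H2 / 5e14 cm^-2
        b5 = Doppler broadening parameter in units of 1e5 cm/s (= km/s)
        f  = 0.965/(1 + x/b5)^2 + 0.035/sqrt(1+x) * exp(-8.5e-4 * sqrt(1+x))
    """
    x = N_H2_cm2 / 5.0e14
    b5 = b_kms  # 1 km/s = 1e5 cm/s
    return (0.965 / (1.0 + x / b5) ** 2
            + 0.035 / math.sqrt(1.0 + x) * math.exp(-8.5e-4 * math.sqrt(1.0 + x)))

print(h2_self_shielding(1e13))  # optically thin column: factor close to 1
print(h2_self_shielding(1e18))  # strongly shielded: photodissociation suppressed
```

Because the factor falls steeply with column density, the directional averaging the authors perform matters much more than a single Jeans-length estimate.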
Shavers, M. R.; Atwell, W.; Cucinotta, F. A.; Badhwar, G. D. (Technical Monitor)
1999-01-01
cell killing from GCR, including patterns of cell killing from single particle tracks, can provide useful information on expected differences between proton and HZE tracks and clinical experience with photon irradiation. To model effects on cells in the brain, it is important that transport models accurately describe changes in the GCR due to interactions in the cranium and proximate tissues. We describe calculations of the attenuated GCR particle fluxes at three dose-points in the brain and associated patterns of cell killing using biophysical models. The effects of brain self-shielding and the bone-tissue interface of the skull in modulating the GCR environment are considered. For each brain dose-point, the mass distribution in the surrounding 4(pi) solid angle is characterized using the CAM model to trace 512 rays. The CAM model describes the self-shielding by converting the tissue distribution to mass-equivalent aluminum, and nominal values of spacecraft shielding are considered. Particle transport is performed with the proton, neutron, and heavy-ion transport code HZETRN with the nuclear fragmentation model QMSFRG. The distribution of cells killed along the path of individual GCR ions is modeled using in vitro cell inactivation data for cells with varying sensitivity. Monte Carlo simulations of arrays of inactivated cells are considered for protons and heavy ions and used to describe the absolute number of cell-killing events of various magnitudes in the brain from the GCR. Included are simulations of the positions of inactivated cells from stopping heavy ions and from nuclear stars produced by high-energy ions, most importantly protons and neutrons.
Hartwig, Tilman; Klessen, Ralf S; Latif, Muhammad A; Volonteri, Marta
2015-01-01
The highest redshift quasars at z>6 have mass estimates of about a billion M$_\\odot$. One of the pathways to their formation includes direct collapse of gas, forming a supermassive star ($\\sim 10^5\\,\\mathrm{M}_\\odot$) precursor of the black hole seed. The conditions for direct collapse are more easily achievable in metal-free haloes, where atomic hydrogen cooling operates and molecular hydrogen (H$_2$) formation is inhibited by a strong external UV flux. Above a certain value of UV flux ($J_{\\rm crit}$), the gas in a halo collapses isothermally at $\\sim10^4$K and provides the conditions for supermassive star formation. However, H$_2$ can self-shield and the effect of photodissociation is reduced. So far, most numerical studies used the local Jeans length to calculate the column densities for self-shielding. We implement an improved method for the determination of column densities in 3D simulations and analyse its effect on the value of $J_{\\rm crit}$. This new method captures the gas geometry and velocity fie...
Directory of Open Access Journals (Sweden)
Zeng Huilin
2014-10-01
Full Text Available In order to realize the automatic welding of pipes in a complex operation environment, an automatic welding system has been developed using all-position self-shielded flux cored wires, chosen for their advantages: all-position weldability, good detachability, arc stability, low incidence of incomplete fusion, and no need for shielding gas or for protection against wind when the wind speed is < 8 m/s. This system consists of a welding carrier, a guide rail, an auto-control system, a welding power source, a wire feeder, and so on. Welding experiments with this system were performed on X-80 pipeline steel to determine proper welding parameters. The welding technique comprises root welding, filling welding and cover welding, and their welding parameters were obtained from experimental analysis. On this basis, mechanical property tests were carried out on the welded joints. The results show that this system can improve the continuity and stability of the whole welding process, and that the welded joints' inherent quality, appearance, and mechanical performance can all meet the welding criteria for X-80 pipeline steel; with no need for windbreak fences, the overall welding cost is sharply reduced. Meanwhile, proposals are presented for further research and development of these self-shielded flux cored wires.
Resonance self-shielding in the blanket of a hybrid reactor
International Nuclear Information System (INIS)
Three sets of energy group cross sections were obtained using various approximations for resonance self-shielding. The three models used in obtaining the cross sections were: (a) an infinitely dilute model, (b) homogeneous-medium resonance self-shielding, and (c) heterogeneous-medium resonance self-shielding. The effects on the blanket performance of fusion-fission hybrid reactors, and in particular on the performance of the current reference Westinghouse Demonstration Tokamak Hybrid Reactor blanket, were compared and analyzed for a variety of fuel-coolant combinations. It was concluded that (1) the infinitely dilute cross sections can be used to produce only preliminary, crude estimates at beginning-of-life (BOL), (2) the finite dilution of the resonance absorber should be considered at BOL for poorly moderated blankets and for well-moderated blankets with low fissile material content, and (3) spatial details should be considered for well-moderated blankets with high fissile content.
SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-06-06
The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) narrow resonance approximation, (b) pointwise flux calculations for homogeneous media; and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.
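How subgroup levels and weights reproduce a self-shielded cross section can be sketched with toy data (the three-level table and the flux model below are illustrative assumptions, not SUBGR output). A flux of the form φ_n ∝ 1/(σ_n + σ_b), with σ_b the background (escape plus dilution) cross section, collapses the levels:

```python
def subgroup_effective_xs(weights, levels, sigma_b):
    """Collapse subgroup levels/weights to an effective self-shielded cross
    section: sigma_eff = sum(w*s*phi) / sum(w*phi), phi_n = 1/(sigma_n + sigma_b).
    sigma_b is the background (escape + dilution) cross section per absorber atom."""
    phi = [1.0 / (s + sigma_b) for s in levels]
    num = sum(w * s * f for w, s, f in zip(weights, levels, phi))
    den = sum(w * f for w, f in zip(weights, phi))
    return num / den

# Toy 3-level subgroup data (weights sum to 1); infinitely dilute average = 113.9
w = [0.7, 0.25, 0.05]
s = [2.0, 50.0, 2000.0]
print(round(subgroup_effective_xs(w, s, 1e8), 1))  # dilute limit -> 113.9
print(round(subgroup_effective_xs(w, s, 10.0), 1))  # strongly self-shielded -> ~6.0
```

The same levels and weights thus span the whole range of background cross sections, which is why a small subgroup table can replace an ultra-fine-group flux solution in a code like MPACT.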
Evaluation of resonance self-shielding factors for 238U in the unresolved resonance region
International Nuclear Information System (INIS)
On the basis of a theoretical model of identical equidistant resonances for the energy dependence of cross-sections in the unresolved resonance region, the authors have parametrized the values of the resonance self-shielding factors and their Doppler increments for 238U. They have proposed a method by which the Doppler increments of the self-shielding factors can be calculated from simple analytical formulae by redetermination of the model parameters. Analysing the experimental data on direct and capture transmissions in the unresolved resonance region, they demonstrate the possibility of describing those data as a whole and of deriving from them the cross-section group functionals. (author)
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)], E-mail: mnnasri@kashanu.ac.ir; Jalali, M. [Isfahan Nuclear Science and Technology Research Institute, Atomic Energy organization of Iran (Iran, Islamic Republic of); Mohammadi, A. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)
2007-10-15
In this work, thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three-dimensional simulations of a neutron source, neutron detector and samples of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF3 detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA, where knowledge of the average thermal neutron flux within the sample volume is required.
Advanced resonance self-shielding method for gray resonance treatment in lattice physics code GALAXY
International Nuclear Information System (INIS)
A new resonance self-shielding method based on the equivalence theory is developed for general application to lattice physics calculations. The present scope includes commercial light water reactor (LWR) design applications, which require both calculation accuracy and calculation speed. In order to develop the new method, all the calculation processes, from cross-section library preparation to effective cross-section generation, are reviewed and reframed by adopting the current enhanced methodologies for lattice calculations. The new method is composed of the following four key methods: (1) a cross-section library generation method with a polynomial hyperbolic tangent formulation, (2) a resonance self-shielding method based on the multi-term rational approximation for general lattice geometry and gray resonance absorbers, (3) a spatially dependent gray resonance self-shielding method for generation of the intra-pellet power profile and (4) an integrated reaction rate preservation method between the multi-group and the ultra-fine-group calculations. From the various verifications and validations, the applicability of the present resonance treatment is fully confirmed. As a result, the new resonance self-shielding method is established, not only by extension of past concentrated effort in the reactor physics research field, but also by unification of newly developed unique and challenging techniques for practical application to lattice physics calculations. (author)
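The multi-term rational approximation at the heart of equivalence-theory methods like this one can be sketched generically. The coefficients shown are the classic one-term Wigner and two-term Carlvik choices, used here only for illustration; GALAXY's own optimized coefficients are not reproduced:

```python
def escape_probability(sigma_t, sigma_e, alphas, betas):
    """N-term rational approximation of the fuel escape probability used in
    equivalence theory:
        P_esc ~= sum_n beta_n * (alpha_n*Sigma_e) / (Sigma_t + alpha_n*Sigma_e)
    where Sigma_e = 1/(mean chord length) is the escape cross section and the
    beta_n sum to 1. Each term maps to a background cross section in the
    Bondarenko/subgroup sense, enabling table lookups per term."""
    return sum(b * a * sigma_e / (sigma_t + a * sigma_e)
               for a, b in zip(alphas, betas))

sigma_e = 0.5  # cm^-1, illustrative escape cross section
wigner = ([1.0], [1.0])               # one-term Wigner approximation
carlvik = ([2.0, 3.0], [2.0, -1.0])   # Carlvik's two-term fit for a cylinder
for sigma_t in (0.0, 1.0, 50.0):
    print(sigma_t,
          escape_probability(sigma_t, sigma_e, *wigner),
          escape_probability(sigma_t, sigma_e, *carlvik))
```

Adding terms (and optimizing the α_n, β_n, as GALAXY does for gray absorbers) improves the fit between the white-boundary and black-body limits while keeping the equivalence-theory machinery intact.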
Rotor position sensing in brushless ac motors with self-shielding magnets using linear Hall sensors
Zhu, Z. Q.; Shi, Y. F.; Howe, D.
2006-04-01
This paper investigates the use of low-cost linear Hall sensors for rotor position sensing in brushless ac motors equipped with self-shielding magnets, addresses practical issues such as the influence of magnetic and mechanical tolerances, temperature variations, and the armature reaction field, and describes the performance achieved.
Energy Technology Data Exchange (ETDEWEB)
T. Downar
2009-03-31
The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
Characterization and dosimetry of a practical X-ray alternative to self-shielded gamma irradiators
Mehta, Kishor; Parker, Andrew
2011-01-01
The Insect Pest Control Laboratory of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture recently purchased an X-ray irradiator as part of its programme to develop the sterile insect technique (SIT). It is a self-contained unit with a maximum X-ray beam energy of 150 keV, using a newly developed 4π X-ray tube to provide a very uniform dose to the product. This paper describes the results of our characterization study, which includes determination of the dose rate in the centre of a canister as well as establishing the absorbed dose distribution in the canister. The irradiation geometry consists of five canisters rotating around an X-ray tube, the volume of each canister being 3.5 l. The dose rate at the maximum allowed power of the tube (about 6.75 kW) in the centre of a canister filled with insects (or a simulated product) is about 14 Gy min⁻¹. The dose uniformity ratio is about 1.3. The dose rate was measured using a Farmer type 0.18 cm³ ionization chamber calibrated at the relevant low photon energies. Routine absorbed dose measurement and absorbed dose mapping can be performed using a Gafchromic® film dosimetry system. The radiation response of Gafchromic film is almost independent of X-ray energy in the range 100-150 keV, but is very sensitive to the surrounding material with which it is in immediate contact. It is important, therefore, to ensure that all absorbed dose measurements are performed under identical conditions to those used for the calibration of the dosimetry system. Our study indicates that this X-ray irradiator provides a practical alternative to self-shielded gamma irradiators for SIT programmes. Food and Agriculture Organization/International Atomic Energy Agency.
The up-scattering treatment in the fine-structure self-shielding method in APOLLO3®
International Nuclear Information System (INIS)
The use of exact elastic scattering in the resonance domain introduces neutron up-scattering, which must be taken into account in deterministic transport codes. We present the newly implemented up-scattering treatment in the fine-structure self-shielding method of APOLLO3®. Two pin cell calculations have been carried out in order to evaluate the impact of the up-scattering treatment. The results are compared to those obtained by the Monte Carlo code TRIPOLI-4® with its newly implemented DBRC model. The comparison of k-eff values for the single-cell calculations shows very good agreement between the APOLLO3® up-scattering treatment and the TRIPOLI-4® DBRC model: less than 30 pcm for UOX fuel and less than 110 pcm for MOX. Also, the differential effects of the asymptotic versus the exact kernel produced by APOLLO3® compared to TRIPOLI-4® do not exceed 20 pcm for the UOX cell and 40 pcm for the MOX cell. A detailed comparison of the U238 absorption rates clearly shows the influence of the first four large resonances of U238 on the calculation results. (author)
A Monte Carlo simulation technique to determine the optimal portfolio
Directory of Open Access Journals (Sweden)
Hassan Ghodrati
2014-03-01
During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods to measure, forecast, and manage the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and the method has increasingly been used in other fields. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation at the 95% confidence level. The variable used in the present study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight for each of the selected stocks was determined using a hybrid Markowitz and Winker model. The results showed that, at the 95% confidence level, the maximum loss on the following day would not exceed 1,259,432 Rials.
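The simulation procedure the abstract describes (drawing many hypothetical daily returns and reading the 95% VaR off the loss tail) can be sketched as follows. All numbers here (mean, volatility, position size) are illustrative assumptions, not values from the study, and a normal return model is assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed (illustrative) daily-return model: mean and standard deviation
# of the one-day portfolio return, and the position size in Rials.
mu, sigma = 0.0004, 0.012
portfolio_value = 1_000_000

# Simulate many one-day outcomes and convert returns to losses.
n_sims = 100_000
simulated_returns = rng.normal(mu, sigma, n_sims)
losses = -portfolio_value * simulated_returns

# 95% VaR: the loss that is exceeded on only 5% of simulated days.
var_95 = np.quantile(losses, 0.95)
print(f"1-day 95% VaR: {var_95:,.0f} Rials")
```

In the study itself the return distribution would be built from the 2009-2011 daily price history of the 26 firms, and the Markowitz/Winker step would then choose the portfolio weights.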
Combination of self-shielded and gas-shielded flux-cored arc welding
Lian, Atle Korsnes
2011-01-01
This master's thesis consists of experimental and theoretical studies of the change in microstructure and mechanical properties in intermixed weld metal from self-shielded and gas-shielded flux-cored welding wires. The main objective of the present thesis has been to carry out detailed metallographic analysis of different weld metal combinations, and to explain why satisfactory values were or were not achieved. The report is divided into four parts. Part one consists of re...
STUDY ON METAL TRANSFER MODES OF SELF-SHIELDED FLUX-CORED WELDING WIRE
Institute of Scientific and Technical Information of China (English)
Anonymous
1999-01-01
The metal transfer behavior of six kinds of self-shielded flux-cored wire (SSFCW) is studied using a self-made high-speed photography apparatus. Six metal transfer modes of SSFCW were identified through observation and analysis of the high-speed photographic films. It is believed that this research is of significance for improving the operative performance and mechanical properties of SSFCW and the dynamic characteristics of welding power sources.
Effects of CeF3 on properties of self-shielded flux cored wire
Institute of Scientific and Technical Information of China (English)
Yu Ping; Tian Zhiling; Pan Chuan; Xue Jin
2006-01-01
The effects of CeF3 on the properties of self-shielded flux-cored wire, including the welding process, inclusions in the weld metal, and mechanical properties, are systematically studied. Welding smoke and spatter are reduced with the addition of CeF3. The main non-metallic inclusions in the weld metal are AlN and Al2O3. CeF3 can refine non-metallic inclusions and reduce the amount of large-size inclusions, which is attributed to the inclusion floating behavior during the solidification of the weld metal. The low-temperature impact toughness is improved by adding a suitable amount of CeF3 to the flux.
ZZ ABBN, 26 Group Cross-Sections and Self Shielding Factors for Fast Reactors
International Nuclear Information System (INIS)
1 - Description of program or function: Format: special format; Number of groups: 26-group cross-section and resonance self-shielding factor library. Nuclides: H, D, Li-6, Li-7, Be, B-10, B-11, C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Fe, Ni, Cu, Zr, Nb, Mo, Ta, W, Re, Pb, Bi, Th-232, U-233, U-234, U-235, U-236, U-238, Pu-239, Pu-240, Pu-241, Pu-242, FP-U-233, FP-U-235, FP-Pu-239. Origin: Multiple experimental sources; Weighting spectrum: yes. 2 - Restrictions on the complexity of the problem: This group cross-section library has been developed for fast and intermediate reactors.
Resolution and intensity in neutron spectrometry determined by Monte Carlo simulation
DEFF Research Database (Denmark)
Dietrich, O.W.
1968-01-01
The Monte Carlo simulation technique was applied to the propagation of Bragg-reflected neutrons in mosaic single crystals. The method proved to be very useful for the determination of resolution and intensity in neutron spectrometers.
Pytel, Krzysztof; Józefowicz, Krystyna; Pytel, Beatrycze; Koziel, Alina
2004-01-01
The design and optimisation of a neutron beam for neutron capture therapy (NCT) is accompanied by neutron spectrum measurements at the target position. The method of activation detectors was applied for the neutron spectrum measurements. In the epithermal energy region, the resonance structure of the activation cross sections results in strong self-shielding effects. The neutron self-shielding correction factor was calculated using a simple analytical model of a single absorption event. This procedure was applied to individual cross sections from the pointwise ENDF/B-VI library, and the corrected activation cross sections were introduced into a spectrum unfolding algorithm. The method has been verified experimentally both for isotropic and for parallel neutron beams. Two sets of diluted and non-diluted activation foils covered with cadmium were irradiated in the neutron field. The comparison of the activation rates of the diluted and non-diluted foils demonstrated the correctness of the applied self-shielding model.
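A minimal version of the kind of correction the abstract mentions can be illustrated with the standard slab-foil self-shielding factor for a parallel beam, f = (1 - e^(-x))/x with x the optical thickness. The foil numbers below are purely hypothetical; a real correction would be applied pointwise across the resonance structure of the ENDF/B cross sections:

```python
import numpy as np

def self_shielding_slab(sigma_macro, thickness):
    """Flux self-shielding factor for a parallel beam normally incident
    on a slab foil: the average flux inside the foil relative to the
    incident flux, f = (1 - exp(-x)) / x with x = Sigma * t."""
    x = sigma_macro * thickness
    if x < 1e-8:                 # thin-foil limit: no flux depression
        return 1.0
    return (1.0 - np.exp(-x)) / x

# Hypothetical foil near a resonance peak (illustrative numbers only):
sigma = 1.5e3                    # macroscopic cross section, cm^-1
for t in (1e-6, 1e-3, 1e-2):     # foil thicknesses, cm
    print(t, self_shielding_slab(sigma, t))
```

As the foil is diluted (x → 0) the factor tends to 1, which is why comparing diluted and non-diluted foils, as in the experiment above, exposes the size of the self-shielding effect.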
Self-shielding and burn-out effects in the irradiation of strongly-neutron-absorbing material
International Nuclear Information System (INIS)
Self-shielding and burn-out effects are discussed in the evaluation of radioisotopes formed by neutron irradiation of a strongly neutron-absorbing material. A method for evaluating such effects is developed for both thermal and epithermal neutrons. Gadolinium oxide uniformly mixed with graphite powder was irradiated by reactor neutrons together with pieces of a Co-Al alloy wire (Co content 0.475%) as the neutron flux monitor. The configuration of the samples and flux monitors in each of the two irradiations is illustrated. The yields of activities produced in the irradiated samples were determined by γ-spectrometry with a Ge(Li) detector of 8% relative detection efficiency. Activities at the end of irradiation were estimated with corrections for pile-up, self-absorption, detection efficiency, branching ratio, and decay of the activity. Results of the calculation are discussed in comparison with the observed yields of 153Gd, 160Tb, and 161Tb for the case of neutron irradiation of disc-shaped targets of gadolinium oxide. (T.G.)
Gamage, K A A; Joyce, M J
2011-10-01
A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete were chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple linear correction factor relates the analytical results to the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. PMID:21723136
Self-shielding flex-circuit drift tube, drift tube assembly and method of making
Energy Technology Data Exchange (ETDEWEB)
Jones, David Alexander
2016-04-26
The present disclosure is directed to an ion mobility drift tube fabricated using flex-circuit technology in which every other drift electrode is on a different layer of the flex-circuit and each drift electrode partially overlaps the adjacent electrodes on the other layer. This results in a self-shielding effect where the drift electrodes themselves shield the interior of the drift tube from unwanted electro-magnetic noise. In addition, this drift tube can be manufactured with an integral flex-heater for temperature control. This design will significantly improve the noise immunity, size, weight, and power requirements of hand-held ion mobility systems such as those used for explosive detection.
Line Overlap and Self-Shielding of Molecular Hydrogen in Galaxies
Gnedin, Nickolay Y.; Draine, Bruce T.
2014-11-01
The effect of line overlap in the Lyman and Werner bands, often ignored in galactic studies of the atomic-to-molecular transition, greatly enhances molecular hydrogen self-shielding in low metallicity environments and dominates over dust shielding for metallicities below about 10% solar. We implement that effect in cosmological hydrodynamics simulations with an empirical model, calibrated against the observational data, and provide fitting formulae for the molecular hydrogen fraction as a function of gas density on various spatial scales and in environments with varied dust abundance and interstellar radiation field. We find that line overlap, while important for detailed radiative transfer in the Lyman and Werner bands, has only a minor effect on star formation on galactic scales, which, to a much larger degree, is regulated by stellar feedback.
Photodissociation of H2 in Protogalaxies: Modeling Self-Shielding in 3D Simulations
Wolcott-Green, Jemma; Bryan, Greg L
2011-01-01
The ability of primordial gas to cool in proto-galactic haloes exposed to Lyman-Werner (LW) radiation is critically dependent on the self-shielding of H_2. We perform radiative transfer calculations of LW line photons, post-processing outputs from three-dimensional adaptive mesh refinement (AMR) simulations of haloes with T_vir > 10^4 K at redshifts around z=10. We calculate the optically thick photodissociation rate numerically, including the effects of density, temperature, and velocity gradients in the gas, as well as line overlap and shielding of H_2 by HI, over a large number of sight-lines. In low-density regions (n10^4 K haloes by an order of magnitude; this increases the number of such haloes in which supermassive (approx. M=10^5 M_sun) black holes may have formed.
Design of a control system for self-shielded irradiators with remote access capability
International Nuclear Information System (INIS)
With self-shielded irradiators such as gamma chambers and blood irradiators being sold by BRIT to customers both within and outside the country, it has become necessary to improve the quality of service without increasing overheads. Recent advances in the field of communications and information technology can be exploited to improve the quality of service to customers. A state-of-the-art control system with remote accessibility has been designed for these irradiators, enhancing their performance. It will provide easy access to these units wherever they might be located, through the Internet. With this technology it will now be possible to attend to the needs of the customers as regards fault rectification, error debugging, system software updates, performance testing, data acquisition, etc. This will not only reduce the downtime of these irradiators but also reduce overheads. (author)
Nuclear reactions and self-shielding effects of gamma-ray database for nuclear materials
International Nuclear Information System (INIS)
A database for the transmutation and radioactivity of nuclear materials is required for the selection and design of materials used in various nuclear reactors. The database, based on FENDL/A-2.0 and additional data collected from several references, has been developed at the NRIM site of 'Data-Free-Way' on the Internet. Recently, a function predicting the self-shielding effect of materials for γ-rays was added to this database. The user interface has been constructed for retrieval of the necessary data and for graphical presentation of the relation between the neutron energy spectrum and the neutron capture cross section. It is demonstrated that the possible change in chemical composition and the radioactivity of a material caused by nuclear reactions can be easily retrieved using a browser such as Netscape or Explorer. (author)
Lautenschlager, Gary J.
The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the required Monte Carlo analyses needed to develop the criteria. Two recent…
Meric, N; Bor, D
1999-01-01
Scatter fractions have been determined experimentally for lucite, polyethylene, polypropylene, aluminium and copper of varying thicknesses using a polyenergetic broad X-ray beam at 67 kVp. A simulation of the experiment was carried out by the Monte Carlo technique under the same input conditions. A comparison of the measured and predicted data with each other and with previously reported values is presented. The Monte Carlo calculations were also carried out for water, bakelite and bone to examine the dependence of the scatter fraction on the density of the scatterer.
Weld metal microstructures of hardfacing deposits produced by self-shielded flux-cored arc welding
International Nuclear Information System (INIS)
The weld pool produced during self-shielded flux-cored arc welding (SSFCAW) is protected from gas porosity arising from oxygen and nitrogen by reaction ('killing') of these gases with aluminium. However, residual Al can result in mixed microstructures of δ-ferrite, martensite and bainite in hardfacing weld metals produced by SSFCAW, and therefore microstructural control can be an issue for hardfacing weld repair. The effect of the residual Al content on the weld metal microstructure has been examined using thermodynamic modeling and dilatometric analysis. It is concluded that the typical Al content of about 1 wt% promotes δ-ferrite formation at the expense of austenite and its martensitic/bainitic product phase(s), thereby compromising the wear resistance of the hardfacing deposit. This paper also demonstrates how the development of a Schaeffler-type diagram for predicting the weld metal microstructure can provide guidance on weld filler metal design to produce the optimum microstructure for industrial hardfacing applications.
Resonance self-shielding methodology of new neutron transport code STREAM
International Nuclear Information System (INIS)
This paper reports on the development and verification of three new resonance self-shielding methods. The verifications were performed using the new neutron transport code, STREAM. The new methodologies encompass the extension of energy range for resonance treatment, the development of optimum rational approximation, and the application of resonance treatment to isotopes in the cladding region. (1) The extended resonance energy range treatment has been developed to treat the resonances below 4 eV of three resonance isotopes and shows significant improvements in the accuracy of effective cross sections (XSs) in that energy range. (2) The optimum rational approximation can eliminate the geometric limitations of the conventional approach of equivalence theory and can also improve the accuracy of fuel escape probability. (3) The cladding resonance treatment method makes it possible to treat resonances in cladding material which have not been treated explicitly in the conventional methods. These three new methods have been implemented in the new lattice physics code STREAM and the improvement in the accuracy of effective XSs is demonstrated through detailed verification calculations. (author)
CO Self-Shielding as a Mechanism to Make 16O-Enriched Solids in the Solar Nebula
Directory of Open Access Journals (Sweden)
Joseph A. Nuth, III
2014-05-01
Photochemical self-shielding of CO has been proposed as a mechanism to produce the solids observed in the modern, 16O-depleted solar system. This is distinct from the relatively 16O-enriched composition of the solar nebula, as demonstrated by the oxygen isotopic composition of the contemporary Sun. While supporting the idea that self-shielding can produce local enhancements in 16O-depleted solids, we argue that complementary enhancements of 16O-enriched solids can also be produced via C16O-based, Fischer-Tropsch-type (FTT) catalytic processes that could produce much of the carbonaceous feedstock incorporated into accreting planetesimals. Local enhancements could explain the observed 16O enrichment in calcium-aluminum-rich inclusions (CAIs), such as those from the meteorite Isheyevo (CH/CHb), as well as in chondrules from the meteorite Acfer 214 (CH3). CO self-shielding results in an overall increase in the 17O and 18O content of nebular solids only to the extent that there is a net loss of C16O from the solar nebula. In contrast, if C16O reacts in the nebula to produce organics and water, then the net effect of the self-shielding process will be negligible for the average oxygen isotopic content of nebular solids, and other mechanisms must be sought to produce the observed dichotomy between oxygen in the Sun and that in meteorites and the terrestrial planets. This illustrates that the formation and metamorphism of rocks and organics need to be considered in tandem rather than as isolated reaction networks.
Zeng Huilin; Wang Changjiang; Yang Xuemei; Wang Xinsheng; Liu Ran
2014-01-01
In order to realize the automatic welding of pipes in a complex operation environment, an automatic welding system has been developed using all-position self-shielded flux-cored wires, chosen for their advantages such as all-position weldability, good detachability, arc stability, low incomplete fusion, and no need for shielding gas or protection against wind when the wind speed is
International Nuclear Information System (INIS)
Continuing with the domestic 'Burnable Absorbers Research Plan', studies were carried out to estimate self-shielding effects during the burnup of Gd2O3 included as a burnable absorber in fuel pins of a CAREM geometry. Its burnup was calculated both without and with self-shielding. For the second case, values were obtained as a function of internal pin radius, together with the effective value for the homogenized pin. For Gd-157, the burnup for the first case was 52.6%, versus 1.23% for the effective one, which shows the magnitude of the effects under study. Considering that an experimental verification is necessary, calculational results are also presented for the irradiation of a pellet containing natural UO2 and 8 wt% Gd2O3, as a function of cooling time, including measurable isotope concentrations, expected activities, and photon spectra under conditions that can be compared with two-dimensional calculations with self-shielding. The irradiation time was assumed to be 30 full-power days in the RA-3 reactor at 10 MW. (author)
Aguirre, Eder; David, Mariano; deAlmeida, Carlos E
2016-01-01
This work studies the impact of systematic uncertainties associated with interaction cross sections on depth dose curves determined by Monte Carlo simulations. The corresponding sensitivity factors are quantified by changing the cross sections by a given amount and determining the resulting variation in dose. The influence of the total cross sections for all particles, for photons only, and for Compton scattering only is addressed. The PENELOPE code was used in all simulations. It was found that photon cross-section sensitivity factors depend on depth: they are positive for depths below, and negative for depths above, an equilibrium depth at which the sensitivity factors are null. The equilibrium depths found in this work agree very well with the mean free path of the corresponding incident photon energy. Using the sensitivity factors reported here, it is possible to estimate the impact of photon cross-section uncertainties on the uncertainty of Monte Carlo-determined depth dose curves.
International Nuclear Information System (INIS)
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables. 6 refs
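The final step, computing a self-shielding factor as a discrete (Lebesgue-type) integral over a cross-section probability table, can be sketched as below. The table values are invented for illustration; URR's actual tables come from sampling Doppler-broadened single-level Breit-Wigner cross sections, and the narrow-resonance flux weight 1/(σ + σ0) is assumed here:

```python
import numpy as np

# Hypothetical cross-section probability table for one energy group:
# band cross sections (barns) and band probabilities (illustrative only).
sigma_bands = np.array([5.0, 12.0, 40.0, 150.0, 600.0])
probs       = np.array([0.35, 0.30, 0.20, 0.10, 0.05])

def bondarenko_f(sigma0):
    """Bondarenko self-shielding factor at background cross section
    sigma0: the flux-weighted effective cross section, with weight
    1/(sigma + sigma0) summed over probability-table bands, divided by
    the infinite-dilution average."""
    w = probs / (sigma_bands + sigma0)           # flux weight per band
    sigma_eff = np.sum(w * sigma_bands) / np.sum(w)
    sigma_inf = np.sum(probs * sigma_bands)      # infinite-dilution value
    return sigma_eff / sigma_inf

for s0 in (1.0, 100.0, 1e5):
    print(s0, bondarenko_f(s0))
```

As σ0 grows (high dilution) the weight becomes flat across bands and f tends to 1; at low σ0 the high-cross-section bands are suppressed and f falls below 1, which is the self-shielding effect being tabulated.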
Patient-specific CT dose determination from CT images using Monte Carlo simulations
Liang, Qing
Radiation dose from computed tomography (CT) has become a public concern with the increasing application of CT as a diagnostic modality, which has generated a demand for patient-specific CT dose determinations. This thesis work aims to provide a clinically applicable Monte-Carlo-based CT dose calculation tool based on patient CT images. The source spectrum was simulated based on half-value layer measurements. Analytical calculations along with the measured flux distribution were used to estimate the bowtie-filter geometry. The relative source output at different points in a cylindrical phantom was measured and compared with Monte Carlo simulations to verify the determined spectrum and bowtie-filter geometry. Sensitivity tests were designed with four spectra with the same kVp and different half-value layers, and showed that the relative output at different locations in a phantom is sensitive to different beam qualities. An mAs-to-dose conversion factor was determined with in-air measurements using an Exradin A1SL ionization chamber. Longitudinal dose profiles were measured with thermoluminescent dosimeters (TLDs) and compared with the Monte-Carlo-simulated dose profiles to verify the mAs-to-dose conversion factor. Using only the CT images to perform Monte Carlo simulations would cause dose underestimation due to the lack of a scatter region. This scenario was demonstrated with a cylindrical phantom study. Four different methods of image extrapolation from the existing CT images and the Scout images were proposed. The results show that performing image extrapolation beyond the scan region improves the dose calculation accuracy under both step-and-shoot and helical scan modes. Two clinical studies were designed, and comparisons were performed between the current CT dose metrics and the Monte-Carlo-based organ dose determination techniques proposed in this work. The results showed that the current CT dosimetry failed to show dose differences between patients with the same
Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA
International Nuclear Information System (INIS)
Secondary characteristic radiation is excited by the primary radiation from the X-ray tube and by the secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-07-01
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
Self-Shielding of Thermal Radiation by Chicxulub Ejecta: Firestorm or Fizzle?
Goldin, T. J.; Melosh, H. J.
2008-12-01
The discovery of soot within the Chicxulub ejecta sequence and the observed survival patterns of terrestrial organisms across the K/Pg boundary led to the hypothesis that thermal radiation from the atmospheric reentry of hypervelocity impact ejecta was sufficient to ignite global wildfires and cause biological catastrophe. Using a two-dimensional, two-phase fluid flow code, KFIX-LPL, we model the atmospheric reentry of distal Chicxulub ejecta and calculate the fluxes of thermal radiation throughout the atmosphere. The model treatment includes optical opacity, allowing us to examine the effects that greenhouse gases and the spherules themselves have on the transfer of thermal radiation to the ground. We model a simple Chicxulub scenario where 250-µm spherules reenter the atmosphere for an hour with maximum inflow after 10 minutes. Our models predict a pulse of thermal radiation at the ground peaking at ~6 kW/m2, analogous to an oven set on 'broil'. Previous calculations, which did not consider spherule opacity, yielded >10 kW/m2 sustained over an hour or more, and such an extended pulse of high fluxes is thought to be required for wildfire ignition. However, our model suggests a half-hour in which fluxes exceed the solar norm and only a few minutes >5 kW/m2. Large fluxes are not sustained in our models because the increasingly opaque cloud of settling spherules blocks the transmission of thermal radiation from the decelerating spherules above. Hence, the spherules themselves limit the magnitude and duration of thermal radiation at the ground. Such self-shielding may have prevented the ignition of global wildfires following Chicxulub and limited other environmental effects. Retaining the impact wildfire hypothesis will require a mechanism that overrides this effect. A nonuniform distribution of spherule reentry may produce gaps in the opaque spherule layer through which the downward thermal radiation may be concentrated. Additionally, an opaque cloud
Choi, Chang Heon; Jung, Seongmoon; Choi, Kanghyuk; Son, Kwang-Jae; Lee, Jun Sig; Ye, Sung-Joon
2016-04-01
This study aims to determine the activity of a sealed pure beta source by measuring the surface dose rate using an extrapolation chamber. A conversion factor (cGy s⁻¹ Bq⁻¹), defined as the ratio of surface dose rate to activity, can be calculated by Monte Carlo simulations of the extrapolation chamber measurement. To validate this hypothesis, the certified activities of two standard pure beta sources of Sr/Y-90 and Si/P-32 were compared with those determined by this method. In addition, a sealed test source of Sr/Y-90 was manufactured by the HANARO reactor group of KAERI (Korea Atomic Energy Research Institute) and used to further validate the method. The measured surface dose rates of the Sr/Y-90 and Si/P-32 standard sources were 4.615×10⁻⁵ cGy s⁻¹ and 2.259×10⁻⁵ cGy s⁻¹, respectively. The calculated conversion factors of the two sources were 1.213×10⁻⁸ cGy s⁻¹ Bq⁻¹ and 1.071×10⁻⁸ cGy s⁻¹ Bq⁻¹, respectively. Therefore, the activity of the standard Sr/Y-90 source was determined to be 3.995 kBq, which was 2.0% less than the certified value (4.077 kBq). For Si/P-32 the determined activity was 2.102 kBq, which was 6.6% larger than the certified activity (1.971 kBq). The activity of the Sr/Y-90 test source was determined to be 4.166 kBq, while the apparent activity reported by KAERI was 5.803 kBq. This large difference might be due to evaporation and diffusion of the source liquid during preparation and uncertainty in the amount of the weighed aliquot of source liquid. The overall uncertainty involved in this method was determined to be 7.3%. We demonstrated that the activity of a sealed pure beta source can be conveniently determined by the complementary combination of surface dose rate measurements and Monte Carlo simulations.
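The core arithmetic of the method is a single division, A = D_surface / k, with the conversion factor k supplied by the Monte Carlo model of the extrapolation chamber. Dividing the rounded Sr/Y-90 figures quoted above gives roughly 3.8 kBq, of the same order as the reported 3.995 kBq determination; the residual difference presumably reflects rounding of the published numbers:

```python
# Activity from a measured surface dose rate and a Monte-Carlo-derived
# conversion factor, using the rounded Sr/Y-90 values from the abstract.
dose_rate = 4.615e-5      # measured surface dose rate, cGy/s
conv_factor = 1.213e-8    # simulated conversion factor, cGy/s per Bq

activity_bq = dose_rate / conv_factor
print(f"Determined activity: {activity_bq / 1e3:.2f} kBq")
```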
International Nuclear Information System (INIS)
A new improved method has been developed for calculating sensitivity coefficients of neutronics parameters in pressurized water reactor cells relative to infinite-dilution cross-sections, taking into account the resonance self-shielding effect. The IR (intermediate resonance) approximation is used in order to obtain accurate results in both high and low energy groups. This method is applied to UO2- and MOX-fueled PWR cells to calculate sensitivity coefficients and uncertainties of eigenvalue responses. We verified the improved method by comparing the sensitivities with the MCNP code, and good agreement was found. For the uncertainty, the improved results are compared with TSUNAMI-1D, and the comparison demonstrates that the differences are caused by the use of different covariance matrices. (author)
International Nuclear Information System (INIS)
A 35-group cross-section set with P3-anisotropic scattering matrices and resonance self-shielding factors has been generated from the basic ENDF/B-IV cross-section library for 57 reactor elements. This library, called BARC35, is considered well suited for the neutronics and safety analysis of fission, fusion and hybrid systems. (author)
Energy Technology Data Exchange (ETDEWEB)
Vieira, Jose Wilson
2001-08-01
Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with high dose rate applications use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is to develop, using Monte Carlo techniques, a computer program - ISODOSE - which allows the determination of isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
International Nuclear Information System (INIS)
Research on ADS, related new fuels and their ability to incinerate nuclear waste has led to a revival of interest in the nuclear cross-sections of many nuclides over a large energy range. Discrepancies observed between nuclear databases require new measurements in several cases. A complete measurement of such cross-sections, including resonance resolution, consists of an extensive beam-time experiment followed by a long analysis. With a slowing-down-time lead spectrometer associated with a pulsed neutron source, it is possible to determine a good cross-section profile over an energy range from 0.1 eV to 40 keV. These measurements, performed at ISN (Grenoble) with the neutron source GENEPI, require only small quantities of matter (as little as 0.1 g) and about one day of beam per target. We present cross-section profile measurements and an experimental study of the self-shielding effect. A CeF3 scintillator coupled with a photomultiplier detects gamma rays from neutron capture in the studied target. The neutron flux is also measured with a 233U fission detector and a 3He detector placed symmetrically to the PM with respect to the neutron source. Absolute flux values are given by activation of Au and W foils. The cross-section profiles can then be deduced from the target capture rate and are compared with very detailed MCNP simulations, which reproduce the experimental set-up and also provide capture rates and fluxes. The method is then applied to 232Th, of main interest for new fuel cycle studies, and is complementary to higher energy measurements made by D. Karamanis et al. (CENBG). Results obtained for three target thicknesses will be compared with simulations based on different databases. Special attention will be paid to the region of unresolved resonances (>100 eV). (author)
Energy Technology Data Exchange (ETDEWEB)
Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5σ and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
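The analytical tail corrections at issue here are the standard Lennard-Jones long-range corrections, which assume a uniform fluid (g(r) = 1) beyond the cutoff. A sketch in reduced units (ε = σ = 1), evaluated at the near-critical density and the three cutoffs compared in the abstract:

```python
import math

def lj_energy_tail(rho, rcut):
    """Standard LJ energy tail correction per particle (reduced units),
    assuming g(r) = 1 beyond the cutoff."""
    sr3 = (1.0 / rcut) ** 3
    return (8.0 / 3.0) * math.pi * rho * ((1.0 / 3.0) * sr3 ** 3 - sr3)

def lj_pressure_tail(rho, rcut):
    """Standard LJ pressure tail correction (reduced units)."""
    sr3 = (1.0 / rcut) ** 3
    return (16.0 / 3.0) * math.pi * rho ** 2 * ((2.0 / 3.0) * sr3 ** 3 - sr3)

rho_c = 0.316  # near-critical density from the abstract
for rc in (3.5, 5.0, 8.0):
    print(f"rcut = {rc}: u_tail = {lj_energy_tail(rho_c, rc):+.5f}, "
          f"p_tail = {lj_pressure_tail(rho_c, rc):+.5f}")
```

The corrections shrink roughly as rcut⁻³, consistent with the small residual shifts (0.2% in Tc, 1.4% in pc) reported between rcut = 3.5σ and the longer cutoffs.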
Monte Carlo simulation methods of determining red bone marrow dose from external radiation
International Nuclear Information System (INIS)
Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed doses to red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated through 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method, and mass-energy absorption coefficient (MEAC) method. The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different methods for the different photon energies were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)
Chang, Kwo-Ping; Wang, Zhi-Wei; Shiau, An-Cheng
2014-02-01
The Monte Carlo (MC) method is a well-known calculation algorithm which can accurately assess the dose distribution for radiotherapy. The present study investigated all the possible regions of the depth-dose or lateral profiles which may affect the fitting of the initial parameters (mean energy and radial intensity (full width at half maximum, FWHM) of the incident electron beam). The EGSnrc-based BEAMnrc code was used to generate the phase space files (SSD=100 cm, FS=40×40 cm²) for the linac (linear accelerator, Varian 21EX, 6 MV photon mode), and the EGSnrc-based DOSXYZnrc code was used to calculate the dose in the region of interest. Interpolation of depth-dose curves of pre-set energies was proposed as a preliminary step for the optimal energy fit. A good approach for determining the optimal mean energy is the difference comparison of the PDD curves excluding the buildup region, using D(10) as the normalization. For FWHM fitting, due to electron disequilibrium and the larger statistical uncertainty, using the horn and/or penumbra regions will give inconsistent outcomes at various depths. Difference comparisons should instead be performed in the flat regions of the off-axis dose profiles at various depths to optimize the FWHM parameter.
Omar, Artur; Benmakhlouf, Hamza; Marteinsdottir, Maria; Bujila, Robert; Nowik, Patrik; Andreo, Pedro
2014-03-01
Complex interventional and diagnostic x-ray angiographic (XA) procedures may yield patient skin doses exceeding the threshold for radiation-induced skin injuries. Skin dose is conventionally determined by converting the incident air kerma free-in-air into entrance surface air kerma, a process that requires the use of backscatter factors. Subsequently, the entrance surface air kerma is converted into skin kerma using mass energy-absorption coefficient ratios tissue-to-air, which for the photon energies used in XA is identical to the skin dose. The purpose of this work was to investigate how cranial bone affects backscatter factors for the dosimetry of interventional neuroradiology procedures. The PENELOPE Monte Carlo system was used to calculate backscatter factors at the entrance surface of a spherical and a cubic water phantom that include a cranial bone layer. The simulations were performed for different clinical x-ray spectra, field sizes, and thicknesses of the bone layer. The results show a reduction of up to 15% when a cranial bone layer is included in the simulations, compared with conventional backscatter factors calculated for a homogeneous water phantom. The reduction increases for thicker bone layers, softer incident beam qualities, and larger field sizes, indicating that, due to the increased photoelectric cross section of cranial bone compared to water, the bone layer acts primarily as an absorber of low-energy photons. For neurointerventional radiology procedures, backscatter factors calculated at the entrance surface of a water phantom containing a cranial bone layer increase the accuracy of skin dose determination.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Tianli [School of Materials Science and Engineering, Tianjin University, Tianjin 300072 (China); College of Materials Science and Engineering, Beijing University of Technology, Beijing 100124 (China); Department of Materials Science and Engineering, University of Wisconsin, Madison, WI 53706 (United States); Li, Zhuoxin [College of Materials Science and Engineering, Beijing University of Technology, Beijing 100124 (China); Kou, Sindo, E-mail: kou@engr.wisc.edu [Department of Materials Science and Engineering, University of Wisconsin, Madison, WI 53706 (United States); Jing, Hongyang [School of Materials Science and Engineering, Tianjin University, Tianjin 300072 (China); Li, Guodong; Li, Hong [College of Materials Science and Engineering, Beijing University of Technology, Beijing 100124 (China); Jin Kim, Hee [Advanced Joining Research Team, Korea Institute of Industrial Technology, Chanan-si 330-825 (Korea, Republic of)
2015-03-25
The effect of inclusions on the microstructure and toughness of the deposited metals of self-shielded flux cored wires was investigated by optical microscopy, electron microscopy and mechanical testing. The deposited metals of three different wires showed different levels of low temperature impact toughness at −40 °C mainly because of differences in the properties of inclusions. The inclusions formed in the deposited metals as a result of deoxidation caused by the addition of extra Al–Mg alloy and ferromanganese to the flux. The inclusions, spherical in shape, were mixtures of Al2O3 and MgO. Inclusions predominantly Al2O3 and 0.3–0.8 μm in diameter were effective for nucleation of acicular ferrite. However, inclusions predominantly MgO were promoted by increasing Mg in the flux and were more effective than Al2O3 inclusions of the same size. These findings suggest that the control of inclusions can be an effective way to improve the impact toughness of the deposited metal.
Coefficients of an analytical aerosol forcing equation determined with a Monte-Carlo radiation model
Hassan, Taufiq; Moosmüller, H.; Chung, Chul E.
2015-10-01
Simple analytical equations for global-average direct aerosol radiative forcing are useful to quickly estimate aerosol forcing changes as a function of key atmosphere, surface and aerosol parameters. The surface and atmosphere parameters in these analytical equations are the globally uniform atmospheric transmittance and surface albedo, and have so far been estimated from simplified observations under untested assumptions. In the present study, we take the state-of-the-art analytical equation, write the aerosol forcing as a linear function of the single scattering albedo (SSA), and replace the average upscatter fraction with the asymmetry parameter (ASY). We then determine the surface and atmosphere parameter values of this equation using the output from the global MACR (Monte-Carlo Aerosol Cloud Radiation) model, and also test the validity of the equation. The MACR model incorporates spatio-temporally varying observations for surface albedo, cloud optical depth, water vapor, stratospheric column ozone, etc., instead of assuming, as the analytical equation does, that the atmosphere and surface parameters are globally uniform; it should thus be viewed as providing realistic radiation simulations. The modified analytical equation needs globally uniform aerosol parameters consisting of AOD (Aerosol Optical Depth), SSA, and ASY. The MACR model is run here with the same globally uniform aerosol parameters. The MACR model is also run without cloud to test the cloud effect. In both cloudy and cloud-free runs, the equation fits the model output well whether SSA or ASY varies, which means the equation is an excellent approximation for the atmospheric radiation. On the other hand, the determined parameter values are somewhat realistic for the cloud-free runs but unrealistic for the cloudy runs. The global atmospheric transmittance, one of the determined parameters, is found to be around 0.74 for cloud-free conditions and around 1.03 with cloud. The surface
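Analytical forcing equations of this type are commonly written in the Chylek-Wong form. A sketch is given below, with the average upscatter fraction β replaced by a simple function of the asymmetry parameter, β ≈ (1 − g)/2 (one common approximation; the paper's exact substitution may differ). Apart from the cloud-free transmittance T ≈ 0.74 quoted in the abstract, all parameter values are illustrative assumptions:

```python
def aerosol_forcing(tau, ssa, g, T=0.74, A_c=0.0, a_s=0.15, S0=1361.0):
    """Global-average direct aerosol forcing (W/m^2), Chylek-Wong-type
    analytical equation. tau: AOD, ssa: single scattering albedo,
    g: asymmetry parameter, T: atmospheric transmittance, A_c: cloud
    fraction, a_s: surface albedo, S0: solar constant. beta ~ (1 - g)/2
    is an assumed upscatter approximation; values other than T are
    illustrative."""
    beta = 0.5 * (1.0 - g)
    return -(S0 / 4.0) * T ** 2 * (1.0 - A_c) * (
        (1.0 - a_s) ** 2 * 2.0 * beta * ssa * tau   # scattering term (cooling)
        - 4.0 * a_s * (1.0 - ssa) * tau             # absorption term (warming)
    )

print(aerosol_forcing(0.1, 0.9, 0.7))   # mostly scattering aerosol: negative
print(aerosol_forcing(0.1, 0.7, 0.7))   # strongly absorbing aerosol: positive
```

Note that the forcing is linear in both τ and SSA at fixed geometry, which is what makes fitting the uniform surface and atmosphere parameters to the MACR output a well-posed linear problem.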
Energy Technology Data Exchange (ETDEWEB)
Coste-Delclaux, M
2006-03-15
This document describes the improvements carried out for modelling the self-shielding phenomenon in the multigroup transport code APOLLO2. They concern the space and energy treatment of the slowing-down equation, the setting up of quadrature formulas to calculate reaction rates, the setting up of a method that treats a resonant mixture directly, and the development of a subgroup method. We validate these improvements either in an elementary or in a global way. We now obtain more accurate multigroup reaction rates and are able to carry out a reference self-shielding calculation on a very fine multigroup mesh. Finally, we draw conclusions and give some prospects on the remaining work. (author)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as large as 18%. The catalytic wall model can contribute a more than 3× change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver was dependent on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While
International Nuclear Information System (INIS)
The Isotope Production and Application Division of Bhabha Atomic Research Centre developed 32P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed 32P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods are in good agreement with each other within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the 32P patch source obtained by three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed 32P patch sources for contact brachytherapy applications. - Highlights: • Surface dose rates of newly developed 32P patch sources of 25 mm nominal diameter were measured experimentally using an extrapolation chamber and Gafchromic EBT2 film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed. • The surface dose rates to tissue (cGy/min) measured using extrapolation chamber and
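An extrapolation chamber determines the surface dose rate from the slope of ionization current versus air-gap thickness, extrapolated to zero gap: D = (W/e) · s(tissue,air) · (dI/dd) / (ρ(air) · A). The sketch below fits that slope by least squares on synthetic readings; the current values, stopping-power ratio, and electrode area are all illustrative assumptions, not values from the paper.

```python
# Surface dose rate from extrapolation-chamber readings:
# D = (W/e) * s_tissue,air * (dI/dd) / (rho_air * A), in the limit d -> 0.
# Current-vs-gap data and chamber parameters below are illustrative.
W_over_e = 33.97          # J/C, mean energy per ion pair in air
s_tissue_air = 1.12       # tissue-to-air stopping power ratio (assumed)
rho_air = 1.205e-3        # g/cm^3 at 20 C, 101.3 kPa
area = 3.14               # cm^2, collecting electrode area (assumed)

gaps = [0.05, 0.10, 0.15, 0.20]                      # cm
currents = [1.02e-12, 2.01e-12, 2.97e-12, 3.90e-12]  # A, synthetic readings

# Least-squares slope dI/dd (A/cm)
n = len(gaps)
mx = sum(gaps) / n
my = sum(currents) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(gaps, currents))
         / sum((x - mx) ** 2 for x in gaps))

# A/(g) * J/C = J/(s*g) = 1000 Gy/s; then Gy/s -> cGy/min is *100*60
dose_rate = (W_over_e * s_tissue_air * slope / (rho_air * area)
             * 1000.0 * 100.0 * 60.0)
print(f"slope = {slope:.3e} A/cm, dose rate = {dose_rate:.2f} cGy/min")
```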
TRIPOLI-3: a neutron/photon Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees
2001-07-01
The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma-ray problems in three-dimensional geometries by using the Monte Carlo method. This code is devoted both to shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)
Determination of cascade summing correction for HPGe spectrometers by the Monte Carlo method
Takeda, M N
2001-01-01
The present work describes the methodology developed for calculating the cascade summing correction to be applied to experimental efficiencies obtained by means of HPGe spectrometers. The detection efficiencies have been numerically calculated by the Monte Carlo method for point sources. Another Monte Carlo algorithm has been developed to follow the path in the decay scheme from the beginning state at the precursor radionuclide decay level down to the ground state of the daughter radionuclide. Each step in the decay scheme is selected by random numbers, taking into account the transition probabilities and internal conversion coefficients. The selected transitions are properly tagged according to the type of interaction that occurred, giving rise to total or partial energy absorption events inside the detector crystal. Once the final state has been reached, the selected transitions are examined to identify each pair of transitions which occurred simultaneously. With this procedure it was possible to calculate...
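The cascade-walk idea can be sketched for a two-step cascade (a hypothetical Co-60-like scheme): each decay is followed down the level scheme, each emitted photon is tested against the detector's full-energy and total efficiencies, and summing-in (both photons fully absorbed) and summing-out (the companion photon deposits any energy) are tallied. The efficiencies here are assumed values, not computed from a detector model as in the paper.

```python
import random

random.seed(1)

# Hypothetical two-gamma cascade: gamma 1 then gamma 2 in prompt coincidence.
# Assumed full-energy and total efficiencies (would come from a separate
# Monte Carlo efficiency calculation in the paper's methodology):
eps_full_1, eps_tot_1 = 0.05, 0.15
eps_full_2, eps_tot_2 = 0.04, 0.13

N = 200_000
peak1 = peak2 = sum_peak = 0
for _ in range(N):
    r1, r2 = random.random(), random.random()
    # Using one random number per photon makes full-energy events a
    # subset of detected events, as physically required.
    full1, any1 = r1 < eps_full_1, r1 < eps_tot_1
    full2, any2 = r2 < eps_full_2, r2 < eps_tot_2
    if full1 and full2:
        sum_peak += 1            # summing-in: count lands in the sum peak
    elif full1 and not any2:
        peak1 += 1               # gamma-1 peak survives (no summing-out)
    elif full2 and not any1:
        peak2 += 1               # gamma-2 peak survives

# Apparent peak-1 efficiency is reduced from eps_full_1 by summing-out:
print(f"apparent eps1 = {peak1 / N:.4f} "
      f"(ideal {eps_full_1 * (1 - eps_tot_2):.4f})")
print(f"sum-peak prob = {sum_peak / N:.4f} "
      f"(ideal {eps_full_1 * eps_full_2:.4f})")
```

The correction factor applied to the experimental efficiency is the ratio of the ideal full-energy efficiency to the apparent (summing-distorted) one.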
International Nuclear Information System (INIS)
Different codes have been used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written considering the interactions of photons with matter, taking into account mainly two interactions: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface, emitted from a point source, were bremsstrahlung photons, whose energy distribution must be known in order to follow them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were taken into account. In the 6 MV photon mode, the photon energies were sampled from Mohan's experimental energy spectrum (Mohan et al 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth-dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with the experimental measurements.
Directory of Open Access Journals (Sweden)
Rehman Shakeel U.
2009-01-01
A primary-interaction based Monte Carlo algorithm has been developed for determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a Matlab-based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3″×3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code: about two orders of magnitude change in detector efficiency has been observed on increasing the axial distance from zero to 50 cm, for 137Cs as well as for 60Co sources.
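The primary-interaction approach can be sketched for an on-axis point source: sample isotropic emission directions, compute the chord length of each ray through the detector cylinder, and score the primary-interaction probability 1 − exp(−μL). The geometry and attenuation coefficient below are assumed illustrative values, not the BPIMC benchmark cases.

```python
import math
import random

random.seed(7)

def total_efficiency(d, R, L, mu, n=200_000):
    """Primary-interaction MC estimate of total efficiency for an
    isotropic point source on the axis of a cylindrical detector.
    d: source-to-front-face distance (cm), R: radius (cm),
    L: length (cm), mu: linear attenuation coefficient (1/cm)."""
    hits = 0.0
    for _ in range(n):
        c = random.uniform(-1.0, 1.0)   # cos(theta), isotropic emission
        if c <= 0.0:
            continue                    # emitted away from the detector
        t = math.sqrt(1.0 - c * c) / c  # tan(theta)
        if t * d > R:
            continue                    # ray misses the front face
        if t * (d + L) <= R:
            path = L / c                # ray exits through the back face
        else:
            path = (R / t - d) / c      # ray exits through the side surface
        hits += 1.0 - math.exp(-mu * path)  # primary interaction probability
    return hits / n

# 3" x 3" NaI(Tl)-like geometry (R = 3.81 cm, L = 7.62 cm), source 10 cm away;
# mu = 0.3 /cm is an assumed mid-energy attenuation coefficient.
eff = total_efficiency(d=10.0, R=3.81, L=7.62, mu=0.3)
print(f"total efficiency ~ {eff:.4f}")
```

In the limit μ → ∞ the estimate reduces to the fractional solid angle subtended by the detector, a useful sanity check on the geometry sampling.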
Monte-Carlo simulation for determining SNR and DQE of linear array plastic scintillating fiber
Institute of Scientific and Technical Information of China (English)
Mohammad Mehdi NASSERI; MA Qing-Li; YIN Ze-Jie; WU Xiao-Yi
2004-01-01
Fundamental characteristics of the plastic scintillating fiber (PSF) over a wide energy range of electromagnetic radiation (X & γ) have been studied to evaluate the possibility of using the PSF as an imaging detector for industrial purposes. A Monte-Carlo simulation program (GEANT4.5.1, 2003) was used to generate the data. In order to evaluate the image quality of the detector, the fiber array was irradiated at various energies and fluxes. The signal-to-noise ratio (SNR) as well as the detective quantum efficiency (DQE) were obtained.
International Nuclear Information System (INIS)
This paper concludes our efforts in describing SU(3)-Yang-Mills theories at different couplings/temperatures in terms of effective Polyakov-loop models. The associated effective couplings are determined through an inverse Monte Carlo procedure based on novel Schwinger-Dyson equations that employ the symmetries of the Haar measure. Because of the first-order nature of the phase transition we encounter a fine-tuning problem in reproducing the correct behavior of the Polyakov-loop from the effective models. The problem remains under control as long as the number of effective couplings is sufficiently small
Nuth, Joseph A., III; Johnson, Natasha M.
2012-01-01
There are at least three separate photochemical self-shielding models with different degrees of commonality. All of these models rely on the selective absorption of (12)C(16)O dissociative photons as the radiation penetrates through the gas, allowing the production of reactive O-17 and O-18 atoms within a specific volume. Each model also assumes that the undissociated C(16)O is stable and does not participate in the chemistry of nebular dust grains. In what follows we will argue that this last, very important assumption is simply not true, despite the very high energy of the CO molecular bond.
International Nuclear Information System (INIS)
The author gives a scheme for the calculation of the self-shielding factors in the unresolved resonance region using the GRUCON applied program package. This package is specifically designed for converting evaluated neutron cross-section data, as available in existing data libraries, into multigroup microscopic constants. A detailed description of the formulae and algorithms used in the programs is given. Some typical examples of calculation are considered and the results are compared with those of other authors. The calculation accuracy is better than 2%.
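The Bondarenko-type self-shielding factor underlying such multigroup processing can be sketched for a single resonance: the group cross section is flux-weighted with a narrow-resonance flux φ(E) ∝ 1/(σ(E) + σ0), and the factor f = σ(shielded)/σ(infinite-dilution) approaches 1 as the dilution cross section σ0 grows. The resonance parameters below are illustrative, not taken from any evaluation.

```python
# Bondarenko self-shielding factor for a single Breit-Wigner-shaped
# resonance with narrow-resonance flux weighting phi(E) ~ 1/(sigma(E)+sigma0).
# Resonance parameters are illustrative assumptions.
E0, gam, sig_peak, sig_pot = 6.67, 0.1, 2.0e4, 10.0  # eV, eV, barn, barn

def sigma(E):
    """Single-level Breit-Wigner shape plus a constant potential term."""
    x = (E - E0) / (gam / 2.0)
    return sig_peak / (1.0 + x * x) + sig_pot

def shielding_factor(sigma0, n=20_000):
    """f(sigma0) = <sigma*phi> / (sigma_inf * <phi>), trapezoid rule
    over a window of +/- 50 widths around the resonance."""
    lo, hi = E0 - 50.0 * gam, E0 + 50.0 * gam
    h = (hi - lo) / n
    num = den = inf = 0.0
    for i in range(n + 1):
        E = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid weights
        phi = 1.0 / (sigma(E) + sigma0)   # narrow-resonance flux
        num += w * sigma(E) * phi
        den += w * phi
        inf += w * sigma(E)
    sigma_g = num / den          # self-shielded group cross section
    sigma_inf = inf / n          # infinite-dilution group cross section
    return sigma_g / sigma_inf

for s0 in (10.0, 1e2, 1e4, 1e8):
    print(f"sigma0 = {s0:>12.0f} b: f = {shielding_factor(s0):.4f}")
```

The factor rises monotonically with σ0, recovering the infinite-dilution limit f = 1, which is the tabulated behavior a multigroup processing code interpolates over σ0 and temperature.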
International Nuclear Information System (INIS)
The neutron multidetector consists of 81 detectors, made of 4×4×12 cm³ BC-400 crystals mounted on XP2972 phototubes. This detector, placed in the forward direction at 138 cm from the target, was used to detect the correlated neutrons emitted in the fusion of ¹¹Li halo nuclei with Si targets. To verify the criterion for selecting true coincidences against cross-talk (a spurious effect in which the same neutron is registered by two or more detectors) and to establish the optimal distance between adjacent detectors, the program MENATE (written by P. Desesquelles, IPN Orsay) was used to generate Monte Carlo neutrons and their interactions in the multidetector. The results were analysed with PAW (from the CERN Library). (authors)
Energy Technology Data Exchange (ETDEWEB)
Videira, Heber S.; Burkhardt, Guilherme M.; Santos, Ronielly S., E-mail: heber@cyclopet.com.br [Cyclopet Radiofarmacos Ltda., Curitiba, PR (Brazil); Passaro, Bruno M.; Gonzalez, Julia A.; Santos, Josefina; Guimaraes, Maria I.C.C. [Universidade de Sao Paulo (HCFMRP/USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Hospital das Clinicas; Lenzi, Marcelo K. [Universidade Federal do Parana (UFPR), Curitina (Brazil). Programa de Pos-Graduacao em Engenharia Quimica
2013-04-15
The technological advances in positron emission tomography (PET) in conventional clinical imaging have led to a steady increase in the number of cyclotrons worldwide. Most of these cyclotrons are used to produce {sup 18}F-FDG, either for in-house use or for distribution to other centers that have PET scanners. To ensure radiological safety, cyclotrons intended for medical purposes are classified as category I or category II, i.e., self-shielded or non-shielded (bunker). The aim of this work is therefore to verify the effectiveness of the borated-water shield built for a self-shielded PETtrace 860 cyclotron. The borated-water mixtures were prepared in accordance with the manufacturer's specifications, and the radiometric survey in the vicinity of the cyclotron self-shielding, under the conditions established by the manufacturer, showed that radiation levels were below the limits. (author)
Energy Technology Data Exchange (ETDEWEB)
Zubal, I.G.; Harrell, C.R. (Yale Univ., New Haven, CT (USA). Dept. of Diagnostic Radiology); Esser, P.D. (Columbia Univ., New York (USA). Coll. of Physicians and Surgeons)
1990-12-20
In order to realistically define the internal organs of a representative human, 150 transverse CT scans of an (average) male patient were acquired from head to mid-thigh on the GE 9800 Quick scanner. The reconstructed transverse slices were read into a microVAX 3500 and members of the medical staff outlined 42 separate internal organs contained in the transverse slice. This digitized human phantom serves as an input to a Monte Carlo program which models photoelectric absorption and scatter processes of gamma-rays in matter. The organs can be 'filled' with variable amounts of radiopharmaceuticals and the simulation computes the emerging energy spectra for a given source distribution and detector position. The simulation follows gamma-ray histories out to a maximum of 32 scatter events. Scatter spectra are histogrammed into energy distributions of gamma-rays which have undergone a specific number of scatter events before emerging from the phantom. A sum of all these scatter spectra yields the simulated total spectra. Simulated total spectra of diagnostically relevant human distributions are compared to spectra acquired from nuclear medicine clinical patients. (orig.).
McNamara, A L; Heijnis, H; Fierro, D; Reinhard, M I
2012-04-01
A Compton-suppressed high-purity germanium (HPGe) detector is well suited to the analysis of low levels of radioactivity in environmental samples. Differences in the geometry, density and composition of environmental calibration standards (e.g. soil) can contribute excessive experimental uncertainty to the measured efficiency curve. Furthermore, multiple detectors, like those used in a Compton-suppressed system, can add complexity to the calibration process. Monte Carlo simulations can be a powerful complement in calibrating these types of detector systems, provided enough physical information on the system is known. A full detector model using the Geant4 simulation toolkit is presented, and the system is modelled in both the suppressed and unsuppressed modes of operation. The full-energy-peak efficiencies of radionuclides from a standard source sample are calculated and compared to experimental measurements. The experimental results agree relatively well with the simulated values (within ∼5-20%). The simulations show that coincidence losses in the Compton suppression system can cause radionuclide-specific effects on the detector efficiency, especially in the Compton-suppressed mode. Additionally, since low-energy photons are more sensitive than high-energy photons to small inaccuracies in the computational detector model, large discrepancies may occur at energies below ∼100 keV.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
The Canadian SCWR has the potential to achieve the goals that Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Several options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Energy Technology Data Exchange (ETDEWEB)
David, Mariano Gazineu; Salata, Camila; Almeida, Carlos Eduardo, E-mail: marianogd08@gmail.com [Universidade do Estado do Rio de Janeiro (UERJ/LCR), Rio de Janeiro (Brazil). Lab. de Ciencias Radiologicas
2014-07-01
The Laboratorio de Ciencias Radiologicas has developed a methodology for the determination of the absorbed dose to water by the Fricke chemical dosimetry method for {sup 192}Ir high-dose-rate brachytherapy sources, and has compared its results with those of the National Research Council Canada laboratory. This paper describes the determination of the correction factors by the Monte Carlo method with the PENELOPE code. Values for all factors are presented, with a maximum difference of 0.22% relative to their determination by an alternative method. (author)
Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.
2011-08-01
The design of multicomponent alloys used in different applications based on specific thermo-physical properties determined experimentally or predicted from theoretical calculations is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures.
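The thermodynamic integration (TI) step described above can be illustrated on a toy system. The sketch below is my own construction, not the authors' code: it computes the free-energy difference between two harmonic potentials, ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ, by Metropolis sampling of ⟨x²⟩ at a few coupling values and trapezoidal integration. For this system the exact answer, ½kT ln(k₁/k₀), is known, which makes it a convenient check of the procedure.

```python
import math
import random

def metropolis_x2(k, kT=1.0, n_steps=100_000, step=1.0, seed=0):
    """Metropolis estimate of <x^2> for U(x) = 0.5*k*x^2 at temperature kT."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        xn = x + rng.uniform(-step, step)
        dU = 0.5 * k * (xn * xn - x * x)
        if dU <= 0 or rng.random() < math.exp(-dU / kT):
            x = xn
        total += x * x
    return total / n_steps

def delta_F_TI(k0, k1, kT=1.0, n_lambda=11):
    """Thermodynamic integration along k(lam) = k0 + lam*(k1 - k0):
    dU/dlam = 0.5*(k1-k0)*x^2, integrated with the trapezoidal rule."""
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lams:
        k = k0 + lam * (k1 - k0)
        means.append(0.5 * (k1 - k0) * metropolis_x2(k, kT, seed=int(lam * 100)))
    dF = 0.0
    for i in range(n_lambda - 1):
        dF += 0.5 * (means[i] + means[i + 1]) * (lams[i + 1] - lams[i])
    return dF

# Exact result for this toy system: 0.5 * kT * ln(k1/k0)
```

In a real MCS/TI study the Metropolis sampler is replaced by the molecular simulation (here, with a MEAM potential), but the integration structure is the same.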
Monte Carlo Model of TRIGA Reactor to Support Neutron Activation Analysis
Energy Technology Data Exchange (ETDEWEB)
Zerovnik, G.; Snoj, L.; Trkov, A. [Reactor Physics Department, Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)
2011-07-01
The TRIGA reactor at Jozef Stefan Institute is used as a neutron source for neutron activation analysis. The accuracy of the method depends on the accuracy of the neutron spectrum characterization. Therefore, computational models on different scales have been developed: Monte Carlo full reactor model, model of an irradiation channel and deterministic code for self-shielding factor calculations. The models have been validated by comparing against experiment and thus provide a very strong support for neutron activation analysis of samples irradiated at the TRIGA reactor. (author)
Institute of Scientific and Technical Information of China (English)
WANG AiMin; PANG Hua
2009-01-01
The magnetic anisotropy field in thin films with in-plane uniaxial anisotropy can be deduced from VSM magnetization curves measured in magnetic fields of constant magnitude. This offers a new possibility of applying rotational magnetization curves to determine the first- and second-order anisotropy constants in these films. In this paper we report a theoretical derivation of the rotational magnetization curve in the hexagonal crystal system with easy-plane anisotropy, based on the principle of minimum total energy. This model is applied to calculate and analyze the rotational magnetization process for magnetic spherical particles with hexagonal easy-plane anisotropy when the external magnetic field is rotated in the basal plane. The theoretical calculations are consistent with Monte Carlo simulation results. It is found that, to reproduce experimental curves well, the effect of coercive force on the magnetization reversal process must be fully considered when the intensity of the external field is much weaker than that of the anisotropy field. Our research proves that the rotational magnetization curve from VSM measurement provides effective access to the in-plane anisotropy constant K3 in hexagonal compounds, and the suitable experimental condition to measure K3 is met when the ratio of the magnitude of the external field to that of the anisotropy field is around 0.2.
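The minimum-total-energy construction of a rotational magnetization curve can be sketched numerically. The code below is an illustration only: it assumes a six-fold basal-plane anisotropy energy of the form sin²(3φ), in units of the anisotropy constant, and it finds the global energy minimum for each field angle, so it deliberately ignores the coercivity effects that the paper shows are important at weak fields.

```python
import math

def rotational_curve(h_over_k, n_angles=72, n_phi=3600):
    """For each in-plane field angle theta_H, minimise the total energy
        E(phi) = sin(3*phi)**2 - (h/K) * cos(phi - theta_H)
    (assumed six-fold in-plane anisotropy + Zeeman term, in units of the
    anisotropy constant K) over a phi grid, and return M_H = cos(phi* - theta_H),
    the reduced magnetization component along the field."""
    phis = [2 * math.pi * i / n_phi for i in range(n_phi)]
    curve = []
    for j in range(n_angles):
        theta = 2 * math.pi * j / n_angles
        best = min(phis, key=lambda p: math.sin(3 * p) ** 2
                                       - h_over_k * math.cos(p - theta))
        curve.append(math.cos(best - theta))
    return curve

# Weak field (h/K = 0.2): M_H oscillates with six-fold symmetry as the field
# rotates, which is what makes this regime sensitive to the anisotropy constant.
weak = rotational_curve(0.2)
# Strong field: the Zeeman term dominates and M stays locked along H.
strong = rotational_curve(50.0)
```

A global-minimum search like this reproduces the anhysteretic curve; modeling the experimentally observed weak-field behavior would require tracking metastable minima (coercivity), as the abstract notes.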
Lunt, Mark F.; Rigby, Matt; Ganesan, Anita L.; Manning, Alistair J.
2016-09-01
Atmospheric trace gas inversions often attempt to attribute fluxes to a high-dimensional grid using observations. To make this problem computationally feasible, and to reduce the degree of under-determination, some form of dimension reduction is usually performed. Here, we present an objective method for reducing the spatial dimension of the parameter space in atmospheric trace gas inversions. In addition to solving for a set of unknowns that govern emissions of a trace gas, we set out a framework that considers the number of unknowns to itself be an unknown. We rely on the well-established reversible-jump Markov chain Monte Carlo algorithm to use the data to determine the dimension of the parameter space. This framework provides a single-step process that solves for both the resolution of the inversion grid, as well as the magnitude of fluxes from this grid. Therefore, the uncertainty that surrounds the choice of aggregation is accounted for in the posterior parameter distribution. The posterior distribution of this transdimensional Markov chain provides a naturally smoothed solution, formed from an ensemble of coarser partitions of the spatial domain. We describe the form of the reversible-jump algorithm and how it may be applied to trace gas inversions. We build the system into a hierarchical Bayesian framework in which other unknown factors, such as the magnitude of the model uncertainty, can also be explored. A pseudo-data example is used to show the usefulness of this approach when compared to a subjectively chosen partitioning of a spatial domain. An inversion using real data is also shown to illustrate the scales at which the data allow for methane emissions over north-west Europe to be resolved.
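The idea of letting the data choose the dimension of the parameter space can be shown in a deliberately reduced form. The sketch below is an illustration, not the authors' reversible-jump code: it uses synthetic one-dimensional "flux" data, restricts partitions to k equal-width regions, and integrates each region's value out analytically under a conjugate Gaussian prior, so a plain Metropolis walk over k alone (with an assumed uniform prior on 1..16) is enough to show the posterior concentrating on the resolution the data support.

```python
import math
import random

rng = random.Random(3)

# Synthetic "observations": a two-level step field sampled at 200 points
# with unit Gaussian noise.
xs = [i / 200 for i in range(200)]
ys = [(0.0 if x < 0.5 else 3.0) + rng.gauss(0.0, 1.0) for x in xs]

SIGMA2, TAU2 = 1.0, 100.0   # noise variance; prior variance of region values
_cache = {}

def log_marginal(k):
    """Log marginal likelihood of a partition into k equal-width regions,
    each region's value integrated out under a N(0, TAU2) prior."""
    if k in _cache:
        return _cache[k]
    total = 0.0
    for b in range(k):
        pts = [y for x, y in zip(xs, ys) if b / k <= x < (b + 1) / k]
        n, s, q = len(pts), sum(pts), sum(y * y for y in pts)
        total += (-0.5 * n * math.log(2 * math.pi * SIGMA2)
                  - 0.5 * math.log(1 + n * TAU2 / SIGMA2)
                  - q / (2 * SIGMA2)
                  + TAU2 * s * s / (2 * SIGMA2 * (SIGMA2 + n * TAU2)))
    _cache[k] = total
    return total

# Metropolis walk over the *dimension* k itself (uniform prior on 1..16).
k, visits = 1, [0] * 17
for _ in range(4000):
    kp = k + rng.choice([-1, 1])
    if 1 <= kp <= 16 and math.log(rng.random()) < log_marginal(kp) - log_marginal(k):
        k = kp
    visits[k] += 1

k_mode = max(range(1, 17), key=lambda i: visits[i])  # resolution chosen by data
```

The data here contain a single step, so the posterior settles on a coarse partition (k = 2): finer grids fit no better and pay an Occam penalty through the marginal likelihood. The full reversible-jump algorithm generalizes this by also proposing births and deaths of region boundaries at arbitrary locations, with the appropriate dimension-matching acceptance ratio; such moves also mix far better than this nearest-neighbour walk over k.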
Institute of Scientific and Technical Information of China (English)
王长江; 曾惠林; 杨雪梅; 蒋戎; 刘然
2014-01-01
Aiming at X80 high-strength pipeline steel and the related welding technical specification requirements, an independently developed eight-torch internal welding machine and a self-shielded flux-cored wire automatic welder were used to carry out all-position pipeline welding tests, and the mechanical properties of the welded joints were tested. A reasonable groove type, welding wire type and matching welding parameters were determined, and a welding procedure of internal-machine root welding followed by self-shielded flux-cored wire automatic welding for the fill and cover passes was formulated. The test results proved that the welding procedure is reasonable, simple to operate and highly efficient; all performance indicators of the welded joints meet the X80-grade pipeline engineering welding specification and the relevant national standards.
Benmakhlouf, H.; Johansson, J.; Paddick, I.; Andreo, P.
2015-05-01
The measurement of output factors (OFs) for the small photon beams generated by Leksell Gamma Knife® (LGK) radiotherapy units is a challenge for the physicist, because the vast majority of commercially available detectors under- or overestimate these factors. Output correction factors, introduced in the international formalism published by Alfonso et al (2008 Med. Phys. 35 5179-86), standardize the determination of OFs for small photon beams by correcting detector-reading ratios to yield OFs in terms of absorbed-dose ratios. In this work, output correction factors for a number of detectors have been determined for LGK Perfexion™ 60Co γ-ray beams by Monte Carlo (MC) calculations and measurements. The calculations were made with the MC system PENELOPE, scoring the energy deposited in the active volume of the detectors and in a small volume of water; the detectors simulated were two silicon diodes, one liquid ionization chamber (LIC), alanine and TLDs. The calculated LIC output correction factors were within ± 0.4%, and the LIC was therefore selected as the reference detector for the experimental determinations, in which output correction factors for twelve detectors were measured by normalizing their readings to those of the LIC. The MC-calculated and measured output correction factors for silicon diodes yielded corrections of up to 5% for the smallest LGK collimator size of 4 mm diameter. The air ionization chamber measurements led to extremely large output correction factors, caused by the well-known effect of partial volume averaging. The corrections were up to 7% for the natural diamond detector in the 4 mm collimator, also due to partial volume averaging, and decreased to within about ± 0.6% for the smaller synthetic diamond detector. The LIC, showing the smallest corrections, was used to investigate machine-to-machine output factor differences by performing measurements in four LGK units with different dose rates. These resulted in OFs within ± 0.6% and ± 0
1 1/2 years of experience with a 10 MeV self-shielded on-line e-beam sterilization system
International Nuclear Information System (INIS)
The Vascular Intervention Group of the Guidant Corporation (Guidant VI) has been operating a self-shielded, 10 MeV, 4 kW electron beam sterilization system since July of 1998. The system was designed, built and installed in a 70 square meter area of an existing Guidant manufacturing facility by Titan Scan Corporation, and performance of the system was validated in conformance with ISO 11137 standards. The goal of this on-site e-beam system was 'just in time' (JIT) sterilization, i.e. the ability to manufacture, sterilize and ship high-intrinsic-value medical devices in less than 24 hours. The benefits of moving from a long gas sterilization cycle of greater than one week to a JIT process were envisioned to be a) speed to market with innovative new products, b) rapid response to customer requirements, c) reduced inventory carrying costs and, finally, d) manufacturing and quality system efficiency. The ability of Guidant to realize these benefits depended on the ability of the Guidant VI business units to adapt to the new sterilization modality and functionality, and on the overall system reliability. This paper reviews the operating experience to date and the overall system reliability. (author)
International Nuclear Information System (INIS)
This paper describes the process of installation of a category I self-shielded irradiator, model ISOGAMMA LL.Co, with 60Co sources of 25 kCi nominal activity, an absorbed dose rate of 8 kGy/h and a 5 L working volume. The stages are described step by step: import; the customs procedure, which included the interview with the master of the transport vessel; the monitoring of the entire process by the head of radiological protection of the importing Center; control of the levels of surface contamination of the shipping container of the sources before removal from the ship; the supervision of the national regulatory authority; and the transportation to the final destination. Details of the assembly of the installation and the opening of the transport container are outlined. The action plan previously developed for the case of occurrence of radiological events is presented, detailing the phase of loading of the radioactive sources by the specialists of the company selling the facility (IZOTOP). Finally, the commissioning of the installation and the procedure of licensing for exploitation are described.
International Nuclear Information System (INIS)
A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
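The tuning strategy described above (run the generator at a few parameter settings only, approximate the parameter dependence of each observable by an analytic form, then fit that approximation to the data) can be sketched as follows. This is a toy stand-in for the generator, not Cascade or the H1 data; the quadratic form and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(p, n_events=20_000):
    """Toy stand-in for an MC event generator: returns the mean of an
    observable whose true dependence on the tuning parameter p is quadratic."""
    true_mean = 1.0 + 0.5 * (p - 2.0) ** 2
    return rng.normal(true_mean, 1.0, size=n_events).mean()

# Step 1: run the (expensive) generator at a few parameter settings only.
p_grid = np.linspace(0.0, 4.0, 9)
obs = np.array([generator(p) for p in p_grid])

# Step 2: approximate the parameter dependence by an analytic (quadratic) form.
surrogate = np.poly1d(np.polyfit(p_grid, obs, deg=2))

# Step 3: fit to "data" by minimizing chi^2 of the surrogate, without
# calling the generator again inside the minimization loop.
data_value, data_err = 1.0, 0.05   # pretend measurement
p_scan = np.linspace(0.0, 4.0, 4001)
chi2 = ((surrogate(p_scan) - data_value) / data_err) ** 2
p_best = p_scan[np.argmin(chi2)]
```

Because the generator is called only on the coarse grid, the minimization is essentially free, which is exactly the speed-up over iterating the generator inside the fit that the abstract describes.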
International Nuclear Information System (INIS)
The Monte Carlo method can be used to compute the gamma-ray backscattering albedo. This method was used by Raso to compute the angular differential albedo, and Raso's results were used by Chilton and Huddleston to adjust their well-known albedo formula. Here, an efficient estimator is proposed to compute the double-differential (angular and energetic) albedo from gamma-ray histories simulated in matter by the three-dimensional Monte Carlo transport code TRIPOLI; a detailed physical albedo analysis can be carried out in this way. The double-differential angular and energetic gamma-ray albedo is calculated for iron for initial gamma-ray energies of 8, 3, 1, and 0.5 MeV.
Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Zheng, S.H.; Vergnaud, T.; Nimal, J.C. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Lab. d'Etudes de Protection et de Probabilite
1998-03-01
Neutron transport calculations need an accurate treatment of cross sections. Two methods (multigroup and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries well adapted to describing the neutron interaction in the unresolved resonance energy range. Its advantage is to represent properly the neutron cross-section fluctuation within a given energy group, allowing correct calculation of the self-shielding effect. This PT cross-section representation is also suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at the Commissariat a l'Energie Atomique, and several validation calculations are presented. The PT method is proved to be valid not only in the unresolved resonance range but also in all the other energy ranges.
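A probability table can be pictured as a small set of (probability, cross-section) bands per energy group. The sketch below uses illustrative band values, not numbers from any evaluated library, and shows the two uses mentioned above: sampling a cross section band by band as a Monte Carlo code would at each collision, and computing a self-shielded effective cross section, here with an assumed narrow-resonance flux weighting 1/(σ+σ₀).

```python
import random

# A toy probability table for one energy group: each "band" carries a
# probability and a representative total cross section (barns).
bands = [(0.30, 5.0), (0.40, 20.0), (0.20, 80.0), (0.10, 400.0)]

def sample_sigma(rng):
    """Sample a cross section from the table (inverse-CDF over the bands),
    preserving the within-group fluctuation a smooth average would erase."""
    u = rng.random()
    cum = 0.0
    for prob, sigma in bands:
        cum += prob
        if u < cum:
            return sigma
    return bands[-1][1]

def effective_sigma(sigma0):
    """Self-shielded cross section <sigma/(sigma+sigma0)> / <1/(sigma+sigma0)>
    under an assumed narrow-resonance flux 1/(sigma+sigma0); sigma0 is the
    background (dilution) cross section."""
    num = sum(p * s / (s + sigma0) for p, s in bands)
    den = sum(p / (s + sigma0) for p, s in bands)
    return num / den

sigma_inf = sum(p * s for p, s in bands)   # unshielded (infinite-dilution) average
sigma_dilute = effective_sigma(1e10)       # tends to sigma_inf at high dilution
sigma_shielded = effective_sigma(10.0)     # strong self-shielding at low dilution
```

The self-shielded value is much smaller than the unshielded average because the flux weighting suppresses the high-cross-section bands, which is exactly the group-wise effect the PT representation preserves.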
Yan, Yangqian; Blume, D.
2016-05-01
The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. We gratefully acknowledge support by the NSF.
MO-G-BRF-05: Determining Response to Anti-Angiogenic Therapies with Monte Carlo Tumor Modeling
Energy Technology Data Exchange (ETDEWEB)
Valentinuzzi, D [Jozef Stefan Institute, Ljubljana (Slovenia); Simoncic, U; Jeraj, R [Jozef Stefan Institute, Ljubljana (Slovenia); University of Wisconsin, Madison, WI (United States); Titz, B [University of Wisconsin, Madison, WI (United States)
2014-06-15
Purpose: Patient response to anti-angiogenic therapies with vascular endothelial growth factor receptor tyrosine kinase inhibitors (VEGFR TKIs) is heterogeneous. This study investigates key biological characteristics that drive differences in patient response via a Monte Carlo computational model capable of simulating tumor response to therapy with a VEGFR TKI. Methods: VEGFR TKIs potently block the receptors responsible for promoting angiogenesis in tumors. The model incorporates drug pharmacokinetic and pharmacodynamic properties, as well as patient-specific data of cellular proliferation derived from [18F]FLT-PET data. Sensitivity of tumor response was assessed for multiple parameters, including initial partial oxygen tension (pO{sub 2}), cell cycle time, daily vascular growth fraction, and daily vascular regression fraction. Results were benchmarked to clinical data (patients received 2 weeks of VEGFR TKI, followed by a 1-week drug holiday). The tumor pO{sub 2} was assumed to be uniform. Results: Among the investigated parameters, the simulated proliferation was most sensitive to the initial tumor pO{sub 2}. An initial change of 5 mmHg can already result in significantly different levels of proliferation. The model reveals that hypoxic tumors (pO{sub 2} < 20 mmHg) show the largest decrease of proliferation, experiencing a mean FLT standardized uptake value (SUVmean) decrease of at least 50% at the end of the clinical trial (day 21). Oxygenated tumors (pO{sub 2} ≥ 20 mmHg) show a transient SUV decrease (30-50%) at the end of the treatment with VEGFR TKI (day 14) but experience a rapid SUV rebound close to the pre-treatment SUV levels (70-110%) during the drug holiday (day 14-21) - the phenomenon known as a proliferative flare. Conclusion: The model's high sensitivity to initial pO{sub 2} clearly emphasizes the need for experimental assessment of the pretreatment tumor hypoxia status, as it might be predictive of response to anti-angiogenic therapies and the occurrence
Energy Technology Data Exchange (ETDEWEB)
Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.
2013-07-01
Within Working Group 7 (internal dosimetry) of EURADOS (European Radiation Dosimetry Group e.V.), which is coordinated by CIEMAT, an international action for the in vivo measurement of americium in three skull-type phantoms has been conducted, using germanium detectors for gamma spectrometry and simulation by Monte Carlo methods. This action was organized as two separate exercises, with the participation of institutions in Europe, America and Asia. Other similar actions precede this intercomparison of in vivo measurement and Monte Carlo modeling. The preliminary results and associated findings are presented in this work. The whole-body counter laboratory (CRC) of the internal personal dosimetry service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, and the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.
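Ulam's solitaire reasoning (estimate a probability by playing many random games and counting successes) can be reproduced for a question whose exact answer is known. The sketch below, an illustration rather than material from the lecture, estimates the chance that a shuffled 52-card deck leaves at least one card in its original position; the exact value tends to 1 - 1/e ≈ 0.632 for large decks.

```python
import random

def at_least_one_fixed_point(deck_size, trials, seed=0):
    """Estimate, by direct simulation, the probability that a random shuffle
    leaves at least one card in its original position."""
    rng = random.Random(seed)
    hits = 0
    deck = list(range(deck_size))
    for _ in range(trials):
        rng.shuffle(deck)
        # A "fixed point": the card whose value equals its position.
        if any(card == pos for pos, card in enumerate(deck)):
            hits += 1
    return hits / trials

estimate = at_least_one_fixed_point(52, 100_000)
# Converges to 1 - 1/e ≈ 0.6321 as the number of trials grows.
```

This is exactly the "lay it out one hundred times and count" idea, just with the counting done by the computer, and the statistical error shrinks as one over the square root of the number of trials.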
Directory of Open Access Journals (Sweden)
Mohammad W. Marashdeh
2015-01-01
Full Text Available Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff and effective electron density (Neff of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10–60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5 simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the MCNP5 process can be followed to determine the attenuation of gamma rays with several photon energies in other materials.
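The mixture rule used by XCOM combines elemental mass attenuation coefficients weighted by mass fraction. A minimal sketch (the μ/ρ values below are illustrative placeholders, not tabulated XCOM data; the water mass fractions are standard):

```python
def mass_attenuation_mixture(weight_fractions, mu_rho):
    """Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i,
    where w_i are elemental mass fractions summing to one."""
    total_w = sum(weight_fractions.values())
    assert abs(total_w - 1.0) < 1e-9, "mass fractions must sum to 1"
    return sum(w * mu_rho[element] for element, w in weight_fractions.items())

# Water by mass fraction; the mu/rho values (cm^2/g) are illustrative only.
water = {"H": 0.1119, "O": 0.8881}
mu_rho_example = {"H": 0.326, "O": 0.190}
```

The same weighted sum applies to any compound or mixture once the elemental coefficients at the energy of interest are known.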
Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh
Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the MCNP5 process can be followed to determine the attenuation of gamma rays with several photon energies in other materials.
Energy Technology Data Exchange (ETDEWEB)
Hyun, Hae Ri; Hong, Ser Gi [Kyung Hee University, Yongin (Korea, Republic of); Kim, Yong Nam; Kim, Soo Kon [Kangwon National University Hospital, Chuncheon (Korea, Republic of)
2015-05-15
In this paper, we performed a MOSFET dosimeter simulation using the latest version of the MCNP code (MCNP6). In order to determine the absorbed dose, we set four source positions, in the 0°, 90°, 180° and 270° directions, as in the previous study. The absorbed dose deposited by electrons in the extremely thin (1 μm) sensitive volume was determined with both the F4 tally (i.e., a track-length estimator) and the *F8 tally (i.e., an energy-deposition tally). However, accurate determination of the absorbed dose is quite difficult in such an extremely small sensitive volume, which results in a large variance in the tally for a typical number of source particles. To resolve this difficulty, we used the F4 track-length tally coupled with the ESTEP option on the MCNP material data card, which accommodates a sufficient number of electron sub-steps for an accurate simulation of the electron trajectories. The DE and DF cards were used with the track-length estimator to determine the absorbed dose over the sensitive volume. Two different response functions were considered in the F4 track-length tally: the first is calculated with the formulation suggested by Schaart et al., and the second is the mass electronic collision stopping power extracted from the MCNP output.
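The variance problem in a very small volume can be seen in a toy model: a track-length estimator scores on every history, whereas a collision-type estimator scores only on the rare interactions inside the thin region. This is a schematic illustration, not MCNP's actual tally implementation:

```python
import math
import random

def flux_estimators(sigma_t, thickness, n_histories, seed=2):
    """Toy comparison of two flux estimators in a thin slab that each
    particle crosses perpendicularly (unit area, so volume = thickness).
    Track-length estimator: scores the chord length on every history.
    Collision estimator: scores 1/sigma_t only when a collision occurs
    inside the slab, which is rare when sigma_t * thickness << 1."""
    rng = random.Random(seed)
    p_collide = 1.0 - math.exp(-sigma_t * thickness)
    track, coll = [], []
    for _ in range(n_histories):
        track.append(thickness)  # deterministic score: zero variance here
        coll.append(1.0 / sigma_t if rng.random() < p_collide else 0.0)
    def stats(scores):
        m = sum(scores) / len(scores)
        v = sum((s - m) ** 2 for s in scores) / (len(scores) - 1)
        return m, v
    return stats(track), stats(coll)
```

Both estimators converge to nearly the same mean, but the collision estimator's variance stays large in a thin region, which is why a track-length tally with fine electron sub-stepping is preferred there.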
Energy Technology Data Exchange (ETDEWEB)
Isambert, A.; Lefkopoulos, D. [Institut Gustave-Roussy, Medical Physics Dept., 94 - Villejuif (France); Brualla, L. [NCTeam, Strahlenklinik, Universitatsklinikum Essen (Germany); Benkebil, M. [DOSIsoft, 94 - Cachan (France)
2010-04-15
Purpose of study: Monte Carlo based treatment planning systems are known to be more accurate than analytical methods for estimating absorbed dose, particularly in and near heterogeneities. However, the required computation time can still be an issue. The present study focused on determining the optimum statistical uncertainty in order to minimise computation time while preserving the reliability of the absorbed dose estimation in treatments planned with electron beams. Materials and methods: Three radiotherapy plans (medulloblastoma, breast and gynaecological) were used to investigate the influence of the statistical uncertainty of the absorbed dose on the target volume dose-volume histograms (spinal cord, intra-mammary nodes and pelvic lymph nodes, respectively). Results: The study of the dose-volume histograms showed that for statistical uncertainty levels (1 S.D.) above 2 to 3%, the standard deviation of the mean dose in the target volume calculated from the dose-volume histograms increases by at least 6%, reflecting the gradual flattening of the dose-volume histograms. Conclusions: This work suggests that, in a clinical context, Monte Carlo based absorbed dose estimations should be performed with a maximum statistical uncertainty of 2 to 3%. (authors)
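The trade-off behind such a recommendation is the usual Monte Carlo scaling: the statistical uncertainty of a tallied mean falls as 1/sqrt(N), so pushing below 2 to 3% quickly becomes expensive. A toy tally (exponentially distributed scores are an assumption, chosen only for illustration) demonstrates the scaling:

```python
import math
import random

def mean_dose_uncertainty(n_histories, seed=3):
    """Toy dose tally: returns (mean score, relative 1-sigma standard
    error of the mean). The relative error scales as 1/sqrt(N)."""
    rng = random.Random(seed)
    scores = [rng.expovariate(1.0) for _ in range(n_histories)]
    mean = sum(scores) / n_histories
    var = sum((s - mean) ** 2 for s in scores) / (n_histories - 1)
    return mean, math.sqrt(var / n_histories) / mean
```

Halving the statistical uncertainty costs four times as many histories, which is why a 2 to 3% target is a sensible stopping point.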
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Dzuba, Sergei A.
2016-08-01
Pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allows for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit for which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has an obvious advantage of the fulfillment of the non-negativity constraint for P(r). The regularization by the increasing of distance discretization length for the case of overlapping broad and narrow distributions may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from literature for doubly spin-labeled DNA and peptide antibiotics.
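A schematic version of the Monte Carlo trial approach can be sketched with a generic Gaussian kernel standing in for the actual DEER kernel (which is not reproduced here); the non-negativity constraint on P(r) is enforced by construction, and only trials that reduce the residual are accepted:

```python
import math
import random

def mc_fit_distance_distribution(kernel, data, r_grid,
                                 n_trials=20000, seed=4):
    """Monte Carlo search for a non-negative, unit-area P(r) minimizing
    ||sum_j K[i][j] * P[j] - data[i]||^2 (toy first-kind Fredholm
    inversion sketch with a discretized kernel matrix)."""
    rng = random.Random(seed)
    nr = len(r_grid)
    P = [1.0 / nr] * nr  # uniform, non-negative starting distribution
    def residual(P):
        return sum((sum(kernel[i][j] * P[j] for j in range(nr)) - d) ** 2
                   for i, d in enumerate(data))
    best = residual(P)
    for _ in range(n_trials):
        j = rng.randrange(nr)
        trial = P[:]
        trial[j] = max(0.0, trial[j] + rng.gauss(0.0, 0.05))  # keep P >= 0
        s = sum(trial)
        trial = [p / s for p in trial]  # renormalize to unit area
        r = residual(trial)
        if r < best:  # accept only improving trials
            P, best = trial, r
    return P, best
```

The coarseness of `r_grid` plays the regularizing role described above: a longer discretization step suppresses the oscillatory solutions that the ill-posed inversion would otherwise admit.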
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
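The idea behind ABC can be shown with the simpler rejection variant (ABC-SMC refines it by propagating a particle population through a sequence of decreasing tolerances). The exponential model and all parameter values here are hypothetical, chosen only to illustrate the mechanism:

```python
import random

def abc_rejection(observed_mean, prior_low, prior_high,
                  n_draws=5000, n_data=50, eps=0.1, seed=5):
    """Rejection ABC sketch: draw a rate parameter from a uniform prior,
    simulate a data set, and keep the draw if the simulated summary
    statistic (sample mean) is within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(prior_low, prior_high)   # prior sample
        sim = [rng.expovariate(theta) for _ in range(n_data)]
        if abs(sum(sim) / n_data - observed_mean) < eps:
            accepted.append(theta)
    return accepted
```

The accepted draws approximate the posterior; credible intervals follow directly from their empirical quantiles, with no likelihood evaluation required.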
Neutron spectrum obtained with Monte Carlo and transport theory
International Nuclear Information System (INIS)
The development of the computer, with its increasing memory capacity and processing speed, has enabled the application of the Monte Carlo method to estimate fluxes in energy structures of thousands of fine bins. Usually the MC calculation is made using continuous-energy nuclear data and exact geometry. Self-shielding and the interference of nuclide resonances are properly considered. Therefore, the fluxes obtained by this method may be a good estimate of the neutron energy distribution (spectrum) for the problem. In an earlier work it was proposed to use these fluxes as a weighting spectrum to generate multigroup cross sections for fast reactor analysis using deterministic codes. This non-traditional use of MC calculation needs validation to gain confidence in the results. The work presented here is the first validation step of this scheme. The spectra of the JOYO first core fuel assembly MK-I and the benchmark Godiva were calculated using the tally flux estimator of the MCNP code and compared with the reference. Also, the two problems were solved with the multigroup transport theory code XSDRN of the AMPX system using the 171-energy-group VITAMIN-C library. The spectral differences arising from the use of these codes, the influence of the evaluated data file and the application to fast reactor calculations are discussed. (author)
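The flux-weighted collapse at the heart of this scheme preserves reaction rates within each broad group. A minimal sketch (the fine-bin values are illustrative):

```python
def collapse_cross_sections(sigma_fine, flux_fine, group_bounds):
    """Flux-weighted multigroup collapse:
    sigma_g = sum_{i in g} sigma_i * phi_i / sum_{i in g} phi_i,
    where group_bounds gives (start, stop) fine-bin index ranges."""
    collapsed = []
    for lo, hi in group_bounds:
        phi = sum(flux_fine[lo:hi])
        collapsed.append(
            sum(s * f for s, f in zip(sigma_fine[lo:hi], flux_fine[lo:hi]))
            / phi)
    return collapsed
```

Using an MC-computed fine-bin spectrum as `flux_fine` carries the self-shielded resonance structure into the broad-group constants used by the deterministic code.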
Institute of Scientific and Technical Information of China (English)
王志明; 刘海云; 王勇; 张英乔
2011-01-01
The droplet transfer characteristics and transfer mechanism of self-shielded flux-cored wire were studied using high-speed photography, a Hanover arc-welding quality analyser and a stereomicroscope. The results reveal that bridging transfer without arc interruption is a transfer mode in which a liquid bridge persists while the arc is not extinguished, and that it is one of the main droplet transfer modes of self-shielded flux-cored wire. The arc voltage and welding current waveforms show no short-circuit transfer features, only slight fluctuations within a certain range, corresponding to the characteristics of bridging transfer without arc interruption; neither the voltage nor the current probability density distribution curves show the characteristics of short-circuit transfer. The liquid bridge in this transfer mode is formed by molten slag enveloping liquid metal, and the transfer is accomplished mainly under the combined action of surface tension and the electromagnetic pinch force.
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculations with the MCNP-4C code of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies, mainly due to the lack of electronic equilibrium, especially for small incidence angles. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the MCNP5 code using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as is known, due to secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.
2016-03-01
In order to improve the quality of life of cancer patients, radiation techniques are constantly evolving. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beam sizes (beamlets) with various intensities to achieve the intended radiation dose to the tumor and a minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm2 field at 100 cm SSD. The values obtained from calculations and measurements are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector were compared with those of the SFD and the Monte Carlo simulation; the results demonstrate a percentage difference of less than 2%.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" is among the topics treated.
Calculation of Gamma-ray Responses for HPGe Detectors with TRIPOLI-4 Monte Carlo Code
Lee, Yi-Kang; Garg, Ruchi
2014-06-01
The gamma-ray response calculation of HPGe (High Purity Germanium) detectors is one of the most important topics for Monte Carlo transport codes in nuclear instrumentation applications. In this study the new options of the TRIPOLI-4 Monte Carlo transport code for gamma-ray spectrometry were investigated. Recent improvements include the modeling of gamma rays from electron-positron annihilation, low-energy electron transport modeling, and low-energy characteristic X-ray production. The impact of these improvements on the detector efficiency in gamma-ray spectrometry calculations was verified. Four models of HPGe detectors and sample sources were studied. The germanium crystal, the dead layer of the crystal, the central hole, the beryllium window, and the metal housing are the essential parts in detector modeling. A point source, a disc source, and a cylindrical extended source containing a liquid radioactive solution were used to study the TRIPOLI-4 calculations of gamma-ray energy deposition and gamma-ray self-shielding. Calculations of full-energy-peak and total detector efficiencies for different sample-detector geometries were performed. Using the TRIPOLI-4 code, different gamma-ray energies were applied in order to establish the efficiency curves of the HPGe gamma-ray detectors.
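The full-energy-peak efficiency extracted from such calculations is simply the fraction of source photons whose deposited energy falls inside the peak window. A toy deposition model (the interaction probabilities below are made up for illustration, not germanium physics):

```python
import random

def fep_efficiency(n_photons, p_full_deposit, peak_window,
                   e_gamma=0.662, seed=6):
    """Toy full-energy-peak efficiency: each source photon deposits
    either its full energy (probability p_full_deposit) or a random
    partial Compton-like fraction; the FEP efficiency is the fraction
    of histories landing inside the window around e_gamma (MeV)."""
    rng = random.Random(seed)
    in_peak = 0
    for _ in range(n_photons):
        if rng.random() < p_full_deposit:
            e_dep = e_gamma
        else:
            e_dep = rng.uniform(0.0, 0.9) * e_gamma  # partial deposition
        if abs(e_dep - e_gamma) < peak_window:
            in_peak += 1
    return in_peak / n_photons
```

A total efficiency would instead count every history with any nonzero deposition, which is why it always exceeds the full-energy-peak value.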
Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational route between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation. In addition, this model preserves the quality of the physical laws present in the ENDF format. Due to its low computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code; (2) the combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm; (3) the derivation of a model for taking into account anisotropic scattering.
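Point (2) relies on sampling a cross-section band from the probability table each time a resonant isotope is encountered. A minimal sketch of that band sampling (the table values are illustrative):

```python
import random

def sample_cross_section(prob_table, seed=None, rng=None):
    """Sample a total cross section from a subgroup probability table:
    a list of (band_probability, cross_section) pairs for one group,
    with band probabilities summing to one."""
    rng = rng or random.Random(seed)
    xi = rng.random()
    cum = 0.0
    for p, sigma in prob_table:
        cum += p
        if xi < cum:
            return sigma
    return prob_table[-1][1]  # guard against floating-point round-off
```

Averaged over many samples, the sampled values reproduce the table's mean cross section, while the spread between bands is what carries the self-shielding information.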
ROESSEL, ROBERT A., JR.
The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…
DEFF Research Database (Denmark)
Holm, Bent
2005-01-01
A contextualisation of the opera house San Carlo within a cultural-historical context of representation, with particular reference to the concept of napolalità.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; Koju, Vijay [ORNL; John, Dwayne O [ORNL
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a polarized Monte Carlo code implementing Berry phase tracking, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Energy Technology Data Exchange (ETDEWEB)
Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against JEF2.2 previous file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U235, U238, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)
Institute of Scientific and Technical Information of China (English)
宋宏图; 李力; 丁韦; 季关钰
2011-01-01
The large-scale construction of continuously welded (jointless) rail urgently requires an in situ welding method matching it in performance, quality and production efficiency; the methods most used at present are thermite welding and arc welding. This paper introduces the application of narrow-gap arc welding to rail welding, focusing on real-time arc position detection technology and on the process and equipment for automatic narrow-gap arc welding of rails with self-shielded flux-cored wire. Joint performance tests show that joints produced by automatic narrow-gap arc welding with self-shielded flux-cored wire perform well, exceeding the joint performance of thermite welding, the other in situ welding method: they pass the drop-hammer test that thermite welds cannot, their tensile properties are also superior to thermite welding, and their impact performance is significantly better than that of the currently used flash welding, gas pressure welding and thermite welding.
Monte Carlo Radiative Transfer
Whitney, Barbara A
2011-01-01
I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.
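A minimal MCRT sketch in the spirit of these methods: photon packets in a plane-parallel slab with single-scattering albedo a, tracked until absorption or escape (isotropic scattering, no polarization; a toy model, not any particular published scheme):

```python
import math
import random

def slab_transmission(tau, albedo, n_packets, seed=7):
    """Toy Monte Carlo radiative transfer through an absorbing,
    isotropically scattering slab of optical depth tau. Returns the
    fraction of photon packets escaping through the far side."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_packets):
        z, mu = 0.0, 1.0  # packet enters at z = 0 moving inward
        while True:
            z += mu * (-math.log(rng.random()))  # optical path to event
            if z >= tau:
                escaped += 1                     # transmitted
                break
            if z < 0.0:
                break                            # back-scattered out
            if rng.random() > albedo:
                break                            # absorbed
            mu = 2.0 * rng.random() - 1.0        # isotropic scattering
    return escaped / n_packets
```

With zero albedo the transmitted fraction reduces to the analytic direct-beam attenuation exp(-tau), a standard check before adding scattering, dust emission, or polarization.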
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Chin, Lee C L; Worthington, Arthur E; Whelan, William M; Vitkin, I Alex
2007-01-01
Interstitial quantification of the optical properties of tissue is important in biomedicine both for treatment planning of minimally invasive laser therapies and for optical spectroscopic characterization of tissues, for example, prostate cancer. In a previous study, we analyzed a method first demonstrated by Dickey et al. [Phys. Med. Biol. 46, 2359 (2001)] that utilizes relative interstitial steady-state radiance measurements to recover the optical properties of turbid media. The uniqueness of point radiance measurements was demonstrated in a forward sense, and strategies were suggested for improving performance under noisy experimental conditions. In this work, we test our previous conclusions by fitting the P3 approximation for radiance to Monte Carlo predictions and to experimental data in tissue-simulating phantoms. Fits are performed at (1) a single sensor position (0.5 or 1 cm), (2) two sensor positions (0.5 and 1 cm), and (3) a single sensor position (0.5 or 1 cm) with input knowledge of the sample's effective attenuation coefficient. The results demonstrate that single-sensor radiance measurements can be used to retrieve optical properties to within approximately 20%, provided the transport albedo is greater than approximately 0.9. Furthermore, compared to the single-sensor fits, employing radiance data at two sensor positions did not significantly improve the accuracy of the recovered optical properties. However, with knowledge of the effective attenuation coefficient of the medium, optical properties can be retrieved experimentally to within approximately 10% for an albedo greater than or equal to 0.5. PMID:18163843
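The two quantities the fits hinge on, the transport albedo and the effective attenuation coefficient, follow from the absorption and reduced scattering coefficients via standard diffusion-theory relations (a sketch; the numerical values in the test are illustrative tissue-like inputs, not the paper's phantoms):

```python
import math

def effective_attenuation(mu_a, mu_s_prime):
    """Diffusion-theory effective attenuation coefficient:
    mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))  [same units as inputs]."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def transport_albedo(mu_a, mu_s_prime):
    """Transport (reduced single-scattering) albedo:
    a' = mu_s' / (mu_a + mu_s')."""
    return mu_s_prime / (mu_a + mu_s_prime)
```

For a typical soft-tissue-like pair (mu_a = 0.1 cm^-1, mu_s' = 10 cm^-1) the albedo exceeds 0.9, the regime where single-sensor fits were reported to work.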
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Energy Technology Data Exchange (ETDEWEB)
Cooper, R.; Silva, J. H. F.; Trevisan, R. E.
2004-07-01
The present work refers to the characterization of API 5L-X80 pipeline joints welded with self-shielded flux-cored wire. This process was evaluated under preheating conditions, with a uniform and steady heat input. All joints were welded in the flat position (1G), with the pipe turning and the torch still. Tube dimensions were 762 mm in external diameter and 16 mm in thickness. Welds were made on a single-V groove, with six weld beads, at three levels of preheating temperature (room temperature, 100 °C and 160 °C). These temperatures were maintained as interpass temperatures. The filler metal E71T8-K6, with mechanical properties different from the parent metal, was used in undermatched conditions. The weld characterization is presented according to the results of tensile strength, hardness and impact tests, conducted according to API 1104, AWS and ASTM standards. API 1104 and API 5L were used as screening criteria. According to the results obtained, it is appropriate to weld API 5L-X80 steel ducts with self-shielded flux-cored wires in conformance with the API standards, and no preheat temperature is necessary. (Author) 22 refs.
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, F R; Umrigar, C J
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers - first the 2000HBC and later BEBC - to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
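As context for the flat-histogram idea described above, the sketch below implements the closely related Wang-Landau update (not SAMC's prescribed gain sequence) on a toy system whose density of states is known exactly: n independent spins with E equal to the number of up spins, so g(E) = C(n, E). The fixed-length refinement schedule is a simplification of the usual histogram-flatness test.

```python
import math
import random

def wang_landau_ln_g(n_spins=10, ln_f_final=1e-6, steps_per_level=20000, seed=0):
    """Flat-histogram estimate of ln g(E) for n_spins independent spins,
    where E is the number of up spins (exact answer: ln g(E) = ln C(n_spins, E)).
    Uses a simplified fixed-length schedule instead of a flatness test."""
    rng = random.Random(seed)
    ln_g = [0.0] * (n_spins + 1)            # running estimate of ln g(E)
    spins = [rng.randint(0, 1) for _ in range(n_spins)]
    energy = sum(spins)
    ln_f = 1.0                              # modification factor
    while ln_f > ln_f_final:
        for _ in range(steps_per_level):
            i = rng.randrange(n_spins)
            e_new = energy + (1 - 2 * spins[i])    # a flip changes E by +/-1
            # accept with probability min(1, g(E)/g(E_new)) to flatten visits
            if math.log(rng.random() + 1e-12) < ln_g[energy] - ln_g[e_new]:
                spins[i] ^= 1
                energy = e_new
            ln_g[energy] += ln_f            # update the occupied bin
        ln_f /= 2.0                         # tighten the refinement
    return [x - ln_g[0] for x in ln_g]      # normalize so ln g(0) = 0
```

For n_spins = 10 the estimate of ln g(5) should approach ln C(10,5) = ln 252.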
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
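The outline's two core ingredients, estimating pi by random sampling and inverse transform sampling, can be sketched in a few lines (illustrative Python, not taken from the lecture slides):

```python
import math
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that lands inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

def sample_exponential(rate, u):
    """Inverse transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U) / rate follows an Exponential(rate) distribution."""
    return -math.log(1.0 - u) / rate
```

By the Law of Large Numbers the pi estimate converges as n grows, and by the Central Limit Theorem its error shrinks like 1/sqrt(n).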
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Aygun, Bünyamin; Korkut, Turgay; Karabulut, Abdulhalik
2016-05-01
Despite the possibility of fossil fuel depletion and increasing energy needs, the use of radiation tends to increase. Recently, the security-focused debate about planned nuclear power plants has continued. The objective of this thesis is to prevent radiation from nuclear reactors spreading into the environment. To this end, we produced new, higher-performance shielding materials that strongly attenuate the radiation present during reactor operation. Additives used in the new shielding materials included iron (Fe), rhenium (Re), nickel (Ni), chromium (Cr), boron (B), copper (Cu), tungsten (W), tantalum (Ta) and boron carbide (B4C). The experimental results indicated that these materials are good shields against both gamma rays and neutrons. The powder metallurgy technique was used to produce the new shielding materials. The FLUKA and Geant4 Monte Carlo simulation codes and WinXCom were used to determine the component percentages of the high-temperature-resistant, fast-neutron and gamma shielding materials. Superalloys were produced, and experimental fast-neutron dose-equivalent measurements and gamma-radiation absorption measurements of the new shielding materials were then carried out. The products can be used safely not only in reactors but also in nuclear medicine treatment rooms, nuclear waste storage, nuclear research laboratories, and against cosmic radiation in space vehicles.
Directory of Open Access Journals (Sweden)
Pedro Pablo Ferrer Gallego
2012-07-01
Epistolary correspondence between Carlos Vicioso and Carlos Pau during a stay in Bicorp (Valencia). A set of letters sent by Carlos Vicioso to Carlos Pau during his stay in Bicorp (Valencia) between 1914 and 1915 is presented and discussed here. The letters are held in the Archivo Histórico del Instituto Botánico de Barcelona. This correspondence marks the beginning of the scientific relationship between Vicioso and Pau, based at first on Vicioso's consultations of Pau for the determination of the species that he sent from Bicorp as herbarium sheets. Nowadays these witness sheets are preserved in various national and international herbaria, thanks to the exchange of botanical material between Vicioso and other botanists of the time, mainly Pau, Sennen and Font Quer.
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Optimization of Monte Carlo simulations
Bryskhe, Henrik
2009-01-01
This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is PENELOPE, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card to do geometry calculations, and raytracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Raytracing provides an approximation of the Monte Carlo simulation but is much faster to perform. A program was ...
Challenges of Monte Carlo Transport
Energy Technology Data Exchange (ETDEWEB)
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Computational Physics and Methods (CCS-2)
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
International Nuclear Information System (INIS)
The course on ''Monte Carlo Techniques'' will try to give a general overview of how to build a method, based on a given theory, that allows you to compare the outcome of an experiment with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing the e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method relates to the detector. The interactions that a particle can undergo along its way through the detector, as well as the response of the different materials that compose the detector, will be covered in this course. An example of a detector of the LEP era, in which these techniques are being applied, will close the course. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Monte Carlo methods for electromagnetics
Sadiku, Matthew N.O.
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications.Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
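The fixed random walk method mentioned above can be illustrated on a textbook case: Laplace's equation on a square grid with the top edge held at 100 V and the other edges grounded, where the potential at the centre is exactly 25 V by symmetry. The sketch below is illustrative Python, not code from the book; the value at an interior point is the average boundary potential seen by random walks started there.

```python
import random

def walk_on_grid(n, start, boundary_value, rng):
    """Fixed random walk: step to a random lattice neighbour until the
    boundary is reached; return the boundary potential at the exit point."""
    x, y = start
    while 0 < x < n and 0 < y < n:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary_value(x, y)

def laplace_mc(n=20, walks=10000, seed=0):
    """Estimate the potential at the centre of an n x n grid with the top
    edge held at 100 V and the other three edges grounded (exact: 25 V)."""
    rng = random.Random(seed)
    top = lambda x, y: 100.0 if y == n else 0.0
    centre = (n // 2, n // 2)
    return sum(walk_on_grid(n, centre, top, rng) for _ in range(walks)) / walks
```

The statistical error falls as 1/sqrt(walks), the hallmark of all Monte Carlo estimators in this collection.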
Directory of Open Access Journals (Sweden)
Hammam Oktajianto
2014-12-01
Full Text Available Gas-cooled nuclear reactors are Generation IV concepts that have been receiving significant attention due to desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The pebble-bed high temperature reactor (HTR), one type of gas-cooled reactor concept, is getting attention. In the HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to obtain the desired criticality. This paper models a 10 MW HTR pebble-bed reactor and determines the effective enrichment and radius of the fuel kernel needed to reach criticality. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a simple-cubic (SC) lattice. The fuel pebbles and moderator pebbles were distributed in the core zone using a body-centred cubic lattice, assuming fresh fuel, with the fuel enrichment varied from 7% to 17% in 1% steps and the fuel-kernel radius varied from 175 to 300 µm in 25 µm steps. The geometrical model of the full reactor was obtained using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed along with the necessary simplifications. Criticality calculations were performed with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the results it can be concluded that the combinations of enrichment and kernel radius that achieve criticality are: 15-17% at a radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm, and 10-13% at 300 µm; these effective enrichments and kernel radii can therefore be considered for the 10 MW HTR. Keywords: MCNP4C, HTR, enrichment, radius, criticality
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluating the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis with the Kolmogorov-Smirnov test. We then derive scaling laws for the efficiency, illustrated by Variational Monte Carlo calculations on the two-dimensional electron gas.
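The effective-variance problem discussed above arises because Monte Carlo samples along a diffusion process are serially correlated, so the naive error formula underestimates the true error. A standard remedy is blocking: average the data in blocks longer than the correlation time and compute the error from the block means. The sketch below is a generic illustration on synthetic AR(1) data, not the authors' method.

```python
import math
import random

def blocked_error(data, block_size):
    """Standard error of the mean estimated from block averages; blocks
    longer than the correlation time absorb the autocorrelation."""
    n_blocks = len(data) // block_size
    means = [sum(data[i * block_size:(i + 1) * block_size]) / block_size
             for i in range(n_blocks)]
    mu = sum(means) / n_blocks
    var = sum((m - mu) ** 2 for m in means) / (n_blocks - 1)
    return math.sqrt(var / n_blocks)

def ar1_series(n, rho, seed=0):
    """Autocorrelated test data: x[t] = rho * x[t-1] + Gaussian noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

For rho = 0.9 the true error exceeds the naive (block size 1) estimate by roughly sqrt((1+rho)/(1-rho)) ≈ 4.4, which the blocked estimate recovers.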
Soares, Fabiana Vieira; Greve, Patrícia; Sendín, Francisco Alburquerque; Benze, Benedito Galvão; de Castro, Alessandra Paiva; Rebelatto, José Rubens
2012-01-01
The aim of this study was to identify the correlation between the number of deaths of elderly people and climate change in the district of São Carlos (SP) over a period of 10 years (1997-2006). Records of deaths were obtained from DATASUS for people aged over 60 who died between 1997 and 2006 in São Carlos. The average monthly maximum and minimum temperature data and relative air humidity in São Carlos were provided by the National Institute of Meteorology. The mortality coefficient of the district was calculated by gender and age and the resulting data were analyzed using t test, one-way ANOVA, the Bonferroni test and the Pearson correlation coefficient test. There were 8,304 deaths which predominantly occurred among males aged over 80, and diseases of the circulatory system were the main cause of death. There was a positive correlation between mortality by infectious disease and minimum humidity, and a negative correlation between mortality by infectious diseases and minimum temperatures, between mortality caused by respiratory disease and minimum humidity, between mortality caused by endocrine disease and minimum and maximum temperature. Thereby, it was possible to conclude that there was a correlation between climate change and mortality among elderly individuals in São Carlos.
Parallelizing Monte Carlo with PMC
Energy Technology Data Exchange (ETDEWEB)
Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.
1994-11-01
PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
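The essence of the serial/parallel/distributed decomposition described above is that each processor runs an independent, reproducible random stream and only partial tallies are combined. Below is a minimal sketch in plain Python, run sequentially here; in a PMC-style code each worker call would execute on its own MPP processor, with the interface layer handling message passing. The integrand and seeding scheme are illustrative choices, not PMC's.

```python
import random

def worker(seed, n_samples):
    """One Monte Carlo 'node': integrates f(x) = x^2 on [0, 1] with its
    own independent, reproducible random stream (exact answer: 1/3)."""
    rng = random.Random(seed)
    total = sum(rng.random() ** 2 for _ in range(n_samples))
    return total, n_samples

def parallel_estimate(n_workers, n_per_worker, base_seed=1234):
    """Combine per-node partial sums into one estimate. The workers run
    sequentially here; a real PMC run would distribute them across
    processors and gather the (total, count) pairs via messages."""
    results = [worker(base_seed + 1000 * k, n_per_worker)
               for k in range(n_workers)]
    total = sum(t for t, _ in results)
    count = sum(n for _, n in results)
    return total / count
```

Because each worker's stream depends only on its seed, the combined result is reproducible regardless of how the work is scheduled.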
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Dania Soguero; Ardanza, Armando Chavez, E-mail: sdania@ceaden.edu.cu [Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear (CEADEN), La Habana (Cuba)
2013-07-01
This paper describes the process of installing a category I self-shielded irradiator, model ISOGAMMA LL.Co, loaded with {sup 60}Co of 25 kCi nominal activity, an absorbed dose rate of 8 kGy/h and a 5 L working volume. The stages are described step by step: import; the customs procedure, which included the interview with the master of the transport vessel; the monitoring of the entire process by the head of radiological protection of the importing centre; control of the surface contamination levels of the sources' shipping container before removal from the ship; the supervision of the national regulatory authority; and transportation to the final destination. Details of the assembly of the installation and the opening of the transport container are outlined. The action plan previously developed for the occurrence of radiological events is presented, detailing the loading of the radioactive sources by specialists from the company selling the facility (IZOTOP). Finally, the paper describes the commissioning of the installation and the licensing procedure for its operation.
Quantum Monte Carlo for vibrating molecules
International Nuclear Information System (INIS)
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data was collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate, and that discrete potential representations may be used to conveniently determine vibrational state energies.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
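Weight-window biasing itself reduces to two local operations on a particle's statistical weight: splitting when the weight is above the window, and Russian roulette when it is below, both chosen to preserve the expected total weight. The sketch below is a generic illustration with hypothetical window parameters, not the implementation described in the report.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Return the list of particle weights produced by applying a weight
    window [w_low, w_high]. The expected total weight is preserved."""
    if weight > w_high:
        # split into n copies of equal weight that fall inside the window
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv,
        # promoted to the survival weight, else terminate
        w_surv = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]
```

Splitting is deterministic and exactly weight-preserving; roulette preserves weight only in expectation, trading a little variance for far fewer low-weight histories.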
Energy Technology Data Exchange (ETDEWEB)
Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)
2013-01-15
Highlights: • TARC experiment benchmark capture rate results. • Utilization of updated databases, including ADSLib. • Self-shielding effect in reactor design for transmutation. • Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron captures of {sup 99}Tc, {sup 127}I and {sup 129}I. The analysis of the results shows how shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rate. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (∼200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Synchronous Parallel Kinetic Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Martínez, E; Marian, J; Kalos, M H
2006-12-14
A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
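The standard serial kMC loop that the synchronous parallel algorithm generalizes can be sketched in a few lines (a minimal residence-time sketch; the rate table and step count are illustrative, not taken from the paper):

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time kMC step: choose an event with probability
    proportional to its rate, then advance the clock by an exponentially
    distributed increment with mean 1/R, where R is the total rate."""
    total = sum(rates)
    threshold = rng.random() * total
    acc = 0.0
    event = len(rates) - 1            # fallback for rounding at the edge
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            event = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # 1 - u avoids log(0)
    return event, dt

rng = random.Random(42)
rates = [1.0, 0.5, 0.1]               # illustrative event rates
t = 0.0
counts = [0] * len(rates)
for _ in range(10_000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
# Events occur roughly in proportion to their rates (about 6250/3125/625).
```

The serial bottleneck is visible here: each step consumes one global clock increment, which is what a time-synchronous parallel formulation must reproduce exactly across processors.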
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
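The core idea, a flat binary file of fixed-size particle-state records, can be illustrated with a short sketch (the record layout below is hypothetical and chosen for illustration only; it is not the actual MCPL format, and Python stands in for the portable C code):

```python
import os
import struct
import tempfile

# Hypothetical fixed-size record (NOT the real MCPL layout): PDG code,
# position (x, y, z), direction (ux, uy, uz), kinetic energy, weight.
RECORD = struct.Struct("<i8d")        # little-endian: 1 int32 + 8 float64

def write_particles(path, particles):
    """Append one packed record per particle to a binary file."""
    with open(path, "wb") as f:
        for p in particles:
            f.write(RECORD.pack(*p))

def read_particles(path):
    """Read the whole file back as a list of particle-state tuples."""
    with open(path, "rb") as f:
        data = f.read()
    return [RECORD.unpack_from(data, i * RECORD.size)
            for i in range(len(data) // RECORD.size)]

# Round-trip two particles (a neutron and a photon, PDG codes 2112 and 22).
path = os.path.join(tempfile.gettempdir(), "particles.bin")
src = [(2112, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0e-8, 1.0),
       (22,   1.0, 2.0, 3.0, 0.0, 1.0, 0.0, 1.0e-3, 0.5)]
write_particles(path, src)
back = read_particles(path)
```

Fixed-size records with an explicit byte order are what make such a format easy to exchange between simulation packages written in different languages.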
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions
Energy Technology Data Exchange (ETDEWEB)
Barbosa, Nilseia A.; Rosa, Luiz A. Ribeiro da, E-mail: nilseia@ird.gov.br, E-mail: lrosa@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ),Rio de Janeiro, RJ (Brazil); Braz, Delson, E-mail: delson@nuclear.ufrj.br [Coordenacao dos programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2014-07-01
The COC ophthalmic applicators, using a beta radiation source of {sup 106}Ru/{sup 106}Rh, are used in the treatment of intraocular tumors near the optic nerve. In this type of treatment it is very important to know the dose distribution, in order to deliver the prescribed dose to the tumor as well as possible while preserving the optic nerve, an extremely critical region that, if damaged, can compromise the patient's visual acuity and cause brain sequelae. These dose distributions are complex, and the doctors responsible for the therapy only have the source calibration certificate provided by the manufacturer, Eckert and Ziegler BEBIG GmbH. These certificates provide 10 absorbed dose values at water depths along the applicator central axis, with uncertainties of the order of 20%, and an isodose plot in a plane located 1 mm from the applicator surface. It is therefore important to know the dose distributions in water generated by such applicators in more detail and with greater precision. To this end, Monte Carlo simulation with the MCNPX code was used. First, the simulation was validated by comparing the results obtained along the applicator central axis with those provided by the certificate. The differences were lower than 5%, validating the method used. Lateral dose profiles and dose rates in mGy.min{sup -1} were then calculated for 6 different depths at intervals of 1 mm.
Quantum Monte Carlo for vibrating molecules
Energy Technology Data Exchange (ETDEWEB)
Brown, W.R. [Univ. of California, Berkeley, CA (United States). Chemistry Dept.]|[Lawrence Berkeley National Lab., CA (United States). Chemical Sciences Div.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
Monte Carlo simulation of the seed germination process
International Nuclear Information System (INIS)
This paper presents a mathematical model of the seed germination process based on the Monte Carlo method and on theoretical premises drawn from the physiology of seed germination, which suggests three consecutive stages: physical, biochemical and physiological. The model was experimentally verified by determining germination characteristics for seeds of ground tomatoes, Promyk cultivar, within a broad range of temperatures (from 15 to 30 deg C)
Monte Carlo Simulation Optimizing Design of Grid Ionization Chamber
Institute of Scientific and Technical Information of China (English)
ZHENG; Yu-lai; WANG; Qiang; YANG; Lu
2013-01-01
The grid ionization chamber detector is often used for measuring charged particles. Based on the Monte Carlo simulation method, the energy loss distribution and electron-ion pairs of alpha particles with different energies have been calculated to determine a suitable filling gas in the ionization chamber filled with
Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation
Silver, N. Clayton; Hittner, James B.; May, Kim
2004-01-01
The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
MOSFET GATE CURRENT MODELLING USING MONTE-CARLO METHOD
Voves, J.; Vesely, J.
1988-01-01
The new technique for determining the probability of hot-electron travel through the gate oxide is presented. The technique is based on the Monte Carlo method and is used in MOSFET gate current modelling. The calculated values of gate current are compared with experimental results from direct measurements on MOSFET test chips.
Institute of Scientific and Technical Information of China (English)
王攀; 肖军; 李映映; 李子越; 汪超
2016-01-01
Background: As an important activation material, accurate measurement of the 115In neutron inelastic scattering cross section is of great significance for neutron flux monitoring. Purpose: The purpose is to measure the 115In neutron inelastic scattering cross section and compare the results with existing data. Methods: The cross sections at 2.95 MeV, 3.94 MeV and 5.24 MeV were measured using the activation technique at the 2.5-MV electrostatic accelerator of Sichuan University, with the D(d,n)3He reaction as the neutron source and 197Au as a standard. The deviations caused by multiple scattering and self-shielding in the experiment were corrected with MCNPX (Monte Carlo N-Particle eXtended). Results: 115In neutron inelastic scattering cross sections at the three energies were obtained after the Monte Carlo corrections, and the results agree well with the calculated values of Loevestam. Conclusion: The effects of multiple scattering and self-shielding can be reduced by reducing the thickness of the target tube, bottom lining, water layer and cladding material of the sample.
An unbiased Hessian representation for Monte Carlo PDFs
Energy Technology Data Exchange (ETDEWEB)
Carrazza, Stefano; Forte, Stefano [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano (Italy); Kassabov, Zahari [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino (Italy); Latorre, Jose Ignacio [Universitat de Barcelona, Departament d' Estructura i Constituents de la Materia, Barcelona (Spain); Rojo, Juan [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom)
2015-08-15
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set. (orig.)
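The flavor of a replica-to-Hessian conversion can be illustrated with a PCA-style sketch on a toy ensemble (this simplification is ours: the actual mc2hessian method uses a subset of the replicas themselves, selected by a genetic algorithm, rather than principal components):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "replica" ensemble: 100 Monte Carlo replicas of a quantity sampled
# at 20 points (think: a PDF on an x-grid), with only 3 genuine
# underlying degrees of freedom.
latent = rng.normal(size=(100, 3))
basis = rng.normal(size=(3, 20))
replicas = 5.0 + latent @ basis          # shape (100, 20)

central = replicas.mean(axis=0)
fluctuations = replicas - central

# SVD of the fluctuation matrix yields orthogonal error directions;
# keeping the leading k of them gives a compact Hessian-like basis.
_, s, vt = np.linalg.svd(fluctuations, full_matrices=False)
k = 3
hessian_like = vt[:k] * (s[:k, None] / np.sqrt(len(replicas) - 1))

# 1-sigma error band from the full MC ensemble vs. the reduced basis.
mc_err = replicas.std(axis=0, ddof=1)
hess_err = np.sqrt((hessian_like ** 2).sum(axis=0))
```

For this rank-3 toy ensemble the 3-parameter basis reproduces the Monte Carlo error band essentially exactly; for real PDF sets the number of retained parameters trades compactness against information loss.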
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Kinematics of multigrid Monte Carlo
International Nuclear Information System (INIS)
We study the kinematics of multigrid Monte Carlo algorithms by means of acceptance rates for nonlocal Metropolis update proposals. An approximation formula for acceptance rates is derived. We present a comparison of different coarse-to-fine interpolation schemes in free field theory, where the formula is exact. The predictions of the approximation formula for several interacting models are well confirmed by Monte Carlo simulations. The following rule is found: for a critical model with fundamental Hamiltonian Η(φ), absence of critical slowing down can only be expected if the expansion of ⟨Η(φ+ψ)⟩ in terms of the shift ψ contains no relevant (mass) term. We also introduce a multigrid update procedure for nonabelian lattice gauge theory and study the acceptance rates for gauge group SU(2) in four dimensions. (orig.)
Neural Adaptive Sequential Monte Carlo
Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E
2015-01-01
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...
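The bootstrap particle filter on which such adaptive proposals improve can be sketched as follows (a minimal linear-Gaussian toy model; all parameters and the use of the transition prior as proposal are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + v_t, y_t = x_t + w_t.
T, N = 50, 2000
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = 0.9 * xs[t - 1] + rng.normal(scale=0.5)
ys = xs + rng.normal(scale=0.5, size=T)

particles = rng.normal(scale=1.0, size=N)
estimates = []
for y in ys:
    # Propagate through the transition prior (the "proposal").
    particles = 0.9 * particles + rng.normal(scale=0.5, size=N)
    # Weight by the likelihood of the observation.
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    w /= w.sum()
    estimates.append(np.dot(w, particles))
    # Multinomial resampling to combat weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]
estimates = np.array(estimates)
```

Here the proposal is fixed to the transition prior; a poorly matched proposal inflates the weight variance, which is exactly the failure mode that adaptive schemes such as the one in this paper target.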
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by the adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL{sup -3}) using a single-level version of the adaptive algorithm to O((TOL{sup -1} log(TOL)){sup 2}).
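The uniform-grid multilevel estimator of Giles that this work extends can be sketched for a geometric Brownian motion (the parameters and fixed sample allocations are illustrative; the paper's contribution, adaptive non-uniform time steps, is not shown):

```python
import math
import random

rng = random.Random(7)

def euler_pair(nsteps, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """One Euler path of a geometric Brownian motion on a fine grid of
    `nsteps` steps, coupled to a coarse path on nsteps/2 steps driven by
    the same Brownian increments (the multilevel control variate)."""
    dt = T / nsteps
    fine, coarse = s0, s0
    for _ in range(nsteps // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        fine += mu * fine * dt + sigma * fine * dw1
        fine += mu * fine * dt + sigma * fine * dw2
        coarse += mu * coarse * (2 * dt) + sigma * coarse * (dw1 + dw2)
    return fine, coarse

# Telescoping estimator: E[P_L] = E[P_1] + sum_{l=2..L} E[P_l - P_{l-1}],
# where level l uses 2^l time steps and finer levels get fewer samples.
levels = [1, 2, 3, 4]
samples = [40000, 10000, 2500, 625]
estimate = 0.0
for level, n in zip(levels, samples):
    acc = 0.0
    for _ in range(n):
        fine, coarse = euler_pair(2 ** level)
        acc += fine if level == levels[0] else fine - coarse
    estimate += acc / n
# Exact answer for comparison: E[S_T] = s0 * exp(mu * T), about 1.0513.
```

The correction terms E[P_l - P_{l-1}] have small variance because the coupled paths share the same Brownian increments, which is why the finer, more expensive levels need far fewer samples.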
Monomial Gamma Monte Carlo Sampling
Zhang, Yizhe; Wang, Xiangyu; Chen, Changyou; Fan, Kai; Carin, Lawrence
2016-01-01
We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling by demonstrating their connection under the canonical transformation from Hamiltonian mechanics. This insight enables us to extend HMC and slice sampling to a broader family of samplers, called monomial Gamma samplers (MGS). We analyze theoretically the mixing performance of such samplers by proving that the MGS draws samples from a target distribution with zero-autocorrelation, in the limit of a single parameter. This propert...
Self shielding of surfaces irradiated by intense energy fluxes
Energy Technology Data Exchange (ETDEWEB)
Varghese, P.L.; Howell, J.R.; Propp, A.
1991-08-01
This dissertation outlines direct methods of temperature, density, composition, and velocity measurement that should be widely applicable to railgun systems. The measurements demonstrated here should provide a useful basis for further studies of plasma/target interaction.
Structure of Self-shielding Electron Beam Installation for Sterilization
Institute of Scientific and Technical Information of China (English)
2002-01-01
In order to prevent terrorists from using letters containing anthrax germs or spores sent through the postal system to disturb society, and to defend people's life and safety, the China Institute of Atomic Energy (CIAE) has developed a self-shielding electron beam installation for sterilization (SEBIS).
Monte Carlo simulation of virtual Compton scattering below pion threshold
International Nuclear Information System (INIS)
This paper describes the Monte Carlo simulation developed specifically for the Virtual Compton Scattering (VCS) experiments below pion threshold that have been performed at MAMI and JLab. This simulation generates events according to the (Bethe-Heitler + Born) cross-section behaviour and takes into account all relevant resolution-deteriorating effects. It determines the 'effective' solid angle for the various experimental settings which are used for the precise determination of the photon electroproduction absolute cross-section
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Monte Carlo modeling and meteor showers
International Nuclear Information System (INIS)
Kulikova, N. V.
1987-08-01
Prediction of short-lived increases in the cosmic dust influx, the concentration in the lower thermosphere of atoms and ions of meteor origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
Cuartel San Carlos. Yacimiento veterano
Directory of Open Access Journals (Sweden)
Mariana Flores
2007-01-01
Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the end of the 18th century (1785-1790), notable for having suffered various adversities during its construction and for withstanding the earthquakes of 1812 and 1900. In 2006, the body responsible for its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration, covering the rear courtyard (Traspatio), the central courtyard (Patio Central) and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through this project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the 20th century. The site also yielded a wide sample of archaeological materials that document everyday military life, as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defense of the different regimes the country passed through, from the era of Spanish imperialism to the present day.
International Nuclear Information System (INIS)
calculation solver SNATCH in the PARIS code platform. The latter uses transport theory, which is indispensable for the analysis of new-generation fast reactors. The principal conclusions are as follows: a Monte Carlo assembly calculation code is an interesting way (in the sense of avoiding the difficulties of the self-shielding calculation and of the limited-order development of anisotropy parameters, and of treating exact 3D geometries) to validate deterministic codes like ECCO or APOLLO3 and to produce the multi-group constants for deterministic or Monte Carlo multi-group calculation codes. The results obtained so far with the multi-group constants calculated by the TRIPOLI-4 code are comparable with those produced from ECCO, but did not show remarkable advantages. (author)
Carlos Restrepo. Un verdadero Maestro
Pelayo Correa
2009-01-01
Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renovating and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions...
Energy Technology Data Exchange (ETDEWEB)
Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P., E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), Uberlandia, MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
A special pencil-type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The largest influence, 7.0%, was obtained for the PMMA body. (author)
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Monte Carlo primer for health physicists
International Nuclear Information System (INIS)
The basic ideas and principles of Monte Carlo calculations are presented in the form of a primer for health physicists. A simple integral with a known answer is evaluated by two different Monte Carlo approaches. Random numbers, which underlie Monte Carlo work, are discussed, and a sample table of random numbers generated by a hand calculator is presented. Monte Carlo calculations of dose and linear energy transfer (LET) from 100-keV neutrons incident on a tissue slab are discussed. The random-number table is used in a hand calculation of the initial sequence of events for a 100-keV neutron entering the slab. Some pitfalls in Monte Carlo work are described. While this primer addresses mainly the bare bones of Monte Carlo, a final section briefly describes some of the more sophisticated techniques used in practice to reduce variance and computing time
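The primer's opening exercise, a simple integral with a known answer evaluated by two different Monte Carlo approaches, can be reproduced in a few lines (the choice of integral, the integral of x^2 over [0, 1] with exact value 1/3, is ours for illustration):

```python
import random

rng = random.Random(123)
n = 100_000
exact = 1.0 / 3.0                     # integral of x^2 over [0, 1]

# Approach 1: sample-mean (crude) Monte Carlo -- average f(x) = x**2
# over points drawn uniformly from [0, 1).
mean_est = sum(rng.random() ** 2 for _ in range(n)) / n

# Approach 2: hit-or-miss -- fraction of uniform points (x, y) in the
# unit square that fall under the curve y = x**2.
hits = 0
for _ in range(n):
    x, y = rng.random(), rng.random()
    if y < x * x:
        hits += 1
hit_est = hits / n
```

Both estimates converge to 1/3 at the usual 1/sqrt(n) Monte Carlo rate; the sample-mean estimator has the lower variance of the two, which is the primer's point of comparing them.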
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
for commissioning of a Monte Carlo model of a medical linear accelerator, ensuring agreement with measurements within 1% for a range of situations, is presented. The resulting Monte Carlo model was validated against measurements for a wider range of situations, including small field output factors, and agreement...... modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: Building, commissioning...... and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
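The deterministic power iteration underlying the Monte Carlo Power Method can be sketched matrix-free, applying the matrix only through a routine rather than storing it (the 3x3 example and the plain, non-stochastic iteration are our illustration; the thesis method replaces the exact matrix-vector product with a sampled estimate):

```python
import math

def matvec(v):
    """Apply the matrix through a routine.  In practice the matrix would
    be too memory-intensive to build; here a small symmetric matrix with
    eigenvalues {1, 2, 4} stands in for it."""
    a = [[2.0, 1.0, 0.0],
         [1.0, 3.0, 1.0],
         [0.0, 1.0, 2.0]]
    return [sum(a[i][j] * v[j] for j in range(3)) for i in range(3)]

def power_iteration(apply_a, v0, iters=100):
    """Estimate the dominant eigenvalue: repeatedly apply A and
    normalize, then take a final Rayleigh quotient."""
    v = v0[:]
    for _ in range(iters):
        w = apply_a(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = apply_a(v)
    lam = sum(vi * wi for vi, wi in zip(v, w))   # v is unit-norm
    return lam, v

lam, vec = power_iteration(matvec, [1.0, 0.0, 0.0])
# lam converges to the dominant eigenvalue, 4, for this example.
```

The convergence rate is governed by the ratio of the second eigenvalue to the first, which is exactly why the second eigenvalues of Monte Carlo transition matrices, as computed in the thesis, quantify how fast an algorithm equilibrates.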
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
Energy Technology Data Exchange (ETDEWEB)
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Monte Carlo Simulations of the Photospheric Process
Santana, Rodolfo; Hernandez, Roberto A; Kumar, Pawan
2015-01-01
We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon to electron ratio $N_{\\gamma}/N_{e} = 10^{5}$, as determined by observations of the GRB prompt emission. We searched an exhaustive parameter space to determine if the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, we determined that the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor $\\sim 10$. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak-energy. By considering electron re-heating near the photosphere, the spectrum for these simulations h...
Directory of Open Access Journals (Sweden)
Bárbara Bustamante
2005-10-01
Full Text Available The talent of Carlos Alonso (Argentina, 1929) has succeeded in conquering a language with a style of its own. His drawings, paintings, pastels and inks, collages and engravings fixed in the visual field the projection of his subjectivity. Both image and word make explicit a critical vision of reality, one that presses on viewers and forces them into a reflective stance committed to the message; this is the aspect most emphasized by art historians. The present study, however, aims to focus on the iconic and plastic aspects of his work.
Monte Carlo lattice program KIM
International Nuclear Information System (INIS)
The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed
International Nuclear Information System (INIS)
The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described; geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
A new method for commissioning Monte Carlo treatment planning systems
Aljarrah, Khaled Mohammed
2005-11-01
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. Determining the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions in a water phantom, for a grid of assumed beam energies and radii, with measurement data. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined in light of this method. Under the assumption that linacs of the same type have identical geometries and differ only in their initial phase-space parameters, the results of this method can serve as source data for commissioning other machines of the same type.
Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
Ying Zhang; Yuanming Feng; Xin Ming; Jun Deng
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient ana...
Directory of Open Access Journals (Sweden)
A.C.P. de A. Primavesi
1994-04-01
Full Text Available In a trial conducted on a dystrophic Red-Yellow Latosol at the EMBRAPA-CPPSE site in São Carlos, located at 22°01'S and 47°53'W, at an altitude of 856 m and with a mean annual rainfall of 1502 mm, the bromatological composition of leaves, stems smaller than 6 mm in diameter, and pods of leucaena genotypes was determined. The genotypes evaluated were: L. leucocephala cv. Texas 1074 (T1), L. leucocephala 29 A9 (T2), L. leucocephala 11 x L. diversifolia 25 (T3), L. leucocephala 11 x L. diversifolia 26 (T4), L. leucocephala 24-19/2-39 x L. diversifolia 26 (T5), and L. leucocephala cv. Cunningham (control). The genotypes evaluated showed no differences in the bromatological determinations made on leaves and fine stems; genotype T3 registered the highest crude protein content (28.06%), the highest phosphorus content (0.29%), the highest CP/NDF ratio, and the lowest NDF content for pods. The genotypes showed the following mean contents, in percent, for the bromatological composition of leaves, pods, and fine stems, respectively: crude protein (18.57; 21.68; 6.41), neutral detergent fiber (29.09; 41.58; 71.01), phosphorus (0.12; 0.22; 0.06), calcium (1.39; 0.36; 0.49), magnesium (0.51; 0.28; 0.24), tannin (1.32; 1.15; 0.28), and in vitro digestibility (58.39; 61.22; 33.61). Protein and phosphorus contents decreased in the order pods > leaves > fine stems; calcium contents in the order leaves > fine stems > pods; and magnesium contents in the order leaves > pods > fine stems.
Carlos Restrepo. Un verdadero Maestro
Directory of Open Access Journals (Sweden)
Pelayo Correa
2009-12-01
Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing, creative spirit that undertook with great success the task of changing the academic culture of the Valle del Cauca. They found a tranquil society that enjoyed the generosity of its surroundings, with no desire to break with century-old traditions of a simple, contented way of life. When children had the desire and the ability to pursue university studies, especially in medicine, the family sent them to cooler climates, which supposedly favored brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, recruited largely from national and foreign universities, knew very well that the local environment was no obstacle to a first-class university education. Carlos Restrepo was a prototype of this spirit of change and of the intellectual formation of new generations. He showed it in many ways, largely through his cheerful, extroverted, optimistic character and his easy, contagious laugh. But this amiable side of his personality did not hide his formative mission; he demanded dedication and hard work from his students, faithfully recorded in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers devoted itself completely (full time and exclusive dedication) and organized the new Faculty into well-defined, well-structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry, and Preventive Medicine. The departments integrated their core functions of teaching, research, and service to the community. The center
Fast Monte Carlo for radiation therapy: the PEREGRINE Project
Energy Technology Data Exchange (ETDEWEB)
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
Linear Scaling Quantum Monte Carlo Calculations
Williamson, Andrew
2002-03-01
New developments to the quantum Monte Carlo approach are presented that improve the scaling of the time required to calculate the total energy of a configuration of electronic coordinates from N^3 to nearly linear[1]. The first factor of N is achieved by applying a unitary transform to the set of single particle orbitals used to construct the Slater determinant, creating a set of maximally localized Wannier orbitals. These localized functions are then truncated beyond a given cutoff radius to introduce sparsity into the Slater determinant. The second factor of N is achieved by evaluating the maximally localized Wannier orbitals on a cubic spline grid, which removes the size dependence of the basis set (e.g. plane waves, Gaussians) typically used to expand the orbitals. Application of this method to the calculation of the binding energy of carbon fullerenes and silicon nanostructures will be presented. An extension of the approach to deal with excited states of systems will also be presented in the context of the calculation of the excitonic gap of a variety of systems. This work was performed under the auspices of the U.S. Dept. of Energy at the University of California/LLNL under contract no. W-7405-Eng-48. [1] A.J. Williamson, R.Q. Hood and J.C. Grossman, Phys. Rev. Lett. 87 246406 (2001)
Anomalous scaling in the random-force-driven Burgers equation. A Monte Carlo study
Energy Technology Data Exchange (ETDEWEB)
Mesterhazy, D. [Technische Univ. Darmstadt (Germany). Inst. fuer Kernphysik; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2011-07-15
We present a new approach to determine numerically the statistical behavior of small-scale structures in hydrodynamic turbulence. Starting from the functional integral for the random-force-driven Burgers equation we show that Monte Carlo simulations allow for the computation of structure function scaling exponents to high precision. Given the general applicability of Monte Carlo methods, this opens up the possibility to address also other systems relevant to turbulence within this framework. (orig.)
Study of nuclear pairing with Configuration-Space Monte-Carlo approach
Lingle, Mark; Volya, Alexander
2015-01-01
Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are d...
Monte Carlo simulations of plutonium gamma-ray spectra
International Nuclear Information System (INIS)
Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function for a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes, for the x-rays, and Gaussian distributions. The MGA code determined the Pu isotopes and specific power of this calculated spectrum and compared it to a similar analysis on a measured spectrum
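The response-folding step described above can be sketched as follows. This is an illustrative toy, not the MCNP/MGA workflow: the channel count, peak positions, intensities, and a purely Gaussian response are all assumptions.

```python
import math

def gaussian_response(channels, peaks, fwhm=2.0):
    """Fold ideal line intensities into a Gaussian detector response.

    peaks: list of (centroid_channel, intensity) pairs. A real HPGe
    response would also include Lorentzian x-ray shapes and continuum.
    """
    sigma = fwhm / 2.355  # convert FWHM to standard deviation
    spectrum = [0.0] * channels
    for e0, intensity in peaks:
        for ch in range(channels):
            spectrum[ch] += intensity * math.exp(-0.5 * ((ch - e0) / sigma) ** 2)
    return spectrum

# two hypothetical peaks at channels 30 and 70
spec = gaussian_response(100, [(30.0, 1000.0), (70.0, 500.0)])
print(max(range(100), key=lambda ch: spec[ch]))  # → 30
```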
Nuclear pairing within a configuration-space Monte Carlo approach
Lingle, Mark; Volya, Alexander
2015-06-01
Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Fast quantum Monte Carlo on a GPU
Lutsyshyn, Y
2013-01-01
We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100 times faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
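A minimal serial sketch of the variational Monte Carlo sampling that such GPU codes parallelize over many walkers. The 1-D harmonic oscillator and Gaussian trial wavefunction below are illustrative assumptions, not the paper's helium-4 system:

```python
import math
import random

def vmc_energy(alpha, steps=200000, step_size=1.0, seed=2):
    """Variational Monte Carlo for the 1-D harmonic oscillator with
    trial wavefunction psi(x) = exp(-alpha * x**2). A GPU version would
    run many such walkers concurrently."""
    random.seed(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        # Metropolis acceptance based on the |psi|^2 ratio
        if random.random() < math.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy E_L = alpha + x^2 * (1/2 - 2*alpha^2)
        e_sum += alpha + x * x * (0.5 - 2 * alpha * alpha)
    return e_sum / steps

# alpha = 0.5 is the exact ground state, so E_L is constant at 0.5
print(round(vmc_energy(0.5), 6))  # → 0.5
```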
Advanced computers and Monte Carlo
International Nuclear Information System (INIS)
High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
Monte Carlo Simulation of River Meander Modelling
Posner, A. J.; Duan, J. G.
2010-12-01
This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures were formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution: because of the random nature of the bank erosion coefficient, planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. Quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.
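The stochastic treatment of the erosion coefficient can be illustrated with a toy ensemble. The lognormal distribution, its parameters, and the linear migration model below are hypothetical choices for illustration, not calibrated values from the study:

```python
import math
import random
import statistics

def meander_migration(n_runs=5000, years=50, seed=3):
    """Monte Carlo ensemble of cumulative bank migration, treating the
    bank erosion coefficient as a stochastic variable drawn anew for
    each realization."""
    random.seed(seed)
    samples = []
    for _ in range(n_runs):
        # hypothetical erosion-rate coefficient in m/yr, lognormal spread
        E = random.lognormvariate(math.log(0.02), 0.4)
        samples.append(E * years)  # cumulative migration in metres
    return statistics.median(samples), statistics.pstdev(samples)

med, sd = meander_migration()
# the spread quantifies the planform uncertainty a single deterministic
# run with one calibrated coefficient would hide
print(med > 0 and sd > 0)  # → True
```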
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
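As a concrete instance of the analog particle transport such texts begin with, the sketch below estimates transmission through a purely absorbing slab, a standard pedagogical example (geometry and cross section are illustrative) where the analytic answer exp(-Σt · t) is available for checking:

```python
import math
import random

def slab_transmission(sigma_t=1.0, thickness=2.0, n_particles=200000, seed=4):
    """Analog Monte Carlo transmission through a purely absorbing slab:
    a particle is transmitted if its first free flight exceeds the slab
    thickness. The exact answer is exp(-sigma_t * thickness)."""
    random.seed(seed)
    transmitted = 0
    for _ in range(n_particles):
        # sample the free-flight distance from the exponential distribution
        flight = -math.log(random.random()) / sigma_t
        if flight >= thickness:
            transmitted += 1
    return transmitted / n_particles

est = slab_transmission()
print(abs(est - math.exp(-2.0)) < 0.01)  # → True
```

Variance reduction techniques of the kind the book covers (implicit capture, splitting, importance sampling) would reduce the statistical error of this estimate for the same number of histories.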
Frontiers of quantum Monte Carlo workshop: preface
International Nuclear Information System (INIS)
The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics
Carlos "the Jackal" took his arrester to court / Margo Pajuste
Pajuste, Margo
2006-01-01
Also published in: Postimees : na russkom jazõke, 3 July, p. 11. The imprisoned notorious terrorist Carlos "the Jackal" has brought a court case against his one-time arrester. He accuses the former head of the French intelligence service of kidnapping.
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Monte Carlo simulation of granular fluids
Montanero, J. M.
2003-01-01
An overview of recent work on Monte Carlo simulations of a granular binary mixture is presented. The results are obtained numerically solving the Enskog equation for inelastic hard-spheres by means of an extension of the well-known direct Monte Carlo simulation (DSMC) method. The homogeneous cooling state and the stationary state reached using the Gaussian thermostat are considered. The temperature ratio, the fourth velocity moments and the velocity distribution functions are obtained for bot...
Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization
Shao, Jing
2015-10-27
Based on Monte Carlo simulation technology, we propose a hybrid routine that combines the reaction mechanism with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we show the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinetic quantities, such as instantaneous monomer conversion, molecular weight, and polydispersity, are readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without the "steady-state" hypothesis. We explore the mechanisms behind the variations in polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.
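A Gillespie-style kinetic Monte Carlo sketch of propagation-only monomer consumption illustrates the general approach. The rate constant, species counts, and the propagation-only mechanism are illustrative assumptions, not the paper's full reaction scheme:

```python
import math
import random

def kmc_time_to_half(n_monomer=10000, n_radicals=10, kp=1.0, seed=5):
    """Kinetic Monte Carlo of propagation-only monomer consumption:
    each event consumes one monomer at total propensity kp * R * M, and
    the waiting time between events is exponentially distributed.
    Returns the simulated time to 50% conversion."""
    random.seed(seed)
    t, M = 0.0, n_monomer
    while M > n_monomer // 2:
        propensity = kp * n_radicals * M
        t += -math.log(random.random()) / propensity  # time to next event
        M -= 1                                        # one propagation step
    return t

t_half = kmc_time_to_half()
# pseudo-first-order kinetics predict t_half ≈ ln(2)/(kp * R) ≈ 0.0693
print(abs(t_half - math.log(2) / 10) < 0.01)  # → True
```

Note that the rate constant enters the event propensities directly, so no steady-state assumption is needed to recover the kinetics, in the spirit of the abstract.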
Fermion-Dimer Scattering using Impurity Lattice Monte Carlo and the Adiabatic Projection Method
Elhatisari, Serdar
2014-01-01
We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the $s$-wave, $p$-wave, and $d$-wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.
Elhatisari, Serdar; Lee, Dean
2014-12-01
We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the s -wave, p -wave, and d -wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.
Carlos II: el centenario olvidado
Directory of Open Access Journals (Sweden)
Luis Antonio RIBOT GARCÍA
2009-12-01
Full Text Available Parting from an initial reflection on the phenomenon of commemorations, the author ponders the reasons why the third centenary of the death of Charles II will not be the subject of any commemoration. Whatever evaluations might be made of such celebrations, the truth is that, in this case, a commemoration might have brought the general public closer to one of the least known and worst valued monarchs in the history of Spain. More serious, however, is the fact that the shadow of ignorance and pejorative judgement extends over the entirety of his reign. Though scarce, research on this period shows a very different reality, in which decadence and the loss of international hegemony coexisted with important political initiatives and achievements, both in the monarchy's internal domain and in the international arena.
Giner, Emmanuel; Toulouse, Julien
2016-01-01
We explore the use in quantum Monte Carlo (QMC) of trial wave functions consisting of a Jastrow factor multiplied by a truncated configuration-interaction (CI) expansion in Slater determinants obtained from a CI perturbatively selected iteratively (CIPSI) calculation. In the CIPSI algorithm, the CI expansion is iteratively enlarged by selecting the best determinants using perturbation theory, which provides an optimal and automatic way of constructing truncated CI expansions approaching the full CI limit. We perform a systematic study of variational Monte Carlo (VMC) and fixed-node diffusion Monte Carlo (DMC) total energies of first-row atoms from B to Ne with different levels of optimization of the parameters (Jastrow parameters, coefficients of the determinants, and orbital parameters) in these trial wave functions. The results show that the reoptimization of the coefficients of the determinants in VMC (together with the Jastrow factor) leads to an important lowering of both VMC and DMC total energies, and ...
Monte Carlo methods for pricing financial options
Indian Academy of Sciences (India)
N Bolia; S Juneja
2005-04-01
Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed-form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the 'curse of dimensionality'. However, even Monte Carlo techniques can be quite slow as the problem size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte Carlo techniques proposed to tackle the difficult problem of pricing American options. These include: regression-based methods, random tree methods and stochastic mesh methods. Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India.
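Antithetic variates, one of the standard variance reduction techniques in this family, can be sketched for a European call under geometric Brownian motion (all parameters below are illustrative):

```python
import math
import random

def mc_call_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                  n_pairs=100000, seed=6):
    """European call priced by Monte Carlo under geometric Brownian
    motion, using antithetic variates (z and -z) to reduce variance."""
    random.seed(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_pairs):
        z = random.gauss(0.0, 1.0)
        for zz in (z, -z):  # antithetic pair shares one random draw
            ST = S0 * math.exp(drift + vol * zz)
            total += max(ST - K, 0.0)
    return disc * total / (2 * n_pairs)

price = mc_call_price()
# the Black-Scholes value for these parameters is about 10.45
print(abs(price - 10.45) < 0.2)  # → True
```

Because the payoff is monotone in z, the antithetic pair is negatively correlated and the estimator variance drops relative to independent sampling with the same number of paths.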
Monte Carlo Simulation as a Research Management Tool
Energy Technology Data Exchange (ETDEWEB)
Douglas, L. J.
1986-06-01
Monte Carlo simulation provides a research manager with a performance monitoring tool to supplement the standard schedule- and resource-based tools such as the Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM). The value of the Monte Carlo simulation in a research environment is that it 1) provides a method for ranking competing processes, 2) couples technical improvements to the process economics, and 3) provides a mechanism to determine the value of research dollars. In this paper the Monte Carlo simulation approach is developed and applied to the evaluation of three competing processes for converting lignocellulosic biomass to ethanol. The technique is shown to be useful for ranking the processes and illustrating the importance of the timeframe of the analysis on the decision process. The results show that acid hydrolysis processes have higher potential for near-term application (2-5 years), while the enzymatic hydrolysis approach has an equal chance to be competitive in the long term (beyond 10 years).
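A minimal sketch of the ranking idea: propagate uncertain yields and operating costs for two hypothetical processes through a unit-cost model and estimate the probability that one undercuts the other. All numbers are invented placeholders, not the paper's process data.

```python
import random

def simulate_unit_cost(yield_range, cost_range, n, rng):
    """Draw n Monte Carlo samples of product unit cost = cost / yield,
    treating yield and operating cost as uniform uncertainties."""
    samples = []
    for _ in range(n):
        y = rng.uniform(*yield_range)
        c = rng.uniform(*cost_range)
        samples.append(c / y)
    return samples

rng = random.Random(42)
# Hypothetical inputs: (yield in gal/ton, operating cost in $/ton).
acid = simulate_unit_cost((60.0, 80.0), (50.0, 70.0), 10_000, rng)
enzymatic = simulate_unit_cost((70.0, 110.0), (55.0, 95.0), 10_000, rng)

# Rank the processes by the probability that one undercuts the other.
p_acid_wins = sum(a < e for a, e in zip(acid, enzymatic)) / len(acid)
```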
International Nuclear Information System (INIS)
Prompt gamma ray neutron activation analysis methodologies were standardized using a reflected neutron beam and Compton suppressed γ-ray spectrometer to quantify boron from trace to major concentrations. Neutron self-shielding correction factors for higher boron contents (0.2-10 mg) in samples were obtained from the sensitivity of chlorine by irradiating KCl with and without boron. This method was validated by determining boron concentrations in six boron compounds and applied to three borosilicate glass samples with boron contents in the range of 1-10 mg. Low concentrations of boron (10-58 mg kg-1) were also determined in two samples and five reference materials from NIST and IAEA. (author)
Development of Monte Carlo depletion code MCDEP
Energy Technology Data Exchange (ETDEWEB)
Kim, K. S.; Kim, K. Y.; Lee, J. C.; Ji, S. K. [KAERI, Taejon (Korea, Republic of)
2003-07-01
Monte Carlo neutron transport calculation has been used to obtain reference solutions in reactor physics analysis. The typical and widely used Monte Carlo transport code is MCNP (Monte Carlo N-Particle Transport Code), developed at Los Alamos National Laboratory. The drawback of Monte Carlo transport codes is their lack of capabilities for depletion and temperature-dependent calculations. In this research we developed MCDEP (Monte Carlo Depletion Code Package), which adds a depletion capability to MCNP. This code package integrates MCNP with the depletion module of ORIGEN-2 using the matrix exponential method, and automates the MCNP and depletion calculations so that users need to prepare only the initial MCNP and MCDEP inputs. Depletion chains were simplified for computing-time efficiency and for the treatment of short-lived nuclides without cross section data. The results of MCDEP showed that the reactivity and pin power distributions for PWR fuel pins and assemblies are consistent with those of CASMO-3 and HELIOS.
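The matrix exponential step at the heart of such a depletion module can be sketched in pure Python for a two-nuclide decay chain. The chain and decay constants below are illustrative only; a production code works with a full burnup matrix and a more robust matrix exponential.

```python
import math

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_expm(a, terms=30):
    """Matrix exponential by truncated Taylor series; adequate for the
    small, well-scaled matrix used in this illustration."""
    n = len(a)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, a)                    # now holds a^k / (k-1)!
        term = [[x / k for x in row] for row in term]  # a^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Two-nuclide decay chain A -> B (illustrative decay constants, 1/s).
la, lb = 1e-3, 2e-4
dt = 1000.0
burnup = [[-la * dt, 0.0],
          [la * dt, -lb * dt]]
n0 = [1.0, 0.0]
e = mat_expm(burnup)
n_t = [sum(e[i][j] * n0[j] for j in range(2)) for i in range(2)]

# Bateman analytic solution for the same chain, as a cross-check.
na = math.exp(-la * dt)
nb = la / (lb - la) * (math.exp(-la * dt) - math.exp(-lb * dt))
```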
Monte Carlo simulation of large electron fields
Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto
2008-03-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.
Optical Monte Carlo modeling of a true portwine stain anatomy
Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.
1998-04-01
A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
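The kind of lattice hopping simulation described can be sketched in one dimension: a carrier performs Metropolis-accepted hops on sites with Gaussian energetic disorder under an applied field, and the drift distance drops as disorder grows. This generic model and its parameters are an illustration, not the authors' coarse-grained model.

```python
import math
import random

def drift_distance(disorder, field, n_steps=100_000, n_sites=500, kT=1.0, seed=5):
    """1-D Monte Carlo of carrier hopping: Metropolis-accepted hops on a
    ring of sites with Gaussian energetic disorder, tilted by a field."""
    rng = random.Random(seed)
    energies = [rng.gauss(0.0, disorder) for _ in range(n_sites)]
    site, x = 0, 0                      # x tracks the unwrapped displacement
    for _ in range(n_steps):
        step = rng.choice((-1, 1))
        new = (site + step) % n_sites
        d_e = energies[new] - energies[site] - field * step
        if d_e <= 0.0 or rng.random() < math.exp(-d_e / kT):
            site, x = new, x + step
    return x

x_ordered = drift_distance(disorder=0.0, field=0.1)     # crystalline limit
x_disordered = drift_distance(disorder=1.0, field=0.1)  # strong disorder
```

With the field on, the ordered lattice drifts steadily downfield, while energetic disorder traps the carrier and suppresses the drift.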
Calibration of the Top-Quark Monte Carlo Mass
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2016-04-01
We present a method to establish, experimentally, the relation between the top-quark mass mtMC as implemented in Monte Carlo generators and the Lagrangian mass parameter mt in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of mtMC and an observable sensitive to mt, which does not rely on any prior assumptions about the relation between mt and mtMC. The measured observable is independent of mtMC and can be used subsequently for a determination of mt. The analysis strategy is illustrated with examples for the extraction of mt from inclusive and differential cross sections for hadroproduction of top quarks.
Quantum Monte Carlo with Variable Spins
Melton, Cody A; Mitas, Lubos
2016-01-01
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$ row elements in close agreement with experiments.
Yours in Revolution: Retrofitting Carlos the Jackal
Directory of Open Access Journals (Sweden)
Samuel Thomas
2013-09-01
This paper explores the representation of ‘Carlos the Jackal’, the one-time ‘World’s Most Wanted Man’ and ‘International Face of Terror’ – primarily in cinema but also encompassing other forms of popular culture and aspects of Cold War policy-making. At the centre of the analysis is Olivier Assayas’s Carlos (2010), a transnational, five and a half hour film (first screened as a TV mini-series) about the life and times of the infamous militant. Concentrating on the various ways in which Assayas expresses a critical preoccupation with names and faces through complex formal composition, the project examines the play of abstraction and embodiment that emerges from the narrativisation of terrorist violence. Lastly, it seeks to engage with the hidden implications of Carlos in terms of the intertwined trajectories of formal experimentation and revolutionary politics.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
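The Population Monte Carlo idea (adaptive importance sampling) can be sketched with a single Gaussian proposal whose mean and variance are refit to the importance-weighted sample at each iteration. CosmoPMC itself uses mixture proposals over multivariate cosmological likelihoods, so this one-dimensional toy is only an illustration of the mechanism.

```python
import math
import random

def target_density(x):
    # Unnormalized target: a Gaussian with mean 3 and unit variance.
    return math.exp(-0.5 * (x - 3.0) ** 2)

def pmc(n_iter=20, n_samples=2000, seed=0):
    """Population Monte Carlo: at each iteration, draw a population from
    the current proposal, compute self-normalized importance weights,
    and refit the proposal (here a single Gaussian) to the weighted sample."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 5.0          # deliberately poor initial proposal
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        ws = []
        for x in xs:
            q = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma
            ws.append(target_density(x) / q)   # importance weight
        wsum = sum(ws)
        ws = [w / wsum for w in ws]            # self-normalize
        mu = sum(w * x for w, x in zip(ws, xs))
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs))
        sigma = max(math.sqrt(var), 1e-3)
    return mu, sigma

mu, sigma = pmc()
```

The proposal adapts toward the target, so `mu` approaches 3 and `sigma` approaches 1.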
SPQR: a Monte Carlo reactor kinetics code
International Nuclear Information System (INIS)
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations
Budak, Mustafa Guray; Karadag, Mustafa; Yücel, Haluk
2016-04-01
In this work, the effective resonance energy, the Ēr-value, for the 193Ir(n,γ)194Ir reaction was measured using the cadmium ratio method. A dual monitor (197Au-98Mo), which has convenient resonance properties, was employed for characterization of the irradiation sites. Analytical grade iridium oxide samples, diluted with CaCO3 to lower the neutron self-shielding effect and stacked in small cylindrical Teflon boxes, were irradiated in a thermalized neutron field of an 241Am-Be neutron source, once inside a 1 mm thick cylindrical Cd box and once without it. The activities produced in the samples by the 193Ir(n,γ)194Ir reaction were measured using a p-type HPGe detector γ-ray spectrometer with a 44.8% relative efficiency. The correction factors for thermal and epithermal neutron self-shielding (Gth, Gepi), true coincidence summing (Fcoi) and gamma-ray self-absorption (Fs) effects were determined with appropriate approaches and programs. Thus, the experimental Ēr-value was determined to be 2.65 ± 0.61 eV for the 193Ir target nuclide. The Q0 and FCd values used in the Ēr determination were taken from the most recent k0-NAA online database. Additionally, the Ēr-value was calculated theoretically from up-to-date resonance data obtained from the ENDF/B-VII library using two different approaches. Since there is no previously measured Ēr-value for the 193Ir isotope, the results are compared with the calculated ones given in the literature.
Geodesic Monte Carlo on Embedded Manifolds.
Byrne, Simon; Girolami, Mark
2013-12-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
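A minimal sketch of the geodesic proposal on the sphere S²: proposals move along great circles, the geodesics of the manifold, so every proposal stays exactly on the sphere. The von Mises-Fisher target and the step size below are illustrative choices, not the paper's examples.

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def tangent_direction(x, rng):
    """Random unit vector in the tangent plane at x on S^2."""
    z = [rng.gauss(0.0, 1.0) for _ in range(3)]
    dot = sum(a * b for a, b in zip(z, x))
    return normalize([a - dot * b for a, b in zip(z, x)])

def geodesic_mh(log_density, n_steps=20_000, step=0.5, seed=7):
    """Metropolis sampler whose proposals move along great circles,
    so the chain never leaves the manifold. The proposal (uniform
    tangent direction, Gaussian angle) is symmetric."""
    rng = random.Random(seed)
    x = [0.0, 0.0, 1.0]
    samples = []
    for _ in range(n_steps):
        v = tangent_direction(x, rng)
        theta = step * rng.gauss(0.0, 1.0)
        y = normalize([a * math.cos(theta) + b * math.sin(theta)
                       for a, b in zip(x, v)])
        if math.log(rng.random()) < log_density(y) - log_density(x):
            x = y
        samples.append(x)
    return samples

# von Mises-Fisher target concentrated around mu = (1, 0, 0).
kappa, mu = 4.0, [1.0, 0.0, 0.0]
log_vmf = lambda x: kappa * sum(a * b for a, b in zip(mu, x))
samples = geodesic_mh(log_vmf)
mean_x = sum(s[0] for s in samples) / len(samples)
```

Every sample has unit norm by construction, and the chain's mean first coordinate approaches the von Mises-Fisher mean resultant length, about 0.75 for κ = 4.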
Monte Carlo dose computation for IMRT optimization
Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.
2000-07-01
A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.
Monte Carlo simulation of granular fluids
Montanero, J M
2003-01-01
An overview of recent work on Monte Carlo simulations of a granular binary mixture is presented. The results are obtained by numerically solving the Enskog equation for inelastic hard spheres by means of an extension of the well-known direct simulation Monte Carlo (DSMC) method. The homogeneous cooling state and the stationary state reached using the Gaussian thermostat are considered. The temperature ratio, the fourth velocity moments and the velocity distribution functions are obtained for both cases. The shear viscosity characterizing the momentum transport in the thermostatted case is calculated as well. The simulation results are compared with analytical predictions, showing an excellent agreement.
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
Monte Carlo dose distributions for radiosurgery
Energy Technology Data Exchange (ETDEWEB)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)
2001-07-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetrical input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of both a planning system and Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width
Quantum Monte Carlo with very large multideterminant wavefunctions.
Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel
2016-07-01
An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants Ndet^σ (σ = ↑, ↓) with a non-negligible weight in the expansion is of order O(√Ndet). We show that a careful implementation of the calculation of the Ndet-dependent contributions can make this step negligible enough so that in practice the algorithm scales as the total number of unique spin-specific determinants, Ndet^↑ + Ndet^↓, over a wide range of total numbers of determinants (here, Ndet up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants with a computational increase of ∼400 compared to a single-determinant calculation are shown to be feasible. © 2016 Wiley Periodicals, Inc. PMID:27302337
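The basic single-column Sherman-Morrison update underlying the algorithm can be sketched as follows. The paper's improved implementation reduces the number of such substitutions and optimizes the scalar products; this shows only the core O(n²) update, with an invented 2×2 example.

```python
def sherman_morrison_col_update(a_inv, a, j, new_col):
    """Return the inverse of A after its column j is replaced by new_col.

    The column change is a rank-1 update A + u e_j^T with
    u = new_col - A[:, j], so by the Sherman-Morrison formula
      (A + u e_j^T)^-1 = A^-1 - (A^-1 u)(e_j^T A^-1) / (1 + (A^-1 u)_j),
    an O(n^2) update instead of an O(n^3) re-inversion.
    """
    n = len(a)
    u = [new_col[i] - a[i][j] for i in range(n)]
    ainv_u = [sum(a_inv[i][k] * u[k] for k in range(n)) for i in range(n)]
    denom = 1.0 + ainv_u[j]            # (A^-1 u)_j = e_j^T A^-1 u
    row_j = a_inv[j]                   # e_j^T A^-1
    return [[a_inv[i][k] - ainv_u[i] * row_j[k] / denom for k in range(n)]
            for i in range(n)]

# Replace column 0 of A = [[2, 0], [0, 2]] with [1, 1]:
a = [[2.0, 0.0], [0.0, 2.0]]
a_inv = [[0.5, 0.0], [0.0, 0.5]]
new_inv = sherman_morrison_col_update(a_inv, a, 0, [1.0, 1.0])
```

For this example the updated matrix is [[1, 0], [1, 2]], whose inverse is [[1, 0], [-0.5, 0.5]].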
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, the calculation of shielding barriers, or obtaining dose distributions around applicators. (Author)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Accelerating Hasenbusch's acceleration of hybrid Monte Carlo
International Nuclear Information System (INIS)
Hasenbusch has proposed splitting the pseudo-fermionic action into two parts, in order to speed-up Hybrid Monte Carlo simulations of QCD. We have tested a different splitting, also using clover-improved Wilson fermions. An additional speed-up between 5 and 20% over the original proposal was achieved in production runs. (orig.)
A comparison of Monte Carlo generators
Golan, Tomasz
2014-01-01
A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and $\pi^+$ two-dimensional energy vs cosine distribution.
Advances in Monte Carlo computer simulation
Swendsen, Robert H.
2011-03-01
Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamics systems, with an emphasis on optimization to obtain a maximum of useful information.
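The 1953 Metropolis method mentioned above fits in a few lines. The sketch below samples a standard normal with a symmetric uniform proposal; the target and step size are arbitrary illustrative choices.

```python
import math
import random

def metropolis(log_density, n_steps, step=1.0, x0=0.0, seed=3):
    """Classic Metropolis algorithm: propose a symmetric random step and
    accept it with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.uniform(-1.0, 1.0)
        if math.log(rng.random()) < log_density(y) - log_density(x):
            x = y
        chain.append(x)
    return chain

# Sample the standard normal via its log-density -x^2/2.
chain = metropolis(lambda x: -0.5 * x * x, 100_000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

The chain's sample mean and variance approach 0 and 1, the moments of the target.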
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying detailed balance.
Using CIPSI nodes in diffusion Monte Carlo
Caffarel, Michel; Giner, Emmanuel; Scemama, Anthony
2016-01-01
Several aspects of the recently proposed DMC-CIPSI approach, consisting in using selected Configuration Interaction (SCI) approaches such as CIPSI (Configuration Interaction using a Perturbative Selection done Iteratively) to build accurate nodes for diffusion Monte Carlo (DMC) calculations, are presented and discussed. The main ideas are illustrated with a number of calculations for diatomic molecules and for the benchmark G1 set.
A note on simultaneous Monte Carlo tests
DEFF Research Database (Denmark)
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and lower curve for some t in I. A construction of the acceptance region is proposed that complies to a...
Monte Carlo Renormalization Group: a review
International Nuclear Information System (INIS)
The logic and the methods of Monte Carlo Renormalization Group (MCRG) are reviewed. A status report of results for 4-dimensional lattice gauge theories derived using MCRG is presented. Existing methods for calculating the improved action are reviewed and evaluated. The Gupta-Cordery improved MCRG method is described and compared with the standard one. 71 refs., 8 figs
Juan Carlos D'Olivo: A portrait
Aguilar-Arévalo, Alexis A.
2013-06-01
This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search, or by direct experimental measurement to test the possibility of detecting ZPP non-invasively in retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in a neural retina layer or if part of light is hitting the vessel, ZPP fluorescence will be 10-200 times higher than background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP. The fluorescence from layers below in this second situation does not contribute to the signal. Therefore, the possibility that a device could potentially be built and detect ZPP fluorescence in retina looks very promising.
Spatial distribution of reflected gamma rays by Monte Carlo simulation
International Nuclear Information System (INIS)
In nuclear facilities, the reflection of gamma rays off walls and metals constitutes a radiation source of unknown origin. These reflected gamma rays must be estimated and determined. This study concerns gamma rays reflected by metal slabs. We evaluated the spatial distribution of the reflected gamma-ray spectra by using the Monte Carlo method. An appropriate estimator for the double differential albedo is used to determine the energy spectra and the angular distribution of gamma rays reflected by slabs of iron and aluminium. We took into account the principal interactions of gamma rays with matter: photoelectric absorption, coherent (Rayleigh) scattering, incoherent (Compton) scattering and pair creation. The Klein-Nishina differential cross section was used to select the direction and energy of scattered photons after each Compton scattering. The obtained spectra show peaks at 0.511 MeV for higher source energies. The results are in good agreement with those obtained by the TRIPOLI code [J.C. Nimal et al., TRIPOLI02: Programme de Monte Carlo Polycinétique à Trois dimensions, CEA Rapport, Commissariat à l'Energie Atomique].
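Selecting the scattered-photon direction and energy from the Klein-Nishina differential cross section, as the abstract describes, is commonly done by rejection sampling. The sketch below is one standard textbook scheme, not necessarily the authors' implementation.

```python
import math
import random

def sample_compton(alpha, rng):
    """Sample the scattering angle of a Compton event by rejection on the
    Klein-Nishina angular distribution,
      dsigma/dOmega ~ eps^2 (eps + 1/eps - sin^2 theta),
      eps = E'/E = 1 / (1 + alpha (1 - cos theta)),
    with alpha the photon energy in electron rest-mass units."""
    while True:
        cos_t = rng.uniform(-1.0, 1.0)
        eps = 1.0 / (1.0 + alpha * (1.0 - cos_t))
        sin2 = 1.0 - cos_t * cos_t
        f = eps * eps * (eps + 1.0 / eps - sin2)
        # f <= 2 everywhere (its value at theta = 0), so 2 bounds it.
        if rng.uniform(0.0, 2.0) < f:
            return cos_t, eps

rng = random.Random(11)
alpha = 1.0  # a 0.511 MeV photon
draws = [sample_compton(alpha, rng) for _ in range(20_000)]
mean_eps = sum(e for _, e in draws) / len(draws)
```

Kinematics bound the scattered energy fraction to 1/(1 + 2α) ≤ ε ≤ 1, i.e. [1/3, 1] here, and the mean ε sits well inside that range.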
Spatial distribution of reflected gamma rays by Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jehouani, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)], E-mail: jehouani@ucam.ac.ma; Merzouki, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco); Remote Sensing and Geomatics of the Environment Laboratory, Ottawa-Carleton Geoscience Centre, Marion Hall, 140 Louis Pasteur, Ottawa, ON, KIN 6N5 (Canada); Boutadghart, F.; Ghassoun, J. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)
2007-10-15
In nuclear facilities, the reflection of gamma rays off walls and metal structures constitutes a source of radiation of poorly characterized origin. These reflected gamma rays must be estimated and characterized. This study concerns gamma rays reflected from metal slabs. We evaluated the spatial distribution of the reflected gamma-ray spectra by using the Monte Carlo method. An appropriate estimator for the double differential albedo is used to determine the energy spectra and the angular distribution of gamma rays reflected by slabs of iron and aluminium. We took into account the principal interactions of gamma rays with matter: the photoelectric effect, coherent (Rayleigh) scattering, incoherent (Compton) scattering and pair creation. The Klein-Nishina differential cross section was used to select the direction and energy of scattered photons after each Compton scattering. The obtained spectra show peaks at 0.511 MeV for higher source energies. The results are in good agreement with those obtained by the TRIPOLI code [J.C. Nimal et al., TRIPOLI02: Programme de Monte Carlo Polycinétique à Trois Dimensions, CEA Rapport, Commissariat a l'Energie Atomique].
Gold nanoparticle DNA damage in radiotherapy: A Monte Carlo study
Directory of Open Access Journals (Sweden)
Chun He
2016-07-01
This study investigated the DNA damage due to the dose enhancement of using gold nanoparticles (GNPs) as a radiation sensitizer in radiotherapy. Nanodosimetry of a photon-irradiated GNP was performed with Monte Carlo simulations using Geant4-DNA (ver. 10.2) at the nanometer scale. In the simulation model, GNP spheres (with diameters of 30, 50, and 100 nm) and a DNA model were placed in a water cube (1 µm³). The GNPs were irradiated by photon beams with varying energies (50, 100, and 150 keV), which produced secondary electrons, enhancing the dose to the DNA. To investigate the dose enhancement effect at the DNA level, energy deposition to the DNA with and without the GNP was determined in simulations for calculation of the dose enhancement ratio (DER). The distance between the GNP and the DNA molecule was varied to determine its effect on the DER. Monte Carlo results were collected for three variables: GNP size, distance between the GNP and DNA molecule, and photon beam energy. The DER was found to increase with the size of the GNP and decrease with the distance between the GNP and DNA molecule. The largest DER was found to be 3.7 when a GNP (100 nm diameter) was irradiated by a 150 keV photon beam set at 30 nm from the DNA molecule. We conclude that there is significant dependency of the DER on GNP size, distance to the DNA and photon energy, and we have simulated those relationships.
How is history written? / Carlo Ginzburg ; interviewed by Marek Tamm
Ginzburg, Carlo
2007-01-01
An overview of the works of C. Ginzburg, professor of European cultures in Pisa. Previously published as: Signs, traces and evidence: an interview with Carlo Ginzburg // Ginzburg, Carlo. Juust ja vaglad [The Cheese and the Worms]. - Tallinn, 2000. - pp. 262-271
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
Newman, Isadore; And Others
1979-01-01
A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
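The kind of experiment the abstract describes can be sketched for one of the classic shrinkage formulas. The code below uses Wherry's adjusted R² (the familiar "adjusted R-squared"; the paper compares five formulas, which are not named in the abstract) in a null-model Monte Carlo where the population R² is exactly zero, so the upward bias of the raw R² and the correction are both visible.

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination from an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def wherry(r2, n, p):
    """Wherry's shrinkage formula for the population R-squared."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(0)
n, p, trials = 30, 5, 2000
raw, shrunk = [], []
for _ in range(trials):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)       # true population R^2 is zero
    r2 = r_squared(X, y)
    raw.append(r2)
    shrunk.append(wherry(r2, n, p))

# With no true relationship, raw R^2 is biased upward (roughly p/(n-1));
# Wherry's estimate should sit much closer to the population value of zero.
print(np.mean(raw), np.mean(shrunk))
```

Varying n, the number of predictors, and the correlation structure of X is exactly how such studies map out where each formula over- or under-shrinks.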
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.
Jeraj, Robert; Keall, Paul
2000-12-01
The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax ), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
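The noise convergence error described above is a winner's-curse effect: an optimizer fed noisy dose estimates preferentially selects configurations whose noise happens to look favourable. The toy below (my own illustrative construction, not the paper's simulated-annealing setup) makes all candidate plans truly identical and shows that the selected "best" plan still looks better under the noisy evaluation than under a noise-free recalculation.

```python
import random

random.seed(0)

# Toy model: 50 candidate beamlet-weight configurations that all deliver
# exactly the prescribed tumour dose (true score 0 = perfect uniformity).
n_plans, sigma = 50, 0.03  # 3% 1-sigma noise on each dose estimate
true_scores = [0.0] * n_plans

trials, bias = 2000, 0.0
for _ in range(trials):
    noisy = [t + random.gauss(0.0, sigma) for t in true_scores]
    best = max(range(n_plans), key=lambda i: noisy[i])  # pick the best-looking plan
    # Noise-free re-evaluation of the selected plan:
    bias += noisy[best] - true_scores[best]
bias /= trials

# The selected plan appears better than it really is by roughly
# sigma * E[max of n standard normals], even though all plans are equal.
print(f"apparent improvement that vanishes on noise-free recalculation: {bias:.3f}")
```

This is why an apparently converged optimization at 3% noise can hide a several-percent discrepancy when recalculated noise-free.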
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
Monte Carlo Renormalization Group study for SU(3) lattice gauge theory
International Nuclear Information System (INIS)
A special Monte Carlo Renormalization Group method, the so-called ratio method is discussed. Possible systematic error of the method is investigated, and a systematic improvement is proposed based on perturbation theory. The method is applied to determine the β-function of 4 dimensional SU(3) pure gauge theory
Monte Carlo calculation of received dose from ingestion and inhalation of natural uranium
International Nuclear Information System (INIS)
For the purpose of this study eighty samples are taken from the area Bela Crkva and Vrsac. The activity of radionuclide in the soil is determined by gamma- ray spectrometry. Monte Carlo method is used to calculate effective dose received by population resulting from the inhalation and ingestion of natural uranium. The estimated doses were compared with the legally prescribed levels. (author)
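The dose calculation the abstract outlines (measured activity propagated through intake to a committed effective dose) can be sketched as a simple uncertainty-propagation Monte Carlo. Every numerical input below is an assumed placeholder chosen only to be order-of-magnitude plausible, not a value from the study; the dose coefficient is merely of the order of published ICRP ingestion coefficients for uranium.

```python
import math
import random

random.seed(0)

# All parameter values below are illustrative assumptions, not the paper's data.
DOSE_COEFF = 4.5e-8     # Sv per Bq ingested (assumed, order of ICRP values)
N = 100_000

doses = []
for _ in range(N):
    activity = random.lognormvariate(math.log(30.0), 0.4)  # Bq/kg in soil (assumed)
    transfer = random.uniform(5e-4, 2e-3)                  # soil-to-diet factor (assumed)
    intake = 500.0                                         # kg of diet per year (assumed)
    doses.append(activity * transfer * intake * DOSE_COEFF)

mean_dose = sum(doses) / N
print(f"mean annual ingestion dose: {mean_dose * 1e6:.3f} microSv")
```

The sampled distribution of doses, rather than a single point value, is what gets compared against the legally prescribed levels.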
International Nuclear Information System (INIS)
It is likely that the quark confinement mechanism at large N should be understood purely in terms of high-order planar Feynman diagrams; in particular, the center of the gauge group can play no role whatever. The author considers the diagrammatic expansion of loop integrals in planar wrong-sign φ⁴ theory. It is shown that the sum of all fishnet diagrams contributing to the loop can be expressed as the grand partition function of an unusual gas, whose dynamics can be simulated on a computer. The 'molecules' of this gas correspond to vertices of the position-space diagrams, the molecular interactions are determined by the propagators, and the coupling constant plays the role of a chemical potential. The most remarkable feature of this gas is the existence of a critical coupling g_c, where string formation takes place. As g → g_c the fishnet vertices tend to cluster around the minimal surface of the loop, thereby forming a string. The role of asymptotic freedom in bringing the coupling to the critical point, and the connection to the Polyakov string, are also discussed. In the Hamiltonian formulation, a very straightforward explanation of quark confinement is presented. (Auth.)
Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
Institute of Scientific and Technical Information of China (English)
XI Jia-mi; YANG Geng-she
2008-01-01
This paper discusses the advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the correlations among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
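The core of any such Monte-Carlo reliability computation is estimating a failure probability P_f = P(g(X) ≤ 0) for a limit-state function g with correlated random parameters. The sketch below uses a toy linear limit state (resistance minus load) with correlated Gaussian inputs, so the Monte Carlo answer can be checked against the closed-form reliability index; the parameter values are illustrative, not taken from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Toy limit state g = R - S: surrounding-rock resistance R minus load S.
mu = np.array([10.0, 6.0])          # means of (R, S) (assumed)
sd = np.array([1.5, 1.0])           # standard deviations (assumed)
corr = 0.3                          # correlation between R and S (assumed)
cov = np.array([[sd[0]**2, corr * sd[0] * sd[1]],
                [corr * sd[0] * sd[1], sd[1]**2]])

N = 200_000
samples = rng.multivariate_normal(mu, cov, size=N)
g = samples[:, 0] - samples[:, 1]          # limit-state function
pf = np.mean(g <= 0.0)                     # Monte Carlo failure probability

# Closed form for the linear-Gaussian case, for comparison:
beta = (mu[0] - mu[1]) / math.sqrt(sd[0]**2 + sd[1]**2 - 2 * corr * sd[0] * sd[1])
pf_analytic = 0.5 * math.erfc(beta / math.sqrt(2.0))

print(f"Monte Carlo P_f = {pf:.4f}, analytic = {pf_analytic:.4f}, beta = {beta:.2f}")
```

For nonlinear limit states or non-Gaussian, correlated soil/rock parameters, only the sampling step changes; the estimator is the same indicator average.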
Top Quark Mass Calibration for Monte Carlo Event Generators
Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W
2016-01-01
The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV, and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.
CORPORATE VALUATION USING TWO-DIMENSIONAL MONTE CARLO SIMULATION
Directory of Open Access Journals (Sweden)
Toth Reka
2010-12-01
In this paper, we present a corporate valuation model. The model combines several valuation methods in order to get more accurate results. To determine the corporate asset value we have used the Gordon-like two-stage asset valuation model based on the calculation of the free cash flow to the firm. We have used the free cash flow to the firm to determine the corporate market value, which was calculated using the Black-Scholes option pricing model within a two-dimensional Monte Carlo simulation. The combined model and the use of the two-dimensional simulation provide a better opportunity for corporate value estimation.
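The option-pricing step of such a model rests on viewing equity as a call option on firm assets with the face value of debt as the strike. The sketch below (illustrative numbers, not the paper's case study, and one-dimensional rather than the paper's two-dimensional simulation) checks a Monte Carlo valuation of that call against the closed-form Black-Scholes value.

```python
import math
import random

def black_scholes_call(S0, K, r, sigma, T):
    """Black-Scholes value of a European call; in the structural view,
    equity is a call on firm assets S0 with debt face value K as strike."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=7):
    """Monte Carlo value of the same call: simulate the terminal asset
    value under geometric Brownian motion and discount the mean payoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

# Illustrative firm: asset value 100, debt face value 80, 5-year horizon.
bs = black_scholes_call(100.0, 80.0, 0.04, 0.25, 5.0)
mc = mc_call(100.0, 80.0, 0.04, 0.25, 5.0)
print(f"analytic {bs:.2f} vs Monte Carlo {mc:.2f}")
```

A two-dimensional version would draw correlated paths for two uncertain inputs (e.g. asset value and cash-flow growth) instead of the single GBM factor here.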
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Energy Technology Data Exchange (ETDEWEB)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); Mueller, Jonathon W. [United States Air Force, Keesler Air Force Base, Biloxi, Mississippi 39534 (United States); Cody, Dianna D. [University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); DeMarco, John J. [Departments of Biomedical Physics and Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States)
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
PREFACE: First European Workshop on Monte Carlo Treatment Planning
Reynaert, Nick
2007-07-01
The "First European Workshop on Monte Carlo treatment planning" was an initiative of the European working group on Monte Carlo treatment planning (EWG-MCTP). It was organised at Ghent University (Belgium) on 22-25 October 2006. The meeting was very successful and was attended by 150 participants. The impressive list of invited speakers and the scientific contributions (posters and oral presentations) led to a very interesting program that was well appreciated by all attendees. In addition, the presence of seven vendors of commercial MCTP software systems provided serious added value to the workshop. For each vendor, a representative gave a presentation in a dedicated session, explaining the current status of their system. It is clear that, for "traditional" radiotherapy applications (using photon or electron beams), Monte Carlo dose calculations have become the state of the art and are being introduced into almost all commercial treatment planning systems. Invited lectures illustrated that scientific challenges are currently associated with 4D applications (e.g. respiratory motion) and the introduction of MC dose calculations in inverse planning. It was striking that the Monte Carlo technique is also becoming very important in more novel treatment modalities such as BNCT, hadron therapy, stereotactic radiosurgery, Tomotherapy, etc. This emphasizes the continuously growing interest in MCTP. The people who attended the dosimetry session will certainly remember the high-level discussion on the determination of correction factors for different ion chambers used in small fields. The following proceedings will certainly confirm the high scientific level of the meeting. I would like to thank the members of the local organizing committee for all the hard work done before, during and after this meeting. The organisation of such an event is not a trivial task and it would not have been possible without the help of all my colleagues. I would also like to thank
Monte Carlo simulation experiments on box-type radon dosimeter
Energy Technology Data Exchange (ETDEWEB)
Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-11
Epidemiological studies show that inhalation of radon gas ({sup 222}Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserving environments and underground dwellers. It is, therefore, of paramount importance to measure {sup 222}Rn concentrations (Bq/m{sup 3}) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which in turn determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η{sub int}) and alpha hit efficiency (η{sub hit}). The η{sub int} depends only on the dimensions of the dosimeter, while η{sub hit} depends both on the dimensions of the dosimeter and on the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful in understanding the intricate track registration mechanisms in the box-type dosimeter. This paper
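A ray-hitting calculation of the kind the abstract calls RAHI can be sketched geometrically: emit alphas at uniform random points in the box with isotropic directions, and count those whose straight-line path reaches the detector face within the alpha range. This is my own minimal reading of a "ray hitting" estimator, not the authors' published algorithm; box dimensions and ranges below are illustrative.

```python
import math
import random

def hit_efficiency(L, W, H, alpha_range, n=100_000, seed=3):
    """Fraction of alphas emitted in an L x W x H box whose straight-line
    path reaches the CR-39 sheet on the bottom face (z = 0) within range."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = rng.uniform(0, L), rng.uniform(0, W), rng.uniform(0, H)
        cos_t = rng.uniform(-1.0, 1.0)          # isotropic emission direction
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                            # moving away from the detector
        t = z / -dz                             # path length to the z = 0 plane
        if t > alpha_range:
            continue                            # alpha stops before the detector
        xh, yh = x + t * dx, y + t * dy
        if 0.0 <= xh <= L and 0.0 <= yh <= W:
            hits += 1
    return hits / n

# Illustrative 6 x 6 x 4 cm box; alpha ranges in air (cm) are assumed values.
for rng_cm in (3.0, 5.0, 8.0):
    print(rng_cm, hit_efficiency(6.0, 6.0, 4.0, rng_cm))
```

The efficiency grows with the alpha range and saturates once the range exceeds the box diagonal, matching the paper's conclusion about keeping the diagonal smaller than the range.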
Calculation of effective delayed neutron fraction with modified library of Monte Carlo code
International Nuclear Information System (INIS)
Highlights: ► We propose a new Monte Carlo method to calculate the effective delayed neutron fraction by changing the library. ► We study the stability of our method: when the numbers of particles and cycles are sufficiently large, the stability is very good. ► The final result is determined so as to minimize the deviation. ► We verify our method on several benchmarks, and the results are very good. - Abstract: A new Monte Carlo method is proposed to calculate the effective delayed neutron fraction βeff. Based on perturbation theory, βeff is calculated with a modified library of the Monte Carlo code. To verify the proposed method, calculations are performed on several benchmarks. The error of the method is analyzed and a way to reduce it is proposed. The results are in good agreement with the reference data
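For orientation, the relation that library-modification methods typically exploit is the standard prompt-k identity (this is the common textbook estimator, not necessarily the exact perturbation estimator of the paper): run the eigenvalue calculation once with the full fission spectrum and once with delayed neutrons removed from the data, then compare the two multiplication factors.

```latex
% k   : effective multiplication factor with all (prompt + delayed) neutrons
% k_p : multiplication factor with delayed-neutron production removed
\beta_{\mathrm{eff}} \;\approx\; \frac{k - k_p}{k} \;=\; 1 - \frac{k_p}{k}
```

Since βeff is of order 10⁻³ to 10⁻², both eigenvalues must be converged to very small statistical uncertainty, which is why the stability study in the highlights matters.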
Monte Carlo simulations of fluid vesicles
Sreeja, K. K.; Ipsen, John H.; Kumar, P. B. Sunil
2015-07-01
Lipid vesicles are closed two dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induce anisotropic directional curvatures. Methods to explore the steady state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations.
Hybrid Monte Carlo with Chaotic Mixing
Kadakia, Nirag
2016-01-01
We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
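For contrast with the chaotic-mixing variant, the baseline algorithm it modifies is standard HMC: Gaussian momenta, leapfrog integration of Hamilton's equations, and a Metropolis accept/reject on the energy error. The sketch below is the plain algorithm for a multivariate normal target (the proposal's test case family), not the chaotic-trajectory method itself.

```python
import numpy as np

def hmc_gaussian(cov, n_samples=4000, eps=0.15, n_leap=20, seed=0):
    """Standard HMC for a zero-mean multivariate normal target.

    U(q) = 0.5 q^T cov^{-1} q with unit-mass Gaussian momenta; this is
    the traditional algorithm, not the chaotic-mixing variant.
    """
    rng = np.random.default_rng(seed)
    prec = np.linalg.inv(cov)
    d = cov.shape[0]
    q = np.zeros(d)
    out = []
    for _ in range(n_samples):
        p = rng.standard_normal(d)               # fresh momentum each step
        q_new, p_new = q.copy(), p.copy()
        # Leapfrog integration: half kick, (n_leap - 1) drift/kick pairs,
        # final drift and half kick.
        p_new -= 0.5 * eps * prec @ q_new
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * prec @ q_new
        q_new += eps * p_new
        p_new -= 0.5 * eps * prec @ q_new
        # Metropolis test on the total-energy error of the trajectory.
        h_old = 0.5 * q @ prec @ q + 0.5 * p @ p
        h_new = 0.5 * q_new @ prec @ q_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            q = q_new
        out.append(q.copy())
    return np.array(out)

cov = np.array([[1.0, 0.8], [0.8, 1.0]])
samples = hmc_gaussian(cov)
print(np.cov(samples.T))  # should approach the target covariance
```

The momentum refresh in the first line of the loop is exactly the "freedom of choice of the HMC momentum distribution" that the proposed method exploits.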
PHOTOS Monte Carlo and its theoretical accuracy
Was, Z; Nanava, G
2008-01-01
Because of the properties of QED, the bremsstrahlung corrections to decays of particles or resonances can be calculated, with good precision, separately from other effects. Thanks to the widespread use of event records, such calculations can be embodied in a separate module of Monte Carlo simulation chains, as used in today's high-energy experiments. The PHOTOS Monte Carlo program has been used for this purpose for nearly 20 years now. In the following talk we review the main ideas and constraints which shaped the present version of the program and enabled its widespread use. Finally, we underline the importance of aspects related to the reliability of program results: event record contents and the implementation of channel-specific matrix elements.
Composite biasing in Monte Carlo radiative transfer
Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf
2016-01-01
Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
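The weight-control idea behind composite biasing can be shown in one dimension: instead of sampling from a fully biased distribution b (whose weights p/b are unbounded), sample from the mixture q = (1 - w)p + w b, which caps every importance weight at 1/(1 - w). The example below is a generic illustration of this mixture principle, not the radiative-transfer implementation of the paper.

```python
import math
import random

# Target: E_p[f(X)] with X ~ Exp(1) and f(x) = x^2 (exact answer: 2).
# Pure tail-biasing gives unbounded weights; the composite proposal
#   q = (1 - W) * p + W * b
# bounds every weight p/q by 1/(1 - W).
W = 0.5                                       # mixing weight of the biased part

def p_pdf(x):  return math.exp(-x)                 # original distribution, Exp(1)
def b_pdf(x):  return 0.2 * math.exp(-0.2 * x)     # tail-biased part, Exp(rate 0.2)
def q_pdf(x):  return (1.0 - W) * p_pdf(x) + W * b_pdf(x)

def sample_q(rng):
    """Sample the mixture: pick a component, then draw from it."""
    lam = 1.0 if rng.random() < 1.0 - W else 0.2
    return rng.expovariate(lam)

N, rng = 200_000, random.Random(5)
total, max_weight = 0.0, 0.0
for _ in range(N):
    x = sample_q(rng)
    w = p_pdf(x) / q_pdf(x)                   # importance weight, bounded by 1/(1-W)
    max_weight = max(max_weight, w)
    total += w * x * x
print(total / N, "max weight:", max_weight)
```

The same construction extends directly to emitting photon packages from multiple source components: sampling from the composite density keeps every package weight within a known bound.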
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)
EU Commissioner Carlos Moedas visits SESAME
CERN Bulletin
2015-01-01
The European Commissioner for research, science and innovation, Carlos Moedas, visited the SESAME laboratory in Jordan on Monday 13 April. When it begins operation in 2016, SESAME, a synchrotron light source, will be the Middle East’s first major international science centre, carrying out experiments ranging from the physical sciences to environmental science and archaeology. CERN Director-General Rolf Heuer (left) and European Commissioner Carlos Moedas with the model SESAME magnet. © European Union, 2015. Commissioner Moedas was accompanied by a European Commission delegation led by Robert-Jan Smits, Director-General of DG Research and Innovation, as well as Rolf Heuer, CERN Director-General, Jean-Pierre Koutchouk, coordinator of the CERN-EC Support for SESAME Magnets (CESSAMag) project and Princess Sumaya bint El Hassan of Jordan, a leading advocate of science in the region. They toured the SESAME facility together with SESAME Director, Khaled Tou...
Monte Carlo Shell Model Mass Predictions
International Nuclear Information System (INIS)
The nuclear mass calculation is discussed in terms of large-scale shell model calculations. First, the development and limitations of the conventional shell model calculations are mentioned. In order to overcome the limitations, the Quantum Monte Carlo Diagonalization (QMCD) method has been proposed. The basic formulation and features of the QMCD method are presented as well as its application to the nuclear shell model, referred to as Monte Carlo Shell Model (MCSM). The MCSM provides us with a breakthrough in shell model calculations: the structure of low-lying states can be studied with realistic interactions for a nearly unlimited variety of nuclei. Thus, the MCSM can contribute significantly to the study of nuclear masses. An application to N∼20 unstable nuclei far from the β-stability line is mentioned
Status of Monte Carlo at Los Alamos
Energy Technology Data Exchange (ETDEWEB)
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
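The telescoping-sum idea described above can be sketched on a toy problem: estimate E[S_T] for geometric Brownian motion with Euler-Maruyama discretizations of increasing resolution, where each level's correction term uses coupled fine/coarse paths driven by the same noise and the sample count is reduced at the expensive fine levels. This is a generic MLMC illustration, not the homogenization setting of the paper (where the levels are RVE sizes or coarse-grid meshes).

```python
import math
import random

def euler_gbm_pair(S0, r, sigma, T, n_fine, rng):
    """One coupled pair of Euler-Maruyama GBM paths: a fine grid with
    n_fine steps and a coarse grid with n_fine/2 steps, same noise."""
    hf = T / n_fine
    sf = sc = S0
    for _ in range(n_fine // 2):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        sf += r * sf * hf + sigma * sf * math.sqrt(hf) * z1
        sf += r * sf * hf + sigma * sf * math.sqrt(hf) * z2
        sc += r * sc * 2 * hf + sigma * sc * math.sqrt(2 * hf) * (z1 + z2) / math.sqrt(2)
    return sf, sc

def mlmc_mean(S0=1.0, r=0.05, sigma=0.2, T=1.0, L=4, N0=200_000, seed=11):
    """Telescoping MLMC estimate of E[S_T]; level l uses 2^(l+1) steps,
    with the sample count halved at each finer (more expensive) level."""
    rng = random.Random(seed)
    # Level 0: plain estimate on the coarsest grid (2 steps).
    acc = 0.0
    for _ in range(N0):
        sf, _ = euler_gbm_pair(S0, r, sigma, T, 2, rng)
        acc += sf
    est = acc / N0
    # Levels 1..L: corrections E[fine - coarse] with fewer samples each,
    # exploiting the small variance of the coupled difference.
    for level in range(1, L + 1):
        n = max(N0 >> level, 1000)
        acc = 0.0
        for _ in range(n):
            sf, sc = euler_gbm_pair(S0, r, sigma, T, 2 ** (level + 1), rng)
            acc += sf - sc
        est += acc / n
    return est

print(mlmc_mean(), "exact:", math.exp(0.05))
```

The speed-up comes from exactly the trade the abstract describes: many cheap coarse samples, few expensive fine ones, with the level-wise variances deciding the allocation.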
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time
Monte Carlo study of real time dynamics
Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C
2016-01-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.
Monte Carlo analysis of Musashi TRIGA mark II reactor core
Energy Technology Data Exchange (ETDEWEB)
Matsumoto, Tetsuo [Atomic Energy Research Laboratory, Musashi Institute of Technology, Kawasaki, Kanagawa (Japan)
1999-08-01
The analysis of the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. Effective multiplication factors (k{sub eff}) for several fuel-loading patterns, including the initial core criticality experiment, the fuel element and control rod reactivity worths, as well as the neutron flux measurements, were used to validate the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated k{sub eff} overestimated the experimental data by about 1.0%{delta}k/k for both the initial core and the several fuel-loading arrangements. The calculated reactivity worths of the control rods and fuel elements agree well with the measured ones within the uncertainties. The calculated neutron flux distributions were consistent with the experimental ones, which were measured by activation methods at the sample irradiation tubes. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate to simulate the Musashi TRIGA-II reactor core. (author)
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
Directory of Open Access Journals (Sweden)
Zhang, Ying; Feng, Yuanming; Ming, Xin; Deng, Jun
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensities of the various beam energies, based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413
Monte Carlo simulations and dosimetric studies of an irradiation facility
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosimeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user, under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
A Monte Carlo simulation of ion transport at finite temperatures
International Nuclear Information System (INIS)
We have developed a Monte Carlo simulation for ion transport in hot background gases, which is an alternative way of solving the corresponding Boltzmann equation that determines the distribution function of ions. We consider the limit of low ion densities, when the distribution function of the background gas remains unchanged by collisions with ions. Special attention has been paid to properly treating the thermal motion of the host gas particles and their influence on the ions, which is very important at low electric fields, when the mean ion energy is comparable to the thermal energy of the host gas. We found the conditional probability distribution of gas velocities that corresponds to an ion of a specific velocity colliding with a gas particle. We have also derived exact analytical formulae for piecewise calculation of the collision frequency integrals. We address the cases when the background gas is monocomponent and when it is a mixture of different gases. The techniques described here are required for Monte Carlo simulations of ion transport and for hybrid models of non-equilibrium plasmas. The range of energies where it is necessary to apply the technique has been defined. The results we obtained are in excellent agreement with the existing ones obtained by complementary methods. Having verified our algorithm, we were able to produce calculations for Ar+ ions in Ar and propose them as a new benchmark for thermal effects. The developed method is widely applicable for solving the Boltzmann equation that appears in many different contexts in physics. (paper)
Monte Carlo simulation of the TRIGA mark 2 criticality experiment
International Nuclear Information System (INIS)
The criticality analysis of the TRIGA-2 benchmark experiment at the Musashi Institute of Technology Research Reactor (MuITR, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. To minimize errors due to an inexact geometry model, all fresh fuel and control rods as well as the vicinity of the core were precisely modeled. Core multiplication factors (Keff) in the initial core critical experiment and in the excess reactivity adjustment for the several fuel-loading patterns, as well as the fuel element reactivity worth distributions, were used to validate the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated Keff overestimated the experimental data by 1.0% for both the initial core and the several fuel-loading arrangements (fuel or graphite element added only to the outer ring), but the discrepancy increased to 1.8% for some fuel-loading patterns (graphite element positioned in the inner ring). The comparison of the fuel element worth distributions showed the same tendency. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate to simulate the criticality of the TRIGA-2 reactor. (author)
The Lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)
Autocorrelations in hybrid Monte Carlo simulations
International Nuclear Information System (INIS)
Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be most prominent in the topological charge; however, all observables are affected to varying degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high-statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
Simulated Annealing using Hybrid Monte Carlo
Salazar, Rafael; Toral, Raúl
1997-01-01
We propose a variant of the simulated annealing method for optimization in the multivariate analysis of differentiable functions. The method uses global updates via the generalized hybrid Monte Carlo algorithm for the proposal of new configurations. We show how this choice can improve upon the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective searching scheme and a faster annealing schedule.
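The idea of annealing with hybrid (Hamiltonian) Monte Carlo updates can be illustrated with a toy sketch. The objective function, leapfrog parameters, and annealing schedule below are invented for the example and are not the authors' settings; the sampler targets exp(-beta*f) and the inverse temperature beta is raised in stages.

```python
import math
import random

def f(x):                 # toy objective: minimum 0 at x = (1, ..., 1)
    return sum((xi - 1.0) ** 2 for xi in x)

def grad_f(x):
    return [2.0 * (xi - 1.0) for xi in x]

def hmc_update(x, beta, rng, eps=0.05, n_leap=20):
    # One hybrid Monte Carlo update targeting exp(-beta * f): draw momenta,
    # integrate Hamiltonian dynamics with leapfrog, accept/reject globally.
    p = [rng.gauss(0.0, 1.0) for _ in x]
    h_old = beta * f(x) + 0.5 * sum(pi * pi for pi in p)
    xq, pq = list(x), list(p)
    pq = [pi - 0.5 * eps * beta * gi for pi, gi in zip(pq, grad_f(xq))]  # half kick
    for i in range(n_leap):
        xq = [xi + eps * pi for xi, pi in zip(xq, pq)]                   # drift
        half = 0.5 if i == n_leap - 1 else 1.0                           # last kick is half
        pq = [pi - half * eps * beta * gi for pi, gi in zip(pq, grad_f(xq))]
    h_new = beta * f(xq) + 0.5 * sum(pi * pi for pi in pq)
    return xq if math.log(rng.random()) < h_old - h_new else x

def anneal(dim=5, betas=(1.0, 3.0, 10.0, 30.0, 100.0), sweeps=200, seed=1):
    # Simulated annealing: run the HMC chain while raising beta stage by stage.
    rng = random.Random(seed)
    x = [rng.uniform(-3.0, 3.0) for _ in range(dim)]
    for beta in betas:
        for _ in range(sweeps):
            x = hmc_update(x, beta, rng)
    return x

x_min = anneal()
print(round(f(x_min), 1))
```

Because each HMC proposal moves all coordinates at once along an energy-conserving trajectory, acceptance stays high even in many dimensions, which is the advantage over coordinate-wise annealing moves that the abstract points to.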
A Monte Carlo for BFKL Physics
Orr, Lynne H.; Stirling, W. J.
2000-01-01
Virtual photon scattering in e^+e^- collisions can result in events with the electron-positron pair at large rapidity separation with hadronic activity in between. The BFKL equation resums large logarithms that dominate the cross section for this process. We report here on a Monte Carlo method for solving the BFKL equation that allows kinematic constraints to be taken into account. The application to e^+e^- collisions is in progress.
Monte Carlo Simulations of Star Clusters
Giersz, M
2000-01-01
A revision of Stodółkiewicz's Monte Carlo code is used to simulate the evolution of large star clusters. A survey of the evolution of multi-mass N-body systems influenced by the tidal field of a parent galaxy and by stellar evolution is discussed. For the first time, a simulation on a "star-by-star" basis of the evolution of a 1,000,000-body star cluster is presented.
A Ballistic Monte Carlo Approximation of π
Dumoulin, Vincent
2014-01-01
We compute a Monte Carlo approximation of π using importance sampling with shots coming out of a Mossberg 500 pump-action shotgun as the proposal distribution. An approximated value of 3.136 is obtained, corresponding to a 0.17% error on the exact value of π. To our knowledge, this represents the first attempt at estimating π using such a method, thus opening up new perspectives towards computing mathematical constants using everyday tools.
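Shotgun proposal aside, the underlying estimator is the textbook hit-or-miss calculation: sample points uniformly in the unit square and count the fraction landing inside the inscribed quarter circle. A desk-bound sketch, with a pseudorandom generator standing in for the ballistics:

```python
import random

def estimate_pi(n_shots, seed=123):
    # Fraction of uniform points in the unit square that fall inside the
    # quarter circle of radius 1 approximates pi/4.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_shots)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_shots

print(estimate_pi(100_000))
```

The standard error of the estimate shrinks as 1/sqrt(n_shots), so each extra decimal digit of π costs roughly a hundredfold more samples.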
Lookahead Strategies for Sequential Monte Carlo
Lin, Ming; Chen, Rong; Liu, Jun
2013-01-01
Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification of several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms to take ...
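A minimal instance of the importance-sampling-plus-resampling recipe that defines SMC is the bootstrap particle filter. The state-space model below (1-D random-walk state, unit-variance Gaussian observations) is an invented toy for illustration, not a model from the paper:

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, seed=3):
    # Bootstrap particle filter: propagate particles through the dynamics,
    # weight them by the observation likelihood, then resample.
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # prior draws
    means = []
    for y in obs:
        # propagate each particle through the random-walk dynamics
        parts = [x + rng.gauss(0.0, 0.5) for x in parts]
        # importance weights from the Gaussian observation likelihood
        w = [math.exp(-0.5 * (y - x) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, parts)))
        # multinomial resampling to fight weight degeneracy
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

obs = [0.1, 0.3, 0.2, 0.5, 0.4]
print([round(m, 2) for m in bootstrap_filter(obs)])
```

The lookahead strategies discussed in the paper refine exactly the weighting/resampling steps shown here by exploiting future observations before committing to a resampled particle set.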
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
"Handbook of Markov Chain Monte Carlo" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
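For readers new to the topic, the core algorithm that MCMC methodology builds on is random-walk Metropolis. A minimal sketch (target distribution, step size, and chain length chosen arbitrarily for the example) sampling a standard normal from its unnormalized log-density:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=0.5, seed=7):
    # Random-walk Metropolis: propose a Gaussian perturbation, accept with
    # probability min(1, p(x') / p(x)); the chain's stationary law is p.
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_p(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Sample a standard normal, N(0, 1), and inspect the empirical mean.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
print(round(sum(chain) / len(chain), 2))
```

Note that only ratios of the target density appear, so the normalizing constant is never needed, which is what makes the method so widely applicable.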
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
The Moment Guided Monte Carlo Method
Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which makes it possible to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea, on which the method relies, consists in guiding the p...
Monte Carlo dose mapping on deforming anatomy
Zhong, Hualiang; Siebers, Jeffrey V.
2009-10-01
This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom, and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3%, respectively, in the entire dose region. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to that of 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
Carlos Castillo-Chavez: a century ahead.
Schatz, James
2013-01-01
When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration, my immediate reaction was por supuesto que sí ("of course, yes")! Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996.
Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation
International Nuclear Information System (INIS)
Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose to within 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine due to their magnitude of only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of medical physics. As a tool suited to calculating these corrections with high accuracy, Monte Carlo simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which, in principle, enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, which is inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment.
Directory of Open Access Journals (Sweden)
Jaime Alberto Galgani
2009-12-01
The only work by Carlos Fuentes belonging to the detective genre is La cabeza de la hidra (1978). This story, which centers on problems relating to oil smuggling into the United States during the 1970s, is situated within the context of the Latin American novela negra, presenting certain features which bring it close to the neo-detective narrative. With a remarkable hybridization of genres, it presents the narrative parody of a detective who amounts to an inverted, Latin American version of James Bond. The article examines the novel to determine the function that the detective genre performs in it.
The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning
International Nuclear Information System (INIS)
In cancer therapy using collimated beams of photons, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor and minimizes the risk of damage to the healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, by using a sequence of dose calculations using approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented (authors)
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, the design of ITER, experimental analyses of the fast critical assembly, core analyses of the JMTR, simulation of a pulsed neutron experiment, core analyses of the HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat functional enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
MEVSIM A Monte Carlo Event Generator for STAR
Ray, R L
2000-01-01
A fast, simple to use Monte Carlo based event generator is presented which is intended to facilitate simulation studies and the development of analysis software for the Solenoidal Tracker at RHIC (Relativistic Heavy Ion Collider) (STAR) experiment at the Brookhaven National Laboratory (BNL). This new event generator provides a fast, convenient means for producing large numbers of uncorrelated A+A collision events which can be used for a variety of applications in STAR, including quality assurance evaluation of event reconstruction software, determination of detector acceptances and tracking efficiencies, physics analysis of event-by-event global variables, studies of strange, rare and exotic particle reconstruction, and so on. The user may select the number of events, the particle types, the multiplicities, the one-body momentum space distributions and the detector acceptance ranges. The various algorithms used in the code and its capabilities are explained. Additional user information is also discussed. The ...
Calibration of the top-quark Monte-Carlo mass
Energy Technology Data Exchange (ETDEWEB)
Kieseler, Jan; Lipka, Katerina [DESY Hamburg (Germany); Moch, Sven-Olaf [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik
2015-11-15
We present a method to establish experimentally the relation between the top-quark mass m{sup MC}{sub t} as implemented in Monte-Carlo generators and the Lagrangian mass parameter m{sub t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m{sup MC}{sub t} and an observable sensitive to m{sub t}, which does not rely on any prior assumptions about the relation between m{sub t} and m{sup MC}{sub t}. The measured observable is independent of m{sup MC}{sub t} and can be used subsequently for a determination of m{sub t}. The analysis strategy is illustrated with examples for the extraction of m{sub t} from inclusive and differential cross sections for hadro-production of top-quarks.
Quantum Monte Carlo Calculations in Solids with Downfolded Hamiltonians.
Ma, Fengjie; Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2015-06-01
We present a combination of a downfolding many-body approach with auxiliary-field quantum Monte Carlo (AFQMC) calculations for extended systems. Many-body calculations operate on a simpler Hamiltonian which retains material-specific properties. The Hamiltonian is systematically improvable and allows one to dial, in principle, between the simplest model and the original Hamiltonian. As a by-product, pseudopotential errors are essentially eliminated using frozen orbitals constructed adaptively from the solid environment. The computational cost of the many-body calculation is dramatically reduced without sacrificing accuracy. Excellent accuracy is achieved for a range of solids, including semiconductors, ionic insulators, and metals. We apply the method to calculate the equation of state of cubic BN under ultrahigh pressure, and determine the spin gap in NiO, a challenging prototypical material with strong electron correlation effects. PMID:26196632
Monte Carlo Modeling of Crystal Channeling at High Energies
Schoofs, Philippe; Cerutti, Francesco
Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating whether they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of a continuous potential, taking into account the thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle's transverse motion determines whether or n...
Calibration of the Top-Quark Monte-Carlo Mass
Kieseler, Jan; Moch, Sven-Olaf
2015-01-01
We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.
Quantum Monte Carlo study of bilayer ionic Hubbard model
Jiang, M.; Schulthess, T. C.
2016-04-01
The interaction-driven insulator-to-metal transition has been reported in the ionic Hubbard model (IHM) for moderate interaction U , while its metallic phase only occupies a narrow region in the phase diagram. To explore the enlargement of the metallic regime, we extend the ionic Hubbard model to two coupled layers and study the interplay of interlayer hybridization V and two types of intralayer staggered potentials Δ : one with the same (in-phase) and the other with a π -phase shift (antiphase) potential between layers. Our determinant quantum Monte Carlo (DQMC) simulations at lowest accessible temperatures demonstrate that the interaction-driven metallic phase between Mott and band insulators expands in the Δ -V phase diagram of bilayer IHM only for in-phase ionic potentials; while antiphase potential always induces an insulator with charge density order. This implies possible further extension of the ionic Hubbard model from the bilayer case here to a realistic three-dimensional model.
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Kuhn, S E; Ethier, J J; Accardi, A
2016-01-01
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at $x \\gtrsim 0.1$. The study also provides the first determination of the flavor-separated twist-3 PDFs and the $d_2$ moment of the nucleon within a global PDF analysis.
Calibration of the top-quark Monte-Carlo mass
International Nuclear Information System (INIS)
We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.
Monte-Carlo study of Dirac semimetals phase diagram
Braguta, V V; Kotov, A Yu; Nikolaev, A A
2016-01-01
In this paper the phase diagram of Dirac semimetals is studied within lattice Monte-Carlo simulation. In particular, we concentrate on the dynamical chiral symmetry breaking that results in a semimetal/insulator transition. Using numerical simulation we determined the values of the critical coupling constant of the semimetal/insulator transition for different values of the anisotropy of the Fermi velocity. This measurement allowed us to draw a tentative phase diagram for Dirac semimetals. It turns out that within the Dirac model with Coulomb interaction, both Na$_3$Bi and Cd$_3$As$_2$, known experimentally to be Dirac semimetals, would lie deep in the insulating region of the phase diagram. This likely indicates a decisive role of screening of the inter-electron interaction in real materials, similar to the situation in graphene.
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot
International Nuclear Information System (INIS)
Restricted path integral Monte Carlo simulations have been used to calculate the equilibrium properties of deuterium for two densities, 0.674 and 0.838 g cm$^{-3}$ ($r_s$ = 2.00 and 1.86), in the temperature range $10^5 \le T \le 10^6$ K. We carefully assess size effects and the dependence on the time step of the path integral. Further, we compare the results obtained with a free-particle nodal restriction with those from a self-consistent variational principle, which includes interactions and bound states. Using the calculated internal energies and pressures, we determine the shock Hugoniot and compare with recent laser shock wave experiments as well as other theories. (c) 2000 The American Physical Society
Quantum Monte Carlo calculations of two neutrons in finite volume
Klos, P; Tews, I; Gandolfi, S; Gezerlis, A; Hammer, H -W; Hoferichter, M; Schwenk, A
2016-01-01
Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach
Energy Technology Data Exchange (ETDEWEB)
Sharma, A C [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States); Harrawood, B P [Duke Advance Imaging Labs, Department of Radiology, 2424 Erwin Rd, Suite 302, Durham, NC 27705 (United States); Bender, J E [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States); Tourassi, G D [Duke Advance Imaging Labs, Department of Radiology, 2424 Erwin Rd, Suite 302, Durham, NC 27705 (United States); Kapadia, A J [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States)
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
Kumar, Sudhir; Srinivasan, P; Sharma, S D
2010-06-01
A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm³ was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) ¹⁹²Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR ¹⁹²Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and physics-related input data of the HDR ¹⁹²Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77% of that obtained using the analytical method. The experimentally determined RAKR of HDR ¹⁹²Ir sources, using this reference ionization chamber by applying the analytically estimated N(K), was found to be in agreement with the vendor-quoted RAKR within 1.43%.
Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis
Energy Technology Data Exchange (ETDEWEB)
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Fission Matrix Capability for MCNP Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems with poor neutron communication between regions are also slow to converge. The fission matrix method, implemented in MCNP [1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods apply approximations and discretization in energy, space, and direction to the kernel; consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling, and because of this statistical noise, the convergence acceleration methods common in deterministic codes do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source but also to build a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
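The analogy with power iteration can be made concrete with a small numerical sketch. The matrix below is purely illustrative (it is not a fission kernel), but it shows the mechanism the abstract describes: repeatedly applying an operator and renormalizing isolates the fundamental eigenmode, at a rate governed by the dominance ratio.

```python
import numpy as np

def power_iteration(A, iters=200):
    """Power iteration: repeatedly apply A and normalize, isolating the
    fundamental (dominant) eigenmode. The convergence rate is set by the
    dominance ratio |lambda_1 / lambda_0|."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x  # Rayleigh quotient gives the dominant eigenvalue

# Illustrative symmetric matrix standing in for a discretized kernel.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, mode = power_iteration(A)
```

The fission matrix idea is, in this analogy, tallying an estimate of `A` itself during the random walks instead of relying on repeated application alone.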
A Monte Carlo approach to water management
Koutsoyiannis, D.
2012-04-01
Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that may make the obtained results irrelevant to the real world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
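The methodology described, generating stochastic inputs and pushing them through a faithful simulation of the system to obtain performance statistics, can be sketched with a deliberately minimal single-reservoir model. Everything here (the Gaussian inflow model, the capacity and demand figures) is a hypothetical illustration, not taken from the paper.

```python
import random

def simulate_reliability(capacity, demand, n_years=10_000, seed=1):
    """Toy Monte Carlo water-management model: generate random annual
    inflows, simulate a single reservoir, and estimate the probability
    of meeting the annual demand."""
    rng = random.Random(seed)
    storage, years_met = capacity / 2, 0
    for _ in range(n_years):
        inflow = max(0.0, rng.gauss(demand, demand / 2))  # hypothetical inflow model
        storage = min(capacity, storage + inflow)          # spill above capacity
        if storage >= demand:
            storage -= demand
            years_met += 1
        else:
            storage = 0.0  # failure year: reservoir emptied
    return years_met / n_years

reliability = simulate_reliability(capacity=100.0, demand=30.0)
```

A second Monte Carlo loop over control variables (here, e.g., the demand level) would implement the stochastic optimization stage the abstract mentions.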
Modulated pulse bathymetric lidar Monte Carlo simulation
Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia
2015-10-01
A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by the Monte Carlo method with a modulated pulse propagation model and processed by mathematical tools such as cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding target contrast enhancement. Further simulation experiments are performed with various modulation and reception variables to investigate their effect on the bathymetric system performance.
Monte Carlo Simulation of an American Option
Directory of Open Access Journals (Sweden)
Gikiri Thuo
2007-04-01
Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
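The combination of Monte Carlo pricing with gradient (sensitivity) estimation can be illustrated in a simpler setting than the paper's American option: a European call under geometric Brownian motion, where the pathwise delta estimate can be checked against the Black-Scholes closed form. The parameters below are arbitrary illustrative values, and this sketch omits the early-exercise and stochastic-approximation machinery of the paper.

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes European call price, used here as a reference value."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call_with_delta(S0, K, r, sigma, T, n=200_000, seed=7):
    """Monte Carlo price and pathwise-derivative delta for a European call,
    estimated simultaneously from the same sample paths."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    price = delta = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        price += disc * max(ST - K, 0.0)
        if ST > K:  # pathwise derivative: d(payoff)/dS0 = ST / S0 in the money
            delta += disc * ST / S0
    return price / n, delta / n

price, delta = mc_call_with_delta(100.0, 100.0, 0.05, 0.2, 1.0)
```

The key point, as in the abstract, is that the sensitivity comes from the same simulation as the price, at essentially no extra cost.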
Discovering correlated fermions using quantum Monte Carlo.
Wagner, Lucas K; Ceperley, David M
2016-09-01
It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior. PMID:27518859
A Monte Carlo algorithm for degenerate plasmas
Energy Technology Data Exchange (ETDEWEB)
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
In neutral-particle transport problems (neutrons, photons), two quantities are important: the flux in phase space and the particle density. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented, motivated by several factors, two of which are the advent of vector computing and photon and neutron transport in void media
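The "play" described above, scoring after sampling random events such as the distance to the next collision, rests on inverse-transform sampling of the free-path distribution. A minimal sketch, with an arbitrary cross section and not tied to any particular code:

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Inverse-transform sampling of the distance to the next collision:
    p(s) = sigma_t * exp(-sigma_t * s)  =>  s = -ln(1 - xi) / sigma_t."""
    return -math.log(1.0 - rng.random()) / sigma_t

rng = random.Random(0)
sigma_t = 2.0  # illustrative total macroscopic cross section (arbitrary units)
paths = [sample_free_path(sigma_t, rng) for _ in range(100_000)]
mean_free_path = sum(paths) / len(paths)  # should approach 1 / sigma_t = 0.5
```

Biasing the play, as the report discusses, would mean sampling from a different distribution and carrying a statistical weight so that the score remains unbiased.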
Introduction to Monte-Carlo method
International Nuclear Information System (INIS)
We first recall some well-known facts about random variables and sampling. Then we define the Monte-Carlo method in the case where one wants to compute a given integral. Afterwards, we shift to discrete Markov chains, for which we define random walks, and apply them to finite difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), transition probabilities and random walks, which form the main part of this work. The applications are: diffusion and advection equations, and the linear transport equation with scattering
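The first step mentioned, using Monte Carlo to compute a given integral, can be written in a few lines. This is a generic textbook sketch, not code from the report:

```python
import math
import random

def mc_integral(f, n=100_000, seed=42):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]:
    the average of f at n uniform random points, with O(1/sqrt(n)) error."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

estimate = mc_integral(math.exp)  # exact value: e - 1
```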
IN MEMORIAM CARLOS RESTREPO: A TRUE TEACHER
Pelayo Correa
2009-01-01
Carlos Restrepo was the first professor of Pathology and a distinguished member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renovating and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions ...
Hybrid Monte Carlo simulation of polymer chains
Irbäck, A
1993-01-01
We develop the hybrid Monte Carlo method for simulations of single off-lattice polymer chains. We discuss implementation and choice of simulation parameters in some detail. The performance of the algorithm is tested on models for homopolymers with short- or long-range self-repulsion, using chains with $16\\le N\\le 512$ monomers. Without excessive fine tuning, we find that the computational cost grows as $N^{2+z^\\prime}$ with $0.64
Carlos Pereda and the Culture of Argumentation
Eduardo Harada O.
2010-01-01
This article discusses Carlos Pereda's phenomenology of argumentative attention. It aims to show that this phenomenology takes into account all aspects of argumentation, principally the epistemic rules and virtues that serve to control this activity internally and to avoid argumentative vertigos; moreover, it studies not only determinate or deductive arguments and supports but equally underdetermined ones, since it maintains that these are an imp...
The Moment Guided Monte Carlo Method
Degond, Pierre; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique that reduces the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea on which the method relies is to guide the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.
by means of FLUKA Monte Carlo method
Directory of Open Access Journals (Sweden)
Ermis Elif Ebru
2015-01-01
Full Text Available Calculations of the gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
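The physics behind attenuation coefficients can be illustrated by checking a Monte Carlo transmission estimate against the Beer-Lambert law exp(-μx). The coefficient and slab thickness below are arbitrary illustrative values, and this sketch is independent of FLUKA:

```python
import math
import random

def mc_transmission(mu, thickness, n=100_000, seed=3):
    """Fraction of photons crossing a slab without interacting, estimated
    by sampling exponential interaction depths with attenuation coefficient mu."""
    rng = random.Random(seed)
    survived = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / mu > thickness
    )
    return survived / n

mu, x = 0.5, 2.0  # hypothetical attenuation coefficient (1/cm) and slab thickness (cm)
t_mc = mc_transmission(mu, x)
t_exact = math.exp(-mu * x)  # Beer-Lambert prediction
```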
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-24
This presentation covers (1) exascale computing: the different technologies and the path to getting there; (2) MCMini, a high-performance proof of concept: features and results; and (3) the Oatmeal toolkit (OpenCL Automatic Memory Allocation Library): purpose and features. Despite driver issues, OpenCL appears to be a good, hardware-agnostic tool. MCMini demonstrates the feasibility of GPGPU-based Monte Carlo methods: it shows excellent scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid the development of scientific OpenCL codes.
Monte Carlo simulations for heavy ion dosimetry
Geithner, Oksana
2006-01-01
Water-to-air stopping power ratio calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variabl...
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec
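Simulating a Markov chain and recognizing its stationary distribution, two of the notions the book introduces, can be sketched for a two-state chain whose stationary distribution is known analytically:

```python
import random

# Two-state chain: P[i][j] = probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.4, 0.6]]
# The stationary distribution solves pi = pi P; here pi = (0.8, 0.2).

def empirical_distribution(P, steps=200_000, seed=5):
    """Simulate the chain and return the fraction of time spent in each state,
    which converges to the stationary distribution for an ergodic chain."""
    rng = random.Random(seed)
    state, visits = 0, [0, 0]
    for _ in range(steps):
        state = 0 if rng.random() < P[state][0] else 1
        visits[state] += 1
    return [v / steps for v in visits]

pi_hat = empirical_distribution(P)
```

This is the "Monte Carlo computation" side of the book's title; the "analytic" side would solve pi = pi P directly.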
State-of-the-art Monte Carlo 1988
Energy Technology Data Exchange (ETDEWEB)
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Monte Carlo simulations in theoretical physics; Simulations Monte Carlo en physique theorique
Energy Technology Data Exchange (ETDEWEB)
Billoire, A.
1991-12-31
After a presentation of the principle of the Monte Carlo method, the method is applied first to the calculation of critical exponents in the three-dimensional Ising model, and second to discrete quantum chromodynamics, with computation times given as a function of computer power. 28 refs., 4 tabs.
Challenges and prospects for whole-core Monte Carlo analysis
International Nuclear Information System (INIS)
The advantages of using Monte Carlo methods to analyze full-core reactor configurations include essentially exact representation of the geometry and of the physical phenomena that are important for reactor analysis. But this advantage comes at a substantial computational cost, both in terms of memory demand and computational time. This paper focuses on the challenges facing full-core Monte Carlo for k_eff calculations and the prospects for Monte Carlo becoming a routine tool for reactor analysis.
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory; Kelly, Thompson G [Los Alamos National Laboratory; Urbatish, Todd J [Los Alamos National Laboratory
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Unbiased combinations of nonanalog Monte Carlo techniques and fair games
International Nuclear Information System (INIS)
Historically, Monte Carlo variance reduction techniques have developed one at a time in response to calculational needs. This paper provides the theoretical basis for obtaining unbiased Monte Carlo estimates from all possible combinations of variance reduction techniques. Hitherto, the techniques have not been proven to be unbiased in arbitrary combinations. The authors are unaware of any Monte Carlo techniques (in any linear process) that are not treated by the theorem herein. (author)
Alternative Monte Carlo Approach for General Global Illumination
Institute of Scientific and Technical Information of China (English)
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm was developed and implemented within the framework of the new computing scheme. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.
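Zero-variance importance sampling rests on drawing samples from a density proportional to the integrand, so that every weighted sample contributes the same value. A one-dimensional sketch, unrelated to the rendering application, shows the variance reduction obtained when the sampling density merely approximates the integrand's shape:

```python
import random

def variance(xs):
    """Population variance of a sample."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def plain_samples(n, rng):
    """Plain Monte Carlo for I = integral of 3x^2 over [0, 1] (true value 1)."""
    return [3.0 * rng.random() ** 2 for _ in range(n)]

def importance_samples(n, rng):
    """Importance sampling with p(x) = 2x: draw x = sqrt(u) by inverse
    transform, so the weighted integrand is f(x)/p(x) = 3x^2 / (2x) = 1.5x."""
    return [1.5 * rng.random() ** 0.5 for _ in range(n)]

rng = random.Random(11)
plain = plain_samples(50_000, rng)
weighted = importance_samples(50_000, rng)
est_plain = sum(plain) / len(plain)
est_weighted = sum(weighted) / len(weighted)
```

With the exact choice p(x) = 3x², every weighted sample would equal 1 and the variance would vanish, which is the zero-variance ideal the paper builds on.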
MontePython: Implementing Quantum Monte Carlo using Python
J.K. Nilsen
2006-01-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems using Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
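The variational Monte Carlo algorithm mentioned above can be sketched in pure Python for the 1D harmonic oscillator, where the trial wavefunction exp(-αx²) with α = 0.5 is the exact ground state and the local energy is constant; this is a standard correctness check, not code from MontePython.

```python
import math
import random

def vmc_energy(alpha, steps=20_000, seed=9):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi(x) = exp(-alpha x^2),
    sampled from |psi|^2 by the Metropolis algorithm."""
    rng = random.Random(seed)
    x, energy = 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-1.0, 1.0)
        # Metropolis accept/reject on |psi|^2 = exp(-2 alpha x^2).
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # Local energy H psi / psi: E_L(x) = alpha + x^2 (1/2 - 2 alpha^2).
        energy += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return energy / steps

# At alpha = 0.5 the trial wavefunction is exact, so the local energy is
# constant and the estimate equals the ground-state energy 0.5 exactly.
e = vmc_energy(alpha=0.5)
```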
Combinatorial nuclear level density by a Monte Carlo method
Cerf, N.
1993-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t...
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
Energy Technology Data Exchange (ETDEWEB)
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Monte Carlo generators in ATLAS software
International Nuclear Information System (INIS)
This document describes how Monte Carlo (MC) generators can be used in the ATLAS software framework (Athena). The framework is written in C++ and uses Python scripts for job configuration. Monte Carlo generators that provide the four-vectors describing the results of LHC collisions are in general written by third parties and are not part of Athena. These libraries are linked from the LCG Generator Services (GENSER) distribution. Generators are run from within Athena, and the generated event output is put into a transient store, in HepMC format, using StoreGate. A common interface, implemented via inheritance from a GeneratorModule class, guarantees common functionality for the basic generation steps. The generator information can be accessed and manipulated by helper packages like TruthHelper. The ATLAS detector simulation can also access the truth information from StoreGate. Steering is done through specific interfaces to allow for flexible configuration using ATLAS Python scripts. Interfaces to most general-purpose generators, including Pythia6, Pythia8, Herwig, Herwig++ and Sherpa, are provided, as well as to more specialized packages, for example Phojet and Cascade. A second type of interface exists for the so-called matrix element generators that only generate the particles produced in the hard scattering process and write events in the Les Houches event format. A generic interface to pass these events to Pythia6 and Herwig for parton showering and hadronisation has been written.
Information Geometry and Sequential Monte Carlo
Sim, Aaron; Stumpf, Michael P H
2012-01-01
This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
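The reweight, resample, move cycle of an SMC sampler that the paper builds on can be sketched in a few lines. This minimal Python version uses a plain random-walk Metropolis move kernel rather than the paper's mMALA kernel, and the prior, target, temperature ladder and step size are all illustrative choices, not values from the paper:

```python
import numpy as np

def smc_sampler(log_target, n_particles=500, n_temps=20, step=0.5, seed=0):
    """Minimal tempered SMC: reweight -> resample -> Metropolis move.

    Bridges from a N(0, 2^2) prior toward log_target along a temperature
    ladder. (The paper replaces the random-walk move used here with mMALA.)
    """
    rng = np.random.default_rng(seed)
    log_prior = lambda z: -z**2 / 8.0              # N(0, 2^2), up to a constant
    x = rng.normal(0.0, 2.0, n_particles)          # draws from the prior
    temps = np.linspace(0.0, 1.0, n_temps + 1)
    for t0, t1 in zip(temps[:-1], temps[1:]):
        # Reweight by the incremental change in the tempered density.
        logw = (t1 - t0) * (log_target(x) - log_prior(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)   # multinomial resampling
        # One Metropolis move per particle at temperature t1.
        tempered = lambda z, t=t1: t * log_target(z) + (1.0 - t) * log_prior(z)
        prop = x + step * rng.normal(size=n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < tempered(prop) - tempered(x)
        x = np.where(accept, prop, x)
    return x

samples = smc_sampler(lambda z: -0.5 * (z - 3.0) ** 2)  # target N(3, 1)
```

Adapting the move kernel, as the paper does with mMALA, only changes the proposal step; the reweighting and resampling structure is unaffected.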
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $\\vep$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of a low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
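The variance advantage of the correlation technique favoured in the report, estimating a small perturbation worth without differencing two independently noisy quantities, can be illustrated with a toy slab-transmission problem. The geometry and cross sections below are invented for illustration only; the idea is that the perturbed and unperturbed cases share the same random numbers:

```python
import numpy as np

def transmission(sigma, u):
    """Fraction of histories crossing a unit slab; free path = -ln(u)/sigma."""
    return np.mean(-np.log(u) / sigma > 1.0)

rng = np.random.default_rng(0)
n, reps = 4000, 300
corr, indep = [], []
for _ in range(reps):
    u = rng.uniform(size=n)
    # Correlated: the perturbed case reuses the SAME random numbers.
    corr.append(transmission(1.0, u) - transmission(1.05, u))
    # Differenced: the perturbed case uses fresh, independent random numbers.
    indep.append(transmission(1.0, u) - transmission(1.05, rng.uniform(size=n)))
corr, indep = np.array(corr), np.array(indep)
# corr and indep estimate the same small difference, exp(-1) - exp(-1.05),
# but the correlated estimator has a much smaller spread.
```

Because each history contributes identically to both cases except in the narrow band where the outcome flips, the noise largely cancels in the correlated estimator.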
The macro response Monte Carlo method for electron transport
Energy Technology Data Exchange (ETDEWEB)
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could
Monte Carlo optimization for site selection of new chemical plants.
Cai, Tianxing; Wang, Sujing; Xu, Qiang
2015-11-01
Geographic distribution of chemical manufacturing sites has a significant impact on the business sustainability of industrial development and on regional environmental sustainability as well. Common site selection rules include evaluating the air-quality impact of a newly constructed chemical manufacturing site on surrounding communities. Achieving this requires simultaneous consideration of the regional background air-quality information, the emissions of the new manufacturing site, and the statistical pattern of local meteorological conditions. From this information, a risk assessment can be conducted for the potential air-quality impacts of candidate locations for a new chemical manufacturing site, and the final site selection can then be optimized by minimizing its air-quality impacts. This paper provides a systematic methodology for this purpose. There are two stages of modeling and optimization work: i) Monte Carlo simulation to identify the background pollutant concentration based on currently existing emission sources and regional statistical meteorological conditions; and ii) multi-objective Monte Carlo optimization (simultaneous minimization of both the peak pollutant concentration and the standard deviation of the pollutant concentration spatial distribution at air-quality concern regions) for optimal location selection of new chemical manufacturing sites according to their design data of potential emission. This study can help both to determine the potential air-quality impact of the geographic distribution of multiple chemical plants with respect to regional statistical meteorological conditions, and to identify an optimal site for each new chemical manufacturing site with minimal environmental impact on surrounding communities. The efficacy of the developed methodology is demonstrated through case studies.
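The second stage, scoring candidate sites against sampled meteorological conditions, can be sketched as follows. The receptor location, candidate coordinates, wind model and crude plume surrogate below are all hypothetical stand-ins for the paper's dispersion modelling:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical receptor (community) and candidate sites, in km.
receptor = np.array([0.0, 0.0])
candidates = {"A": np.array([5.0, 0.0]), "B": np.array([0.0, 8.0])}
winds = rng.uniform(0.0, 2.0 * np.pi, 5000)   # sampled wind directions

def impact(site, wind):
    """Crude plume surrogate: concentration at the receptor decays with
    distance and with misalignment between the wind and the site->receptor
    direction (zero when the wind blows away from the receptor)."""
    r = receptor - site
    dist = np.linalg.norm(r)
    align = (np.cos(wind) * r[0] + np.sin(wind) * r[1]) / dist
    return np.maximum(align, 0.0) / dist**2

# Score each candidate by a high percentile (a "peak") of its sampled impact,
# then select the site with the smallest peak impact.
scores = {k: np.percentile(impact(v, winds), 99) for k, v in candidates.items()}
best = min(scores, key=scores.get)
```

In the paper's formulation the optimization is multi-objective (peak concentration and spatial standard deviation); the single-percentile score here is the simplest stand-in.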
A Monte Carlo investigation of lung brachytherapy treatment planning
International Nuclear Information System (INIS)
Iodine-125 (125I) and Caesium-131 (131Cs) brachytherapy have been used in conjunction with sublobar resection to reduce the local recurrence of stage I non-small cell lung cancer compared with resection alone. Treatment planning for this procedure is typically performed using only a seed activity nomogram or look-up table to determine seed strand spacing for the implanted mesh. Since the post-implant seed geometry is difficult to predict, the nomogram is calculated using the TG-43 formalism for seeds in a planar geometry. In this work, the EGSnrc user-code BrachyDose is used to recalculate nomograms using a variety of tissue models for 125I and 131Cs seeds. Calculated prescription doses are compared to those calculated using TG-43. Additionally, patient CT and contour data are used to generate virtual implants to study the extent to which post-implant deformation and patient-specific tissue heterogeneity perturb nomogram-derived dose distributions. Differences of up to 25% in calculated prescription dose are found between TG-43 and Monte Carlo calculations, with the TG-43 formalism underestimating prescription doses in general. Differences between the TG-43 formalism and Monte Carlo calculated prescription doses are greater for 125I than for 131Cs seeds. Dose distributions are found to change significantly based on implant deformation and the tissues surrounding implants for patient-specific virtual implants. Results suggest that accounting for seed grid deformation and the effects of non-water media, at least approximately, is likely required to reliably predict dose distributions in lung brachytherapy patients. (paper)
Quantum Monte Carlo for electronic structure: Recent developments and applications
International Nuclear Information System (INIS)
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done to systematically examine a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C2H and C2H2. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments, as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included
A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution
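The economy of the kernel approach, one expensive Monte Carlo pass, then cheap folding against many source distributions, reduces to a dot product. The kernel values below are random stand-ins for what the Monte Carlo transport calculation would produce:

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions = 50
# One (expensive) Monte Carlo pass yields a kernel per source region:
# the expected detector response per unit fission source in that region.
# Random stand-in values are used here in place of a real transport run.
kernels = rng.uniform(0.1, 1.0, n_regions)

def detector_response(source):
    """Fold a core fission-source distribution with the precomputed kernels."""
    source = np.asarray(source, dtype=float)
    return float(kernels @ (source / source.sum()))

# Hundreds of source distributions can reuse the same kernels at trivial cost.
flat = detector_response(np.ones(n_regions))
peaked = detector_response(np.linspace(0.0, 1.0, n_regions) ** 4)
```

For a flat source the response is just the mean kernel; a peaked source weights the kernels of the high-power regions.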
A configuration space Monte Carlo algorithm for solving the nuclear pairing problem
Lingle, Mark
Nuclear pairing correlations using Quantum Monte Carlo are studied in this dissertation. We start by defining the nuclear pairing problem and discussing several historical methods developed to solve this problem, paying special attention to the applicability of such methods. A numerical example discussing pairing correlations in several calcium isotopes using the BCS and Exact Pairing solutions is presented. The ground state energies, correlation energies, and occupation numbers are compared to determine the applicability of each approach to realistic cases. Next we discuss some generalities related to the theory of Markov Chains and Quantum Monte Carlo as they relate to nuclear structure. Finally we present our configuration space Monte Carlo algorithm, starting from a discussion of a path integral approach by the authors. Some general features of the Pairing Hamiltonian that boost the effectiveness of a configuration space Monte Carlo approach are mentioned. The full details of our method are presented and special attention is paid to convergence and error control. We present a series of examples illustrating the effectiveness of our approach. These include situations with non-constant pairing strengths, limits when pairing correlations are weak, the computation of excited states, and problems where the relevant configuration space is large. We conclude with a chapter examining some of the effects of continuum states in 24O.
Monte Carlo calculations of the impact of a hip prosthesis on the dose distribution
International Nuclear Information System (INIS)
Because of the ageing of the population, an increasing number of patients with hip prostheses are undergoing pelvic irradiation. Treatment planning systems (TPS) currently available are not always able to accurately predict the dose distribution around such implants. In fact, only Monte Carlo simulation has the ability to precisely calculate the impact of a hip prosthesis during radiotherapeutic treatment. Monte Carlo phantoms were developed to evaluate the dose perturbations during pelvic irradiation. A first model, constructed with the DOSXYZnrc usercode, was elaborated to determine the dose increase at the tissue-metal interface as well as the impact of the material coating the prosthesis. Next, CT-based phantoms were prepared, using the usercode CTCreate, to estimate the influence of the geometry and the composition of such implants on the beam attenuation. Thanks to a program that we developed, the study was carried out with CT-based phantoms containing a hip prosthesis without metal artefacts. Therefore, anthropomorphic phantoms allowed better definition of both patient anatomy and the hip prosthesis in order to better reproduce the clinical conditions of pelvic irradiation. The Monte Carlo results revealed the impact of certain coatings such as PMMA on dose enhancement at the tissue-metal interface. Monte Carlo calculations in CT-based phantoms highlighted the marked influence of the implant's composition, its geometry as well as its position within the beam on dose distribution
Learning About Ares I from Monte Carlo Simulation
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
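The dispersion-analysis pattern described above, vary uncertain vehicle parameters, propagate each sample through a performance model, and read design margins off the output distribution, can be sketched with a deliberately simple performance model. The dispersions and nominal values below are hypothetical illustrations, not Ares I data, and the ideal rocket equation stands in for the full flight simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
# Hypothetical 1-sigma dispersions (illustrative, not Ares I values):
isp = rng.normal(450.0, 4.5, n)      # s, specific impulse
m0  = rng.normal(2.0e6, 2.0e4, n)    # kg, liftoff mass
mf  = rng.normal(6.0e5, 6.0e3, n)    # kg, burnout mass

g0 = 9.80665
dv = isp * g0 * np.log(m0 / mf)      # ideal rocket equation, per sample

# "Flight performance reserve" read off the Monte Carlo output: the
# shortfall of the ~3-sigma low percentile relative to the nominal case.
nominal = 450.0 * g0 * np.log(2.0e6 / 6.0e5)
reserve = nominal - np.percentile(dv, 0.135)
```

In the real analysis, each Monte Carlo sample is a full six-degree-of-freedom flight simulation and many more parameters are dispersed, but the statistical post-processing has this shape.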
Comparison between Monte Carlo method and deterministic method
International Nuclear Information System (INIS)
A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)
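The effective (self-shielded) cross section mentioned above can be illustrated with a toy resonance and narrow-resonance flux weighting, where the flux dips inside the resonance in inverse proportion to the total cross section plus a background cross section. The resonance shape and numbers below are invented for illustration; Tone's method determines the background cross section itself in a lattice:

```python
import numpy as np

# Toy resonance on an arbitrary energy grid (all values illustrative).
E = np.linspace(0.9, 1.1, 2001)
sigma_res = 1.0 + 1e3 / (1.0 + ((E - 1.0) / 0.005) ** 2)   # Lorentzian peak

def effective_xs(sigma_b):
    """Flux-weighted average cross section with narrow-resonance weighting:
    phi(E) ~ 1 / (sigma_res(E) + sigma_b), so a small background sigma_b
    depresses the flux in the resonance and shrinks the average."""
    phi = 1.0 / (sigma_res + sigma_b)
    return (sigma_res * phi).sum() / phi.sum()

dilute = effective_xs(1e5)      # near-infinite dilution: almost unshielded
shielded = effective_xs(10.0)   # strong self-shielding
```

The same flux-weighting integral, with the background cross section supplied by Tone's method, is what turns a resonance profile into the effective cross section used in the lattice calculation.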
Monte Carlo Implementation of Polarized Hadronization
Matevosyan, Hrayr H; Thomas, Anthony W
2016-01-01
We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...
Commensurabilities between ETNOs: a Monte Carlo survey
Marcos, C de la Fuente
2016-01-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known extreme trans-Neptunian objects (ETNOs) using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...
Hybrid algorithms in quantum Monte Carlo
International Nuclear Information System (INIS)
With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high-accuracy calculations of the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.
Nuclear reactions in Monte Carlo codes
Ferrari, Alfredo
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically, split into the major steps, in order to stress the different approaches required for a full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful as a first introduction to this topic. Examples are shown mostly for the high-energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most readers are less accustomed. Examples for lower energies can be found in the references. (43 refs.)
San Carlos Apache Tribe - Energy Organizational Analysis
Energy Technology Data Exchange (ETDEWEB)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: The analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"). Start-up staffing and other costs associated with the Phase 1 SCAT energy organization. An intern program. Staff training. Tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
Monte Carlo Exploration of Warped Higgsless Models
Hewett, J L; Rizzo, T G
2004-01-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.
Variable length trajectory compressible hybrid Monte Carlo
Nishimura, Akihiko
2016-01-01
Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
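The baseline algorithm that CHMC and the paper's variable-length extension generalize is plain HMC: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. A minimal sketch for a one-dimensional target (all settings illustrative) looks like this; the fixed trajectory length used here is exactly the constraint the paper relaxes:

```python
import numpy as np

def hmc(log_p, grad_log_p, x0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Plain HMC: leapfrog integration + Metropolis accept/reject."""
    rng = np.random.default_rng(seed)
    x, out = float(x0), []
    for _ in range(n_samples):
        p = rng.normal()                              # fresh momentum
        xn, pn = x, p + 0.5 * eps * grad_log_p(x)     # half momentum step
        for i in range(n_leap):
            xn = xn + eps * pn                        # full position step
            if i != n_leap - 1:
                pn = pn + eps * grad_log_p(xn)        # full momentum step
        pn = pn + 0.5 * eps * grad_log_p(xn)          # final half step
        # Metropolis test on the change in total "energy" H = -log_p + p^2/2.
        dH = (log_p(xn) - 0.5 * pn**2) - (log_p(x) - 0.5 * p**2)
        if np.log(rng.uniform()) < dH:
            x = xn
        out.append(x)
    return np.array(out)

samples = hmc(lambda z: -0.5 * z**2, lambda z: -z, x0=1.0)  # target N(0, 1)
```

In the fixed-length scheme above every trajectory runs for the same `eps * n_leap`; the paper's generalized acceptance mechanism is what permits choosing the integration time per trajectory, e.g. with variable step size integrators.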
Newton, Paul K.; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter
2013-01-01
The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, though not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as ‘spreaders’ or ‘sponges’ and orders the timescales of progression from site to site. In light of recent experimental evidence highlightin...
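A toy version of the spreader/sponge idea can be written down with a small transition matrix. Both the probabilities and the outflow-versus-inflow criterion below are simplified stand-ins for the paper's fitted Markov chain, chosen only to show the shape of the classification:

```python
import numpy as np

# Hypothetical 3-site transition matrix (rows sum to 1): probability that
# progression moves from the row site to the column site next.
sites = ["lung", "liver", "bone"]
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

def classify(P):
    """'Spreader' if a site exports more transition probability mass than it
    imports, 'sponge' otherwise (a simplified criterion for illustration)."""
    outflow = P.sum(axis=1) - np.diag(P)   # mass leaving each site
    inflow = P.sum(axis=0) - np.diag(P)    # mass arriving at each site
    return ["spreader" if o > i else "sponge" for o, i in zip(outflow, inflow)]

labels = dict(zip(sites, classify(P)))
```

With these invented numbers, liver absorbs far more probability mass than it emits and is labelled a sponge, while lung and bone are labelled spreaders.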
Energy Technology Data Exchange (ETDEWEB)
Tholomier, M.; Vicario, E.; Doghmane, N.
1987-10-01
The contribution of backscattered electrons to the Auger electron yield was studied with a multiple-scattering Monte Carlo simulation. The Auger backscattering factor has been calculated in the 5 keV-60 keV energy range. The dependence of the Auger backscattering factor on the primary energy and the beam incidence angle was determined. Spatial distributions of backscattered electrons and Auger electrons are presented for a point incident beam. Correlations between these distributions are briefly investigated.
Anomalous scaling in the random-force-driven Burgers equation. A Monte Carlo study
Energy Technology Data Exchange (ETDEWEB)
Mesterhazy, David [TU Darmstadt (Germany). Inst. fuer Kernphysik; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann Inst. fuer Computing
2011-12-15
We present a new approach to determine the small-scale statistical behavior of hydrodynamic turbulence by means of lattice simulations. Using the functional integral representation of the random-force-driven Burgers equation we show that high-order moments of velocity differences satisfy anomalous scaling. The general applicability of Monte Carlo methods provides the opportunity to study other systems of interest within this framework as well. (orig.)
Nagata, H; Žukovič, M.; Idogaki, T.
2013-01-01
The three-dimensional XY model with bilinear-biquadratic exchange interactions $J$ and $J'$, respectively, has been studied by Monte Carlo simulations. From the detailed analysis of the thermal variation of various physical quantities, as well as the order parameter and energy histogram analysis, the phase diagram including two different ordered phases has been determined. There is a single phase boundary from a paramagnetic to a dipole-quadrupole ordered phase, which is of second order in a ...
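The basic Metropolis machinery behind such a study can be sketched for the bilinear-biquadratic XY Hamiltonian. The lattice size, temperature, couplings and number of sweeps below are illustrative (a real phase-diagram study uses much larger lattices and careful equilibration):

```python
import numpy as np

def bond_energy(dth, J=1.0, Jp=0.5):
    """Bilinear-biquadratic XY coupling: -J cos(dθ) - J' cos(2 dθ)."""
    return -J * np.cos(dth) - Jp * np.cos(2.0 * dth)

def energy_per_bond(theta, J=1.0, Jp=0.5):
    """Average bond energy on a periodic L^3 lattice of XY angles."""
    e = sum(bond_energy(theta - np.roll(theta, 1, axis=ax), J, Jp).sum()
            for ax in range(3))
    return e / (3 * theta.size)

def sweep(theta, T, rng, J=1.0, Jp=0.5):
    """One Metropolis sweep: propose a new angle at every site."""
    L = theta.shape[0]
    for idx in np.ndindex(theta.shape):
        old, new = theta[idx], theta[idx] + rng.uniform(-np.pi, np.pi)
        dE = 0.0
        for ax in range(3):                      # 6 nearest neighbours
            for s in (-1, 1):
                nb = list(idx)
                nb[ax] = (nb[ax] + s) % L
                t = theta[tuple(nb)]
                dE += bond_energy(new - t, J, Jp) - bond_energy(old - t, J, Jp)
        if dE <= 0.0 or rng.uniform() < np.exp(-dE / T):
            theta[idx] = new

rng = np.random.default_rng(0)
th = rng.uniform(0.0, 2.0 * np.pi, (4, 4, 4))    # hot (random) start
for _ in range(20):
    sweep(th, T=0.1, rng=rng)                    # cool deep in the ordered phase
e = energy_per_bond(th)                          # well below the hot-start value
```

Scanning the temperature and the ratio J'/J while accumulating dipole and quadrupole order parameters and energy histograms, as the abstract describes, is what maps out the phase boundaries.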
Evaluation of the material assignment method used by a Monte Carlo treatment planning system.
Isambert, A; Brualla, L; Lefkopoulos, D
2009-12-01
An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo-based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in HU for the material conversion between "air" and "lung" materials was determined based on a study of 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.
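The HU-to-material conversion being evaluated is, at its core, a thresholding of voxel values into material bins. The boundary values below are illustrative placeholders, not ISOgray's calibration (the paper's point is precisely that the air/lung boundary needs patient-based tuning):

```python
import numpy as np

# Illustrative upper HU boundaries per material (NOT ISOgray's values).
BOUNDARIES = [(-950, "air"), (-150, "lung"), (100, "soft tissue"),
              (3000, "bone")]

def material_map(hu):
    """Assign a material name to each voxel by its Hounsfield unit value."""
    edges = np.array([b for b, _ in BOUNDARIES])
    names = np.array([n for _, n in BOUNDARIES], dtype=object)
    idx = np.searchsorted(edges, hu, side="left")
    return names[np.clip(idx, 0, len(names) - 1)]

voxels = np.array([-1000, -400, 20, 700])
materials = material_map(voxels)
```

Shifting the first edge is exactly the kind of boundary change whose dosimetric consequence the paper quantifies.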
Experimental and Monte Carlo evaluation of an ionization chamber in a 60Co beam
Perini, A. P.; Neves, L. P.; Santos, W. S.; Caldas, L. V. E.
2016-07-01
Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to 60Co dosimetry at calibration laboratories.
Monte Carlo Simulations of Type Ia Supernova Observations in Supernova Surveys
Li, Weidong; Filippenko, Alexei V.; Riess, Adam G.
2000-01-01
We have performed Monte Carlo simulations of type Ia supernova (SN Ia) surveys to quantify their efficiency in discovering peculiar overluminous and underluminous SNe Ia. We determined how the type of survey (magnitude-limited, distance-limited, or a hybrid) and its characteristics (observation frequency and detection limit) affect the discovery of peculiar SNe Ia. We find that there are strong biases against the discovery of peculiar SNe Ia introduced by at least four observational effects:...
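The core of such a bias simulation, for the magnitude-limited case, fits in a few lines: draw supernovae from an assumed luminosity function, place them uniformly in volume, and keep only those brighter than the survey limit. The luminosity function, distances and limiting magnitude below are toy values for illustration, not the paper's inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# Toy SN Ia luminosity function (absolute magnitudes are illustrative):
# 10% underluminous (M = -17.5), 80% normal (-19.3), 10% overluminous (-19.7).
kind = rng.choice(3, n, p=[0.1, 0.8, 0.1])
M = np.array([-17.5, -19.3, -19.7])[kind]

d = 400.0 * rng.uniform(size=n) ** (1.0 / 3.0)  # Mpc, uniform in volume
m = M + 5.0 * np.log10(d * 1e6 / 10.0)          # apparent magnitude
detected = m < 18.0                             # magnitude-limited survey

frac_true = np.mean(kind == 0)                  # underluminous fraction, input
frac_obs = np.mean(kind[detected] == 0)         # ...among the discoveries
```

Because underluminous events are only visible in a much smaller volume, their fraction among the detections falls far below the input fraction, which is the magnitude-limited bias the paper quantifies (alongside cadence and other effects).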
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous-energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated with its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Accelerated GPU based SPECT Monte Carlo simulations
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator, and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Criticality benchmarking of ANET Monte Carlo code
International Nuclear Information System (INIS)
In this work the new Monte Carlo code ANET is tested on criticality calculations. ANET is developed based on the high energy physics code GEANT of CERN and aims at progressively satisfying several requirements regarding both simulations of GEN II/III reactors, as well as of innovative nuclear reactor designs such as the Accelerator Driven Systems (ADSs). Here ANET is applied to three different nuclear configurations, including a subcritical assembly, a Material Testing Reactor and the conceptual configuration of an ADS. In the first case, calculations of the effective multiplication factor (keff) are performed for the Training Nuclear Reactor of the Aristotle University of Thessaloniki, while in the second case keff is computed for the fresh fueled core of the Portuguese research reactor (RPI) just after its conversion to Low Enriched Uranium, considering the control rods at the position that renders the reactor critical. In both cases ANET computations are compared with corresponding results obtained by three different well-established codes, including both deterministic (XSDRNPM/CITATION) and Monte Carlo (TRIPOLI, MCNP). In the RPI case, keff computations are also compared with observations during the reactor core commissioning, since the control rods are considered at the criticality position. The above verification studies show ANET to produce reasonable results, since they compare satisfactorily with results from other codes as well as with observations. For the third case (ADS), preliminary ANET computations of keff for various intensities of the proton beam are presented, showing also a reasonable code performance concerning both the order of magnitude and the relative variation of the computed parameter. (author)
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules that are small with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver receives a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is
QWalk: A Quantum Monte Carlo Program for Electronic Structure
Wagner, Lucas K; Mitas, Lubos
2007-01-01
We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org
Recent Developments in Quantum Monte Carlo: Methods and Applications
Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.
2007-12-01
The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.
Adjoint electron-photon transport Monte Carlo calculations with ITS
International Nuclear Information System (INIS)
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point-source/point-detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)
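The standard point-detector ("next-event") estimator behind such codes can be sketched in a few lines. The cross sections, scattering ratio, and detector position below are invented for illustration; the sketch scores only the collided flux in an infinite homogeneous medium with isotropic scattering, using implicit capture and Russian roulette:

```python
import math
import random

# Hypothetical homogeneous-medium parameters (illustration only).
SIGMA_T = 1.0                # total macroscopic cross section (1/cm)
C = 0.5                      # scattering probability per collision
DETECTOR = (5.0, 0.0, 0.0)   # point-detector location (cm)

def track_history(rng):
    """One history from an isotropic point source at the origin.
    Returns this history's next-event score at the detector."""
    x = y = z = 0.0
    w = 1.0
    score = 0.0
    while True:
        # Sample an isotropic flight direction.
        mu = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(1.0 - mu * mu)
        ux, uy, uz = s * math.cos(phi), s * math.sin(phi), mu
        # Sample the distance to the next collision.
        d = -math.log(rng.random()) / SIGMA_T
        x, y, z = x + d * ux, y + d * uy, z + d * uz
        # Next-event estimate: expected uncollided contribution from
        # this collision point to the detector (isotropic scattering).
        r = math.dist((x, y, z), DETECTOR)
        score += w * C * math.exp(-SIGMA_T * r) / (4.0 * math.pi * r * r)
        # Implicit capture: reduce the weight, then Russian roulette.
        w *= C
        if w < 1e-3:
            if rng.random() < 0.1:
                w /= 0.1
            else:
                return score

def estimate_flux(n_histories=20000, seed=1):
    rng = random.Random(seed)
    return sum(track_history(rng) for _ in range(n_histories)) / n_histories

print(estimate_flux())
```

The 1/r² factor is the source of the unbounded variance this estimator suffers when collisions occur close to the detector, which is exactly what the variance-reducing techniques surveyed above address.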
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications
Raedt, H. De
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, C.
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
Managing the Knowledge Commons: Interview with Carlo Vercellone
Vercellone, Carlo
2015-01-01
Interview with Dr. Carlo Vercellone, one of the leading theorists of cognitive capitalism and economist at the CNRS Lab of The Sorbonne Economic Centre (Centre d'Economie de la Sorbonne, CES).
IN MEMORIAM CARLOS RESTREPO. UN VERDADERO MAESTRO
Directory of Open Access Journals (Sweden)
Pelayo Correa
2009-06-01
Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with century-old traditions of a simple, contented way of life. When children had the desire and ability to pursue university studies, especially in medicine, families sent them to cooler climates, which supposedly favoured brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, largely recruited from national and foreign universities, knew very well that the local environment was no obstacle to a first-class university education. Carlos Restrepo was the prototype of this spirit of change and of the intellectual formation of the new generations. He showed it in many ways, in good part through his cheerful, extroverted, optimistic temperament and his easy, contagious laugh. But this friendly side of his personality did not hide his formative mission; he demanded dedication and hard work from his students, faithfully expressed in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers committed themselves fully (full time and exclusive dedication) and organized the new Faculty into well-defined and well-structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry, and Preventive Medicine. The departments integrated their primary functions of teaching, research, and service to the community. The center
Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma, and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Hybrid SN/Monte Carlo research and results
International Nuclear Information System (INIS)
The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (author)
Reconstruction of Monte Carlo replicas from Hessian parton distributions
Hou, Tie-Jiun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke-Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C -P
2016-01-01
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
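The symmetric-Gaussian core of such a Hessian-to-replica conversion can be sketched in a few lines: each replica adds independent standard-normal multiples of the symmetrized eigenvector differences to the central set. The asymmetric-error and positivity treatments of the paper are omitted, and the toy error sets below are invented:

```python
import numpy as np

def hessian_to_replicas(f0, f_plus, f_minus, n_rep=100, seed=0):
    """Generate Monte Carlo replicas from Hessian PDF error sets.

    f0      : central values, shape (n_x,)
    f_plus  : plus-eigenvector sets, shape (n_eig, n_x)
    f_minus : minus-eigenvector sets, shape (n_eig, n_x)

    Symmetric-Gaussian sketch only; the published procedure also
    handles asymmetric errors and enforces PDF positivity.
    """
    rng = np.random.default_rng(seed)
    # Symmetrized displacement along each Hessian eigenvector direction.
    delta = 0.5 * (np.asarray(f_plus) - np.asarray(f_minus))   # (n_eig, n_x)
    r = rng.standard_normal((n_rep, delta.shape[0]))           # (n_rep, n_eig)
    return f0 + r @ delta                                      # (n_rep, n_x)

# Toy check with one eigenvector direction: the replica standard
# deviation approaches |f+ - f-| / 2 in each x bin.
f0 = np.array([1.0, 2.0])
fp = np.array([[1.2, 2.1]])
fm = np.array([[0.8, 1.9]])
reps = hessian_to_replicas(f0, fp, fm, n_rep=5000)
print(reps.std(axis=0))   # approaches [0.2, 0.1]
```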
Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique, for detectors located outside the radiation streaming, was still too large. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable 'Prism Scattering' technique is proposed in this study. (author)
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially for computing full-core power distributions, optimizing refueling strategies, and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate geometries more complex than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions for making the RMMC and RMMC+MC methods more powerful are discussed at the end of this paper. (authors)
Cohesion energetics of carbon allotropes: Quantum Monte Carlo study
International Nuclear Information System (INIS)
We have performed quantum Monte Carlo calculations to study the cohesion energetics of carbon allotropes, including sp3-bonded diamond, sp2-bonded graphene, sp–sp2 hybridized graphynes, and sp-bonded carbyne. The computed cohesive energies of diamond and graphene are found to be in excellent agreement with the corresponding values determined experimentally for diamond and graphite, respectively, when the zero-point energies, along with the interlayer binding in the case of graphite, are included. We have also found that the cohesive energy of graphyne decreases systematically as the ratio of sp-bonded carbon atoms increases. The cohesive energy of γ-graphyne, the most energetically stable graphyne, turns out to be 6.766(6) eV/atom, which is smaller than that of graphene by 0.698(12) eV/atom. Experimental difficulty in synthesizing graphynes could be explained by their significantly smaller cohesive energies. Finally, we conclude that the cohesive energy of a newly proposed graphyne can be accurately estimated with the carbon–carbon bond energies determined from the cohesive energies of graphene and three different graphynes considered here
Monte Carlo simulation of inelastic neutrino scattering in DUMAND
International Nuclear Information System (INIS)
Detailed Monte Carlo calculations simulating the detection, in the DUMAND 1-km3 optical detector, of inelastic neutrino scattering by nucleons at 2 TeV and above show that the measurement of the y distribution is subject to systematic errors from two sources: experimental errors and intrinsic fluctuations, which produce errors in the energy determinations of the hadronic cascade and the muon; and uncertainty in the exact antineutrino fraction of the cosmic-ray neutrino flux. The nature of these errors is explored, and methods for removing them from the data are developed. The remaining uncertainties are those in the evaluation of the errors in energy determination, and in the antineutrino contamination. It appears that these errors, not statistical ones, will eventually govern the accuracy of the y distributions obtained. Nonetheless, the effect of the boson propagator on the y distribution is so marked that no plausible scenario can be found in which the residual errors cast doubt on whether or not the propagator effect is present
Monte Carlo simulation algorithm for B-DNA.
Howell, Steven C; Qiu, Xiangyun; Curtis, Joseph E
2016-11-01
Understanding the structure-function relationship of biomolecules containing DNA has motivated experiments aimed at determining molecular structure using methods such as small-angle X-ray and neutron scattering (SAXS and SANS). SAXS and SANS are useful for determining macromolecular shape in solution, a process which benefits from atomistic models that reproduce the scattering data. The algorithms currently available for creating and modifying model DNA structures lack the ability to rapidly modify all-atom models to generate structure ensembles. This article describes a Monte Carlo algorithm for simulating DNA, not with the goal of predicting an equilibrium structure, but rather to generate an ensemble of plausible structures which can be filtered using experimental results to identify a sub-ensemble of conformations that reproduce the solution scattering of DNA macromolecules. The algorithm generates an ensemble of atomic structures through an iterative cycle in which B-DNA is represented using a wormlike bead-rod model, new configurations are generated by sampling bend and twist moves, and atomic detail is then recovered by back mapping from the final coarse-grained configuration. Using this algorithm on commodity computing hardware, one can rapidly generate an ensemble of atomic-level models, each model representing a physically realistic configuration that could be further studied using molecular dynamics. © 2016 Wiley Periodicals, Inc. PMID:27671358
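The bend-move part of such a coarse-grained sampler can be sketched as a Metropolis walk over the bend angles of a bead-rod chain with harmonic bending energy. The stiffness and step size below are illustrative rather than fitted to B-DNA, and the twist moves and atomistic back mapping of the full algorithm are omitted:

```python
import math
import random

# Minimal sketch: the chain is parametrized by the bend angles between
# successive rods, with elastic energy E = (k/2) * sum(theta_i^2) in
# units of kT. Parameter values below are illustrative only.
K_BEND = 20.0      # bending stiffness (kT per rad^2)
N_ANGLES = 50      # number of internal bend angles

def mc_bend_sweep(thetas, rng, step=0.2):
    """One Metropolis sweep over all bend angles."""
    for i in range(len(thetas)):
        old = thetas[i]
        new = old + step * (2.0 * rng.random() - 1.0)
        d_e = 0.5 * K_BEND * (new * new - old * old)   # energy change in kT
        if d_e <= 0.0 or rng.random() < math.exp(-d_e):
            thetas[i] = new

def mean_sq_bend(n_sweeps=3000, n_equil=500, seed=7):
    """Ensemble average of theta^2 after equilibration."""
    rng = random.Random(seed)
    thetas = [0.0] * N_ANGLES
    samples = []
    for sweep in range(n_sweeps):
        mc_bend_sweep(thetas, rng)
        if sweep >= n_equil:
            samples.append(sum(t * t for t in thetas) / len(thetas))
    return sum(samples) / len(samples)

# Equipartition check: <theta^2> should approach 1/K_BEND = 0.05.
print(mean_sq_bend())
```

Equipartition for the harmonic energy gives ⟨θ²⟩ = 1/k, which makes a convenient correctness check before any back mapping to atomic detail.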
Energy Technology Data Exchange (ETDEWEB)
Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)
2014-09-15
Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
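The voxel S value technique itself amounts to a 3-D convolution of the time-integrated activity map with a voxel S kernel (dose to a target voxel per decay in a source voxel). A minimal sketch follows; the kernel below is a made-up, illustrative stand-in, not a published S table:

```python
import numpy as np
from scipy.signal import fftconvolve

def voxel_s_dose(activity, kernel):
    """Absorbed-dose map as the 3-D convolution of the time-integrated
    activity map with a voxel S kernel."""
    return fftconvolve(activity, kernel, mode="same")

# Toy 3-D activity distribution: a single hot voxel.
activity = np.zeros((9, 9, 9))
activity[4, 4, 4] = 100.0          # time-integrated activity (a.u.)

# Hypothetical S kernel (a.u. per decay): self-dose term dominates,
# with small contributions to the six face-adjacent voxels.
kernel = np.zeros((3, 3, 3))
kernel[1, 1, 1] = 0.7
kernel[[0, 2], 1, 1] = kernel[1, [0, 2], 1] = kernel[1, 1, [0, 2]] = 0.05

dose = voxel_s_dose(activity, kernel)
print(dose[4, 4, 4])   # self-dose of the hot voxel: 100 * 0.7 = 70
```

Unlike a full Monte Carlo calculation, the convolution assumes a spatially invariant kernel in a homogeneous medium, which is exactly why the two methods can diverge near tissue inhomogeneities.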
A Cross-Section Adjustment Method for Double Heterogeneity Problem in VHTGR Analysis
International Nuclear Information System (INIS)
Very High Temperature Gas-Cooled Reactors (VHTGRs) draw strong interest as candidates for a Gen-IV reactor concept, in which TRISO (tristructural-isotropic) fuel is employed to enhance the fuel performance. However, randomly dispersed TRISO fuel particles in a graphite matrix induce the so-called double heterogeneity problem. For design and analysis of such reactors with the double heterogeneity problem, the Monte Carlo method is widely used due to its complex-geometry and continuous-energy capabilities. However, its huge computational burden, even with modern high computing power, still makes whole-core analysis in the reactor design procedure problematic. To address the double heterogeneity problem using conventional lattice codes, the RPT (Reactivity-equivalent Physical Transformation) method considers a homogenized fuel region that is geometrically transformed to provide an equivalent self-shielding effect. Another method is the coupled Monte Carlo/Collision Probability method, in which the absorption and nu-fission resonance cross-section libraries in the deterministic CPM3 lattice code are modified group-wise by double heterogeneity factors determined from Monte Carlo results. In this paper, a new two-step Monte Carlo homogenization method is described as an alternative to those methods. In the new method, a single cross-section adjustment factor is introduced to provide a self-shielding effect equivalent to the self-shielding in the heterogeneous geometry for a unit cell of fuel compact. Then, the homogenized fuel compact material with the equivalent cross-section adjustment factor is used in continuous-energy Monte Carlo calculations for various types of fuel blocks (or assemblies). The procedure of cross-section adjustment is implemented in the MCNP5 code
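The adjustment-factor search can be illustrated with a one-group toy model: scale the resonance absorption of the homogenized compact until its k-infinity matches a heterogeneous reference value. All cross sections and the reference value below are invented for illustration:

```python
# Toy one-group illustration of a single cross-section adjustment
# factor. The numbers are invented, not VHTGR data.
NU_SIG_F = 0.0280      # nu * fission cross section (1/cm)
SIG_A_SMOOTH = 0.0150  # non-resonant absorption (1/cm)
SIG_A_RES = 0.0080     # unshielded resonance absorption (1/cm)
K_REF = 1.32           # reference k-infinity from a heterogeneous model

def k_inf(alpha):
    """One-group k-infinity with the resonance absorption scaled by alpha."""
    return NU_SIG_F / (SIG_A_SMOOTH + alpha * SIG_A_RES)

def find_adjustment(lo=0.0, hi=1.0, tol=1e-10):
    """Bisection: k_inf decreases monotonically with alpha."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_inf(mid) > K_REF:
            lo = mid      # too reactive -> need more absorption
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = find_adjustment()
print(round(alpha, 4), round(k_inf(alpha), 6))   # alpha ≈ 0.7765 here
```

An alpha below one mimics self-shielding: the randomly dispersed particles see less resonance absorption than the unshielded homogenized mixture would predict. In the actual method the factor is determined against a heterogeneous Monte Carlo unit-cell calculation rather than a one-group formula.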
International Nuclear Information System (INIS)
A collaboration between the EURADOS working group on 'Internal Dosimetry' and the United States Transuranium and Uranium Registries (USTUR) has taken place to carry out an intercomparison of measurements and Monte Carlo modelling for determining americium deposited in the bone of a USTUR leg phantom. Preliminary results and conclusions of this intercomparison exercise are presented here. (authors)
Carlos Gardel, el patrimonio que sonrie
Directory of Open Access Journals (Sweden)
María Julia Carozzi
2003-10-01
Full Text Available The article analyses one of the ways in which the inhabitants of Buenos Aires conceive that which is memorable, a source of positive identification, and an origin of feelings of communitas, by examining their commemoration of the 68th anniversary of the death of Carlos Gardel. It underscores the central role that miracles, mimesis and direct bodily contact play in the preservation of the memory of the star, who incarnates both the tango and its worldwide success. The case of Gardel is presented as an example of the centrality that real persons of extraordinary value have in the organization of local memory and collective identity. Since these memories are embedded in concrete human bodies, they reveal problems in the local adoption of globally accepted concepts of historical and cultural heritage.
Recent advances and future prospects for Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B [Los Alamos National Laboratory
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
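The fission-matrix idea behind the acceleration can be sketched as a power iteration on a tallied matrix F, where F[i][j] is the expected number of next-generation fission neutrons born in region i per fission neutron started in region j; its dominant eigenpair gives k-effective and the converged source shape directly. The 4-region matrix below is invented rather than tallied:

```python
import numpy as np

# Invented 4-region fission matrix (illustration only): in practice F
# is tallied during the Monte Carlo cycles and its dominant eigenvector
# replaces the slowly converging cycle-to-cycle fission source.
F = np.array([
    [0.60, 0.20, 0.05, 0.01],
    [0.20, 0.55, 0.20, 0.05],
    [0.05, 0.20, 0.55, 0.20],
    [0.01, 0.05, 0.20, 0.60],
])

def dominant_mode(F, iters=200):
    """Power iteration: returns (k_eff estimate, normalized source shape)."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(iters):
        s_new = F @ s
        k = s_new.sum() / s.sum()    # generation-to-generation ratio
        s = s_new / s_new.sum()      # keep the source normalized
    return k, s

k, source = dominant_mode(F)
print(k, source)   # k is the dominant eigenvalue, ~0.9396 for this matrix
```

In the accelerated scheme the statistical noise in the tallied F is what must be filtered before its eigenvector can be used without biasing the calculation, which is the role of the filtering step described above.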
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.
Chow, James C L; Leung, Michael K K
2008-06-01
The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), defined as the ratio of the dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2% to 0.9% for electron beam energies of 4 to 12 MeV, respectively. The decrease of this underestimation with increasing electron energy can be explained by the small MOSFET dosimeter, made mainly of epoxy and silicon, attenuating not only the electron fluence of the beam from upstream but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is within the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out, and uncertainties compared to Monte Carlo results were within +/- 2%. Spectra of the energy deposited by the backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body is present or absent in the simulation, deviations of the electron energy spectra with and without the lead decrease with increasing electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduction of the MOSFET response due to stronger
A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.
Chew, Gina; Walczyk, Thomas
2012-03-01
Despite the importance of stating the measurement uncertainty in chemical analysis, concepts are still not widely applied by the broader scientific community. The Guide to the expression of uncertainty in measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity. This requires some mathematical skills and can be tedious if the mathematical model is complex. Secondly, it is not able to predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet software program such as Microsoft Excel. It is our aim to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
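The spreadsheet procedure the authors describe maps directly onto a few lines of code: draw each input quantity from its assigned distribution, evaluate the measurement model row by row, and read the standard uncertainty and coverage interval off the resulting sample. The measurement model and all numbers below are invented for illustration; they are not the authors' worked example:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 200_000  # number of Monte Carlo trials

# Hypothetical measurement model: mass fraction w = c * V / m, with
# illustrative estimates and standard uncertainties for each input.
c = rng.normal(10.0, 0.2, M)      # concentration, mg/L (normal)
V = rng.normal(0.100, 0.001, M)   # volume, L (normal)
m = rng.uniform(0.995, 1.005, M)  # mass, g (rectangular distribution)

w = c * V / m  # output quantity, mg/g, evaluated once per trial

y = w.mean()                            # estimate of the measurand
u = w.std(ddof=1)                       # standard uncertainty
lo, hi = np.percentile(w, [2.5, 97.5])  # 95 % coverage interval
print(f"w = {y:.4f} mg/g, u(w) = {u:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```

No derivatives are needed, and the percentile step delivers the coverage interval even when the output distribution is visibly non-normal, which is exactly the advantage over the partial derivative approach that the abstract emphasises.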
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Energy Technology Data Exchange (ETDEWEB)
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ²_s·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
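A minimal sketch of the technique being costed above: geometric splitting at internal surfaces of a purely absorbing slab, with each particle's weight divided by the splitting ratio so the transmission tally stays unbiased. The geometry, splitting ratio, and history count are invented, and Russian roulette (applied to particles moving toward regions of lower importance) is omitted for brevity:

```python
import math
import random

def transmit(n_hist, n_cells, mfp_total, split=2, seed=1):
    """Estimate transmission through a purely absorbing slab of
    `mfp_total` mean free paths, splitting each particle `split`-fold
    (weight divided by `split`) at every internal cell boundary."""
    rng = random.Random(seed)
    dx = mfp_total / n_cells  # cell thickness in mean free paths
    tally = 0.0
    for _ in range(n_hist):
        stack = [(0, 1.0)]  # (current cell, statistical weight)
        while stack:
            cell, w = stack.pop()
            d = -math.log(rng.random())  # free path, exponential in mfp
            if d >= dx:  # crossed the cell without colliding
                if cell + 1 == n_cells:
                    tally += w  # escaped through the far face
                else:
                    for _ in range(split):  # split at the internal surface
                        stack.append((cell + 1, w / split))
            # else: collision in a pure absorber kills the particle
    return tally / n_hist

est = transmit(20_000, n_cells=10, mfp_total=5.0)
print(f"estimated transmission = {est:.3e} (exact {math.exp(-5):.3e})")
```

Because splitting keeps the particle population roughly constant as the analog population decays, the deep-penetration tally is fed by many low-weight contributions instead of a handful of rare survivors, which is precisely the variance-versus-time trade-off the abstract's cost analysis optimises.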
Monte Carlo wave packet approach to dissociative multiple ionization in diatomic molecules
DEFF Research Database (Denmark)
Leth, Henriette Astrup; Madsen, Lars Bojer; Mølmer, Klaus
2010-01-01
A detailed description of the Monte Carlo wave packet technique applied to dissociative multiple ionization of diatomic molecules in short intense laser pulses is presented. The Monte Carlo wave packet technique relies on the Born-Oppenheimer separation of electronic and nuclear dynamics … separately for each molecular charge state. Our model circumvents the solution of a multiparticle Schrödinger equation and makes it possible to extract the kinetic energy release spectrum via the Coulomb explosion channel as well as the physical origin of the different structures in the spectrum. … The computational effort is restricted and the model is applicable to any molecular system where electronic Born-Oppenheimer curves, dipole moment functions, and ionization rates as a function of nuclear coordinates can be determined.
Bourva, L C A
1999-01-01
The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
Simulation model based on Monte Carlo method for traffic assignment in local area road network
Institute of Scientific and Technical Information of China (English)
Yuchuan DU; Yuanjing GENG; Lijun SUN
2009-01-01
For a local area road network, the available traffic data are the flow volumes at key intersections, not a complete OD (origin-destination) matrix. Considering the circumstantial characteristics and data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability in temporal sequence, the turning ratio is adopted as the key parameter of this model. The formulation for local area road network assignment problems is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. The results, comparing surveyed traffic flow data with the flows determined by the proposed model, verify the applicability and validity of the proposed methodology.
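The random-turning assumption can be sketched directly: each simulated vehicle enters the network and, at every intersection, picks an outgoing link with probability equal to that intersection's turning ratio; link flows are the scaled counts. The three-node network, turning ratios, and entry volume below are invented for illustration, not taken from the paper:

```python
import random

# Hypothetical toy network: at each node, turning ratios give the
# probability of each outgoing link; 'END' means the vehicle leaves
# the study area. All numbers are illustrative.
turning = {
    'A': {'B': 0.6, 'C': 0.4},
    'B': {'C': 0.3, 'END': 0.7},
    'C': {'END': 1.0},
}

def assign(entry_volumes, n_samples=100_000, seed=1):
    """Monte Carlo traffic assignment: each simulated vehicle walks the
    network according to turning ratios; link flows are scaled counts."""
    rng = random.Random(seed)
    counts = {}
    total_in = sum(entry_volumes.values())
    for _ in range(n_samples):
        # choose an entry node proportionally to its entry volume
        r = rng.random() * total_in
        for node, vol in entry_volumes.items():
            r -= vol
            if r <= 0:
                break
        while node != 'END':
            nxt = rng.choices(list(turning[node]),
                              weights=list(turning[node].values()))[0]
            counts[(node, nxt)] = counts.get((node, nxt), 0) + 1
            node = nxt
    scale = total_in / n_samples
    return {link: c * scale for link, c in counts.items()}

flows = assign({'A': 1000})  # 1000 veh/h entering at node A
print(flows)
```

With 1000 vehicles entering at A, the expected assigned flows are A→B 600, A→C 400, B→END 420, and so on; the Monte Carlo counts converge to these values as the sample size grows.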
Monte Carlo simulation applied in total reflection x-ray fluorescence: Preliminary results
Energy Technology Data Exchange (ETDEWEB)
Meira, Luiza L. C.; Inocente, Guilherme F.; Vieira, Leticia D.; Mesa, Joel [Departamento de Fisica e Biofisica - Instituto de Biociencias de Botucatu, Universidade Estadual Paulista Julio de Mesquita Filho (Brazil)
2013-05-06
The X-ray Fluorescence (XRF) analysis is a technique for the qualitative and quantitative determination of chemical constituents in a sample. This method is based on the detection of the characteristic radiation intensities emitted by the elements of the sample when properly excited. A variant of this technique is Total Reflection X-ray Fluorescence (TXRF), which utilizes electromagnetic radiation as the excitation source. In total reflection of X-rays, the angle of refraction of the incident beam tends to zero and the refracted beam is tangent to the sample-support interface. Thus, there is a minimum angle of incidence at which no refracted beam exists and all incident radiation undergoes total reflection. In this study, we evaluated the influence of the energy variation of the incident X-ray beam using the MCNPX (Monte Carlo N-Particle eXtended) code, which is based on the Monte Carlo method.
Study of nuclear pairing with Configuration-Space Monte-Carlo approach
Lingle, Mark
2015-01-01
Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control, are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with non-constant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and pr...
Monte Carlo calculations for gamma-ray mass attenuation coefficients of some soil samples
International Nuclear Information System (INIS)
Highlights: • Gamma-ray mass attenuation coefficients of soils. • Radiation shielding properties of soil. • Comparison of calculated results with the theoretical and experimental ones. • The method can be applied to various media. - Abstract: We developed a simple Monte Carlo code to determine the mass attenuation coefficients of some soil samples at nine different gamma-ray energies (59.5, 80.9, 122.1, 159.0, 356.5, 511.0, 661.6, 1173.2 and 1332.5 keV). Results of the Monte Carlo calculations have been compared with tabulations based upon the results of photon cross section database (XCOM) and with experimental results by other researchers for the same samples. The calculated mass attenuation coefficients were found to be very close to the theoretical values and the experimental results
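The core of such a "simple Monte Carlo code" can be sketched in narrow-beam geometry: sample each photon's free path from the exponential distribution, count the photons that cross the slab without interacting, and recover the attenuation coefficient from the Beer-Lambert law. The coefficient and thickness below are invented round numbers, not the paper's soil data:

```python
import math
import random

def mc_attenuation(mu_true, thickness_cm, n_photons=200_000, seed=7):
    """Narrow-beam Monte Carlo: sample photon free paths with an assumed
    linear attenuation coefficient mu_true (1/cm), count uncollided
    transmissions, and recover mu from T = exp(-mu * x)."""
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n_photons)
        if -math.log(rng.random()) / mu_true > thickness_cm
    )
    T = transmitted / n_photons
    return -math.log(T) / thickness_cm  # recovered mu (1/cm)

mu = mc_attenuation(mu_true=0.20, thickness_cm=5.0)
print(f"recovered mu = {mu:.4f} /cm (true value 0.2000)")
```

Dividing the recovered linear coefficient by the material density gives the mass attenuation coefficient μ/ρ that the paper compares against XCOM tabulations.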
The Local Hybrid Monte Carlo algorithm for free field theory: Reexamining overrelaxation
International Nuclear Information System (INIS)
We analyze the autocorrelations for the local hybrid Monte Carlo algorithm (A. D. Kennedy, 1993) in the context of free field theory. In this case this is just Adler's overrelaxation algorithm (S. L. Adler, 1981). We consider the algorithm with even/odd, lexicographic, and random updates, and show that its efficiency depends crucially on this ordering of sites when optimized for a given class of operators. In particular, we show that, contrary to previous expectations, it is possible to eliminate critical slowing down (z_int = 0) for a class of interesting observables, including the magnetic susceptibility: this can be done with lexicographic updates but is not possible with even/odd (z_int = 1) or random (z_int = 2) updates. We are considering the dynamical critical exponent z_int for integrated autocorrelations rather than for the exponential autocorrelation time; this is reasonable because it is the integrated autocorrelation which determines the cost of a Monte Carlo computation. (orig.)
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskii, D. E.
2015-11-01
The Monte Carlo study of three-layer and spin-valve magnetic structures with giant magnetoresistance effects has been performed with the application of the Heisenberg anisotropic model to the description of the magnetic properties of thin ferromagnetic films. The dependences of the magnetic characteristics on the temperature and external magnetic field have been obtained for the ferromagnetic and antiferromagnetic configurations of these structures. A Monte Carlo method for determining the magnetoresistance coefficient has been developed. The magnetoresistance coefficient has been calculated for three-layer and spin-valve magnetic structures at various thicknesses of ferromagnetic films. It has been shown that the calculated temperature dependence of the magnetoresistance coefficient is in good agreement with experimental data obtained for the Fe(001)/Cr(001) multilayer structure and the CFAS/Ag/CFAS/IrMn spin valve based on the Co2FeAl0.5Si0.5 (CFAS) Heusler alloy.
Prudnikov, V. V.; Prudnikov, P. V.; Romanovskiy, D. E.
2016-06-01
A Monte Carlo study of trilayer and spin-valve magnetic structures with giant magnetoresistance effects is carried out. The anisotropic Heisenberg model is used for description of magnetic properties of ultrathin ferromagnetic films forming these structures. The temperature and magnetic field dependences of magnetic characteristics are considered for ferromagnetic and antiferromagnetic configurations of these multilayer structures. The methodology for determination of the magnetoresistance by the Monte Carlo method is introduced; this permits us to calculate the magnetoresistance of multilayer structures for different thicknesses of the ferromagnetic films. The calculated temperature dependence of the magnetoresistance agrees very well with the experimental results measured for the Fe(001)–Cr(001) multilayer structure and CFAS–Ag–CFAS–IrMn spin-valve structure based on the half-metallic Heusler alloy Co2FeAl0.5Si0.5.
Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)
2014-05-15
Identification and demining of landmines is a very important issue for the safety of people and for economic development. To address the issue, several methods have been proposed in the past. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as a part of the complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. A Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To consider the soil effect, the average soil composition was analyzed and applied to the calculation. These results have been used to determine the specification of the landmine detector.
Harries, Tim J
2015-01-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...
Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2015-01-01
Objective: Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVL) of the simulated filtered spectra were compared with those obtained experimentally with a solid state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Results: Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated HVL values and the experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553
Shi, Wei-Yu; Su, Li-Jun; Song, Yi; Ma, Ming-Guo; Du, Sheng
2015-10-01
The soil CO2 emission is recognized as one of the largest fluxes in the global carbon cycle. Small errors in its estimation can result in large uncertainties and have important consequences for climate model predictions. The Monte Carlo approach is efficient for estimating and reducing spatial-scale sampling errors, but it has not been used in soil CO2 emission studies. Here, soil respiration data from 51 PVC collars were measured within farmland cultivated with maize covering 25 km(2) during the growing season. Based on the Monte Carlo approach, optimal sample sizes of soil temperature, soil moisture, and soil CO2 emission were determined, and models of soil respiration could be effectively assessed: among the three models considered, the soil temperature model was the most effective at increasing accuracy. The study demonstrated that the Monte Carlo approach may improve soil respiration accuracy with a limited sample size, which will be valuable for reducing uncertainties in the global carbon cycle. PMID:26664693
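The sample-size analysis behind such a study can be sketched by Monte Carlo resampling: repeatedly draw subsamples of n collars from the full survey, and track how the error of the subsample mean shrinks as n grows. The flux values below are synthetic lognormal draws standing in for the paper's 51-collar data set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soil CO2 efflux measurements from 51 collars
# (umol m-2 s-1); synthetic values, not the paper's data.
flux = rng.lognormal(mean=1.0, sigma=0.4, size=51)
true_mean = flux.mean()  # treat the full survey as the reference

def relative_error(n, trials=5000):
    """Monte Carlo resampling: draw n collars at random many times and
    return the mean absolute relative error of the subsample mean."""
    sub = rng.choice(flux, size=(trials, n), replace=True)
    return np.abs(sub.mean(axis=1) - true_mean).mean() / true_mean

for n in (5, 15, 30, 51):
    print(f"n = {n:2d}: mean relative error = {relative_error(n):.3f}")
```

The error falls roughly as 1/sqrt(n), so the curve shows directly where adding collars stops paying off, which is how an "optimal sample size" for a target accuracy can be read from limited field data.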
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
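The book's opening example, computing a definite integral by statistical trials, fits in a few lines: average the integrand at uniformly sampled points and scale by the interval length. The integrand, interval, and sample count below are chosen only to make the sketch self-checking:

```python
import math
import random

def mc_integral(f, a, b, n=100_000, seed=3):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly sampled points: the textbook statistical-trials estimator."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(math.sin, 0.0, math.pi)  # exact value is 2
print(f"integral of sin on [0, pi] ~= {est:.4f}")
```

The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why the same idea scales to the multi-dimensional integrals and neutron-physics problems the book goes on to treat.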
Finding Planet Nine: a Monte Carlo approach
Marcos, C de la Fuente
2016-01-01
Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistic...
Monte Carlo Production Management at CMS
Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni
2015-01-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities, and the extension of its capabi...
Monte Carlo simulations for focusing elliptical guides
Energy Technology Data Exchange (ETDEWEB)
Valicu, Roxana [FRM2 Garching, Muenchen (Germany); Boeni, Peter [E20, TU Muenchen (Germany)
2009-07-01
The aim of the Monte Carlo simulations using the McStas program was to improve the focusing of the neutron beam existing at PGAA (FRM II) by prolongation of the existing elliptic guide (now coated with supermirrors with m = 3) with a new part. First we tried an initial length of the additional guide of 7.5 cm and neutron-guide coatings of supermirrors with m = 4, 5, and 6. The gain (calculated by dividing the intensity at the focal point after adding the guide by the intensity at the focal point with the initial guide) obtained for these coatings indicated that a coating with m = 5 would be appropriate for a first trial. The next step was to vary the length of the additional guide for this m value, thereby choosing the appropriate length for the maximal gain. With the m value and the length of the guide fixed, we introduced an aperture 1 cm before the focal point and varied the radius of this aperture in order to obtain a focused beam. We observed a dramatic decrease in the size of the beam at the focal point after introducing this aperture. The simulation results, the gains obtained, and the evolution of the beam size will be presented.
Diffusion Monte Carlo in internal coordinates.
Petit, Andrew S; McCoy, Anne B
2013-08-15
An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H(3)(+) and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H(2)D(+) and D(2)H(+) despite both molecules being highly fluxional.
Commensurabilities between ETNOs: a Monte Carlo survey
de la Fuente Marcos, C.; de la Fuente Marcos, R.
2016-07-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ~700 au.
Monte Carlo simulations for heavy ion dosimetry
Energy Technology Data Exchange (ETDEWEB)
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s_w,air) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, the influence of fragments and I-values on s_w,air for carbon ion beams was investigated. The value of s_w,air deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
Monte Carlo study of nanowire magnetic properties
Institute of Scientific and Technical Information of China (English)
R.Masrour; L.Bahmad; A.Benyoussef
2013-01-01
In this work, we use Monte Carlo simulations to study the magnetic properties of a nanowire system based on a honeycomb lattice, in the absence as well as in the presence of both an external magnetic field and a crystal field. The system is formed with NL layers having spins that can take the values σ = ±1/2 and S = ±1, 0. The blocking temperature is deduced, for each spin configuration, depending on the crystal field A. The effect of the exchange interaction coupling Jp between the spin configurations σ and S is studied for different values of temperature at fixed crystal field. The established ground-state phase diagram, in the plane (Jp, A), shows that the only stable configurations are (1/2, 0), (1/2, +1), and (1/2, -1). The thermal magnetization and susceptibility are investigated for the two spin configurations, in the absence as well as in the presence of a crystal field. Finally, we establish the hysteresis cycle for different temperature values, showing that there is almost no remanent magnetization in the absence of the external magnetic field, and that the studied system exhibits super-paramagnetic behavior.
Markov Chain Monte Carlo and Irreversibility
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as its invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and discuss some analytical methods for approaching the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics, and it is well known that the resulting discretized chain will not, in general, retain all the good properties of the process from which it is obtained; in particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
Monte Carlo Production Management at CMS
Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.
2015-12-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities, and the extension of its capability to monitor the status and advancement of the event production.
Measuring Berry curvature with quantum Monte Carlo
Kolodrubetz, Michael
2014-01-01
The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well-studied in the regimes of few-body physics and non-interacting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to compute it. In this paper we fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as a leading correction to imaginary time ramps. We demonstrate our algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is non-integrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, we show that our algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians...
Monte Carlo method application to shielding calculations
International Nuclear Information System (INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so two concerns must be addressed: tracking the fuel reactivity in order to prevent critical mass formation, and protecting personnel during spent fuel handling. The basic tasks accomplished by the shielding calculations in a nuclear safety analysis consist of dose rate calculations intended to prevent any risk, both for personnel protection and for the impact on the environment, during spent fuel handling, transport and storage. To perform photon dose rate calculations, the Monte Carlo MORSE-SGC code incorporated in the SAS4 sequence of the SCALE system was used. The objective of the paper was to obtain the photon dose rates at the spent fuel transport cask wall, in both radial and axial directions. One spent CANDU fuel bundle was used as the radiation source. All the geometrical and material data related to the transport cask were considered according to the type B shipping cask model, whose prototype has been built and tested at the Institute for Nuclear Research Pitesti. (authors)
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. Examples include a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, which simulate the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
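The importance sampling idea mentioned above can be illustrated with a minimal, self-contained sketch (not taken from the book): estimating the small tail probability P(X > 4) for a standard normal variable by sampling from a proposal shifted into the rare region and reweighting each sample with the likelihood ratio. The function name and parameters are illustrative choices.

```python
import math
import random

def rare_event_is(threshold=4.0, n=100_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling.

    Samples are drawn from the shifted proposal N(threshold, 1), which puts
    most of its mass in the rare region; each sample is reweighted by the
    likelihood ratio of the target N(0,1) to the proposal N(threshold,1).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # proposal centred on the rare region
        if x > threshold:
            # likelihood ratio exp(-x^2/2) / exp(-(x - threshold)^2/2)
            total += math.exp(-x * x / 2.0 + (x - threshold) ** 2 / 2.0)
    return total / n

estimate = rare_event_is()
```

A crude (analog) estimator would need on the order of a billion samples to see this event a few dozen times; the reweighted estimator resolves it with 10^5 samples. The exact value is Φ(-4) ≈ 3.17e-5.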
Parallel Monte Carlo simulation of aerosol dynamics
Zhou, K.
2014-01-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
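The operator-splitting strategy outlined above (deterministic growth, then stochastic coagulation) can be sketched in a deliberately simplified serial form. The function, the constant-kernel pairwise coagulation rule, and all parameters below are illustrative assumptions, not the paper's algorithm; the sketch only demonstrates that the split step conserves total particle volume under coagulation.

```python
import random

def aerosol_split_step(volumes, growth_rate, coag_prob, dt, rng):
    """One operator-splitting step on a population of particle volumes."""
    # Deterministic sub-step: every particle grows at the same volumetric rate.
    volumes = [v + growth_rate * dt for v in volumes]
    # Stochastic sub-step: constant-kernel pairwise coagulation; each
    # candidate pair merges with probability coag_prob.
    rng.shuffle(volumes)
    merged, i = [], 0
    while i + 1 < len(volumes):
        if rng.random() < coag_prob:
            merged.append(volumes[i] + volumes[i + 1])  # merging conserves volume
            i += 2
        else:
            merged.append(volumes[i])
            i += 1
    merged.extend(volumes[i:])  # possible leftover particle
    return merged

rng = random.Random(9)
vols = [1.0] * 1000
expected_total = sum(vols) + 1.0 * 0.1 * len(vols)  # growth adds g*dt per particle
vols = aerosol_split_step(vols, growth_rate=1.0, coag_prob=0.3, dt=0.1, rng=rng)
```

After the step, the particle count has dropped (coagulation) while the total volume matches the grown total exactly, which is the basic sanity check for any coagulation scheme.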
Directory of Open Access Journals (Sweden)
Marcos Roberto Gois de Oliveira
2013-01-01
Carlo to the conventional deterministic valuation model, thereby developing a stochastic model that, as such, permits a statistical analysis of risk. The objective of this work was to evaluate the pertinence of using the Monte Carlo simulation technique to measure the uncertainties inherent in the discounted cash flow method of company valuation, identifying whether this simulation methodology increases the accuracy of discounted cash flow company valuation. The results of this study demonstrate the operational effectiveness of using Monte Carlo simulation in discounted cash flow company valuation, confirming that the quality of the results obtained through this simulation methodology showed a relevant improvement over the results obtained with the deterministic valuation model.
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, while these methods reduce the variance in the problem area of interest, they tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation
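Of the variance reduction techniques named above, Russian roulette is the simplest to demonstrate. The sketch below is a generic illustration, not code from the paper: particles whose statistical weight falls below a threshold are killed with some probability, and survivors have their weight rescaled so that the expected weight, and hence any tally, remains unbiased.

```python
import random

def russian_roulette(weight, w_threshold=0.1, survival=0.5, rng=random):
    """Russian roulette on a single particle weight.

    Below the threshold, the particle survives with probability `survival`
    and its weight is divided by that probability, so the expected
    post-roulette weight equals the input weight (unbiasedness).
    """
    if weight >= w_threshold:
        return weight                # heavy particles pass through unchanged
    if rng.random() < survival:
        return weight / survival     # survivor absorbs the killed weight
    return 0.0                       # particle terminated

# Empirical unbiasedness check: the mean post-roulette weight of many
# low-weight particles should match the input weight.
rng = random.Random(42)
w_in = 0.05
n = 200_000
mean_out = sum(russian_roulette(w_in, rng=rng) for _ in range(n)) / n
```

The payoff is computational: half of the low-weight histories are terminated early, while the tally mean is preserved; only its variance changes.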
Monte Carlo simulation of electron swarms in H2
International Nuclear Information System (INIS)
A Monte Carlo simulation of the motion of an electron swarm in molecular hydrogen was studied in the range E/N = 1.4-170 Td (1 Td = 10^-17 V cm^2). The simulation was performed for 400-600 electrons at several values of E/N, for two different sets of inelastic collision cross sections at high values of E/N. The longitudinal diffusion coefficient D_L, lateral diffusion coefficient D, swarm drift velocity W, average swarm energy epsilon, and the ionization and excitation production coefficients were obtained and compared with experimental results where these are available. It was found that the results obtained differ significantly from the experimental values; this is attributed to the isotropic scattering model used in this work. However, the results lend support to the experimental technique reported by Blevin et al used to determine these transport parameters, and in particular confirm their result that D_L > D at high values of E/N. (author)
KENO V: the newest KENO Monte Carlo criticality program
International Nuclear Information System (INIS)
KENO V is a new multigroup Monte Carlo criticality program developed in the tradition of KENO and KENO IV for use in the SCALE system. The primary purpose of KENO V is to determine k-effective. Other calculated quantities include lifetime and generation time, energy-dependent leakages, energy- and region-dependent absorptions, fissions, fluxes, and fission densities. KENO V combines many of the efficient performance capabilities of KENO IV with improvements such as flexible data input, the ability to specify origins for cylindrical and spherical geometry regions, the capability of supergrouping energy-dependent data, a P_n scattering model in the cross sections, a procedure for matching lethargy boundaries between albedos and cross sections to extend the usefulness of the albedo feature, and improved restart capabilities. This advanced user-oriented program combines simplified data input and efficient computer storage allocation to readily solve large problems whose computer storage requirements precluded solution when using KENO IV. 2 figures, 1 table
GPU based Monte Carlo for PET image reconstruction: parameter optimization
International Nuclear Information System (INIS)
This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphics Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for the effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)
Monte Carlo shell model for ab initio nuclear structure
International Nuclear Information System (INIS)
The Monte Carlo Shell Model (MCSM) has been developed mainly for conventional shell-model calculations with an assumed inert core. Recently the algorithm and code itself have been heavily revised and rewritten so as to accommodate massively parallel computing environments. Now we can apply the MCSM not only to conventional shell-model calculations but also to no-core calculations. The MCSM approach proceeds through a sequence of diagonalization steps within the Hilbert subspace spanned by the deformed Slater determinants in the HO single-particle basis. Importance-truncated bases are stochastically sampled so as to minimize the energy variationally. By increasing the number of importance-truncated basis states, the computed energy converges from above to the exact value and gives a variational upper bound. In benchmark calculations, there is good agreement in p-shell nuclei between the results of the MCSM and of the FCI (Full Configuration Interaction) method. The N(shell)=5 results reveal the onset of a systematic convergence pattern. Further work is needed to investigate the extrapolation to the infinite basis space in the N(shell) truncation
Modelling laser light propagation in thermoplastics using Monte Carlo simulations
Parkinson, Alexander
Laser welding has great potential as a fast, non-contact joining method for thermoplastic parts. In the laser transmission welding of thermoplastics, light passes through a semi-transparent part to reach the weld interface. There, it is absorbed as heat, which causes melting and subsequent welding. The distribution and quantity of light reaching the interface are important for predicting the quality of a weld, but are experimentally difficult to estimate. A model for simulating the path of this laser light through these light-scattering plastic parts has been developed. The technique uses a Monte Carlo approach to generate photon paths through the material, accounting for absorption, scattering and reflection between boundaries in the transparent polymer. It was assumed that any light escaping the bottom surface contributed to welding. The photon paths are then scaled according to the input beam profile in order to simulate non-Gaussian beam profiles. A method for determining the 3 independent optical parameters to accurately predict transmission and beam power distribution at the interface was established using experimental data for polycarbonate at 4 different glass fibre concentrations and polyamide-6 reinforced with 20% long glass fibres. Exit beam profiles and transmissions predicted by the simulation were found to be in generally good agreement (R^2 > 0.90) with experimental measurements. The simulations allowed the prediction of transmission and power distributions at other thicknesses, as well as information on reflection and energy absorption, for these materials.
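The photon-path Monte Carlo approach described above can be illustrated with a deliberately simplified 1-D sketch (the function and its parameters are hypothetical; the actual model is three-dimensional with experimentally fitted optical parameters). Photons take exponentially distributed free paths in a slab, and at each interaction are either absorbed or redirected; in the absorption-only limit the transmitted fraction should recover the Beer-Lambert law.

```python
import math
import random

def slab_transmission(mu_a, mu_s, thickness, n=100_000, seed=7):
    """1-D toy Monte Carlo photon transport through a slab.

    Free path lengths are sampled from an exponential with total
    attenuation coefficient mu_t = mu_a + mu_s. At each interaction the
    photon is absorbed with probability mu_a/mu_t, otherwise it scatters
    (direction +1 or -1 along the slab axis in this 1-D sketch).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n):
        z, direction = 0.0, 1.0
        while True:
            z += direction * (-math.log(1.0 - rng.random()) / mu_t)
            if z >= thickness:          # escaped far side: counts toward welding
                transmitted += 1
                break
            if z <= 0.0:                # back-scattered out of the entry surface
                break
            if rng.random() < mu_a / mu_t:
                break                   # absorbed inside the slab
            direction = 1.0 if rng.random() < 0.5 else -1.0
    return transmitted / n

# Pure absorption should reduce to the Beer-Lambert law exp(-mu_a * d).
t_mc = slab_transmission(mu_a=1.0, mu_s=0.0, thickness=2.0)
```

Adding a nonzero mu_s then shows how scattering both attenuates and broadens the beam, which is the effect the glass-fibre concentrations control in the experiments.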
Monte Carlo simulations of intensity profiles for energetic particle propagation
Tautz, R. C.; Bolte, J.; Shalchi, A.
2016-02-01
Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
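The step of determining diffusion coefficients directly from mean-square displacements, as used above, can be sketched for a simple 1-D random walk (a generic illustration with assumed parameters, not the authors' simulation code):

```python
import random

def estimate_diffusion(n_particles=5_000, n_steps=200, dt=1.0, seed=3):
    """Recover a 1-D diffusion coefficient from the mean-square displacement.

    Particles take independent Gaussian steps of variance 2*D*dt with
    D = 0.5, so after the initial phase the MSD grows as 2*D*t and D is
    recovered as MSD(t) / (2 t).
    """
    rng = random.Random(seed)
    d_true = 0.5
    sigma = (2.0 * d_true * dt) ** 0.5  # step std chosen so MSD(t) = 2*D*t
    msd = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        msd += x * x
    msd /= n_particles
    return msd / (2.0 * n_steps * dt)   # invert MSD = 2*D*t

d_est = estimate_diffusion()
```

In the paper's setting the same MSD-based coefficient is then inserted into the diffusion equation and compared against the simulated intensity profiles; the toy walk above has no ballistic phase, which is precisely the regime where the comparison becomes nontrivial.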
Quantum Monte Carlo for electronic structure: Recent developments and applications
Energy Technology Data Exchange (ETDEWEB)
Rodriquez, M. M.S. [Lawrence Berkeley Lab. and Univ. of California, Berkeley, CA (United States). Dept. of Chemistry
1995-04-01
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done by systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C2H and C2H2. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of its fragments as well as gaining some knowledge on the usefulness of multi-reference wave functions. In a MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet that the source particle comes from, and deposited doses are stored separately for beamlets based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations. Cite this article as: Li Y, Tian Z
Complete Monte Carlo Simulation of Neutron Scattering Experiments
International Nuclear Information System (INIS)
The majority of experiments investigating the elastic scattering of fast neutrons were done some 30 years ago. At that time it was not possible to obtain valid corrections for the finite geometry and the finite sample size of the experimental set-up, not even with the mainframe computers of the Los Alamos National Laboratory at one's disposal. The reason was not only the limited calculation capacity of those ancient computers but also, to an even higher degree, the lack of powerful Monte Carlo codes and the very limited database for the isotope in question. The computing power of a present-day PC is about ten thousand times that of a supercomputer of the 1970s. Moreover, most PCs are idle overnight, so that using a powerful Monte Carlo program, like MCNPX from Los Alamos, corrections of important scattering experiments can be determined reliably at practically no computer cost. Surely one of the most important experiments is neutron scattering from liquid helium-3, especially considering the expensive and complicated cryogenic target. A complete documentation of such an experiment as performed in the year 1971 at the Los Alamos National Laboratory is available. Therefore it is now possible to perform a thorough simulation of the experiment: starting from the production of mono-energetic neutrons in a gas target, followed by the interaction in the ambient air and the interaction with the cryostat structure, and finally the scattering medium itself. Another simulation deals with the scattering from hydrogen as a reference measurement. As two thirds of all available differential scattering cross sections of that reaction depend on these measurements, the newly arrived at corrections prove to be highly significant because they are smaller by a factor of five. Moreover, it was necessary to simulate another experiment on this reaction, using a white neutron source. This way it was possible to convert the corresponding relative yield excitation functions to
Monte Carlo computations of the hadronic mass spectrum
International Nuclear Information System (INIS)
This paper summarizes two talks presented at the Orbis Scientiae Meeting, 1982. Monte Carlo results on the mass gap (or glueball mass) and on the masses of the lightest quark-model hadrons are illustrated
Monte Carlo techniques for analyzing deep penetration problems
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
Carlo Ginzburg: the anomaly points to the norm / interviewed by Marek Tamm
Ginzburg, Carlo, 1939-
2014-01-01
An interview with the Italian historian Carlo Ginzburg on the occasion of the Estonian publication of his book "Ükski saar pole saar: neli pilguheitu inglise kirjandusele globaalsest vaatenurgast" ("No island is an island: four glances at English literature from a global perspective"). The work was published by Tallinn University Press (Tallinna Ülikooli Kirjastus)
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, yet costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.
Suppression of the initial transient in Monte Carlo criticality simulations
International Nuclear Information System (INIS)
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on the characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to discriminate stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests modeled on criticality Monte Carlo calculations, and second on real criticality calculations. Eventually, the best methodologies observed in these tests are selected and allow industrial Monte Carlo criticality calculations to be improved. (author)
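The effect of suppressing the initial transient can be illustrated with a toy cycle k-effective sequence. The exponential source-convergence model and all parameters below are illustrative assumptions in the spirit of the simplified model the abstract mentions, not the paper's actual tests:

```python
import math
import random

def keff_estimates(n_cycles=1000, discard=100, seed=11):
    """Toy cycle k-effective sequence with an initial transient.

    The source converges exponentially from a poor initial guess (k0)
    toward the true value (k_inf), so early cycle estimates are biased;
    discarding them before averaging removes most of the bias.
    """
    rng = random.Random(seed)
    k_inf, k0, tau, noise = 1.0, 0.5, 20.0, 0.01
    ks = [k_inf + (k0 - k_inf) * math.exp(-i / tau) + rng.gauss(0.0, noise)
          for i in range(n_cycles)]
    naive = sum(ks) / len(ks)                       # biased by the transient
    trimmed = sum(ks[discard:]) / (n_cycles - discard)
    return naive, trimmed

naive, trimmed = keff_estimates()
```

The stationarity tests described in the abstract automate the choice of the discard point, rather than fixing it a priori as this sketch does.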
On the Markov Chain Monte Carlo (MCMC) method
Indian Academy of Sciences (India)
Rajeeva L Karandikar
2006-04-01
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
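A minimal example of the method, assuming the common random-walk Metropolis variant: sampling from a density specified only up to a normalizing constant, here pi(x) proportional to exp(-x^2/2), so the samples should follow a standard normal.

```python
import math
import random

def metropolis_normal(n=50_000, step=2.5, seed=0):
    """Random-walk Metropolis sampler for an unnormalized density.

    Target: pi(x) proportional to exp(-x^2 / 2). The unknown normalizing
    constant cancels in the acceptance ratio, which is what makes MCMC
    usable for indirectly specified distributions.
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + rng.uniform(-step, step)        # symmetric proposal
        log_ratio = (x * x - proposal * proposal) / 2.0  # log pi(y) - log pi(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal                               # accept the move
        samples.append(x)                              # rejected moves repeat x
    return samples

samples = metropolis_normal()
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```

The empirical mean and variance approach 0 and 1; note that successive samples are correlated, so the effective sample size is smaller than n.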
An Introduction to Multilevel Monte Carlo for Option Valuation
Higham, Desmond J
2015-01-01
Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers a speed-up of the order of the inverse of epsilon, where epsilon is the required accuracy. Computations can thus run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers, and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option valuation.
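The multilevel idea can be sketched for the simplest possible payoff, E[S_T] itself, under geometric Brownian motion. This is a toy illustration with a fixed number of samples per level, not Giles' adaptive algorithm; all parameter values are assumptions for the demonstration.

```python
import math
import random

def mlmc_gbm_mean(s0=1.0, r=0.05, sigma=0.2, t=1.0, levels=4,
                  n_per_level=20_000, seed=5):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = r S dt + sigma S dW, using Euler-Maruyama with 2^l steps
    on level l. Level l estimates E[P_l - P_{l-1}] with the SAME Brownian
    increments on the fine and coarse paths; the telescoping sum then
    recovers E[P_L]."""
    rng = random.Random(seed)

    def euler_pair(level):
        nf = 2 ** level                  # fine steps; coarse path uses nf // 2
        dtf = t / nf
        sf, sc, dw_sum = s0, s0, 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dtf))
            sf += r * sf * dtf + sigma * sf * dw
            dw_sum += dw
            if step % 2 == 1:            # coarse path consumes two fine increments
                sc += r * sc * (2.0 * dtf) + sigma * sc * dw_sum
                dw_sum = 0.0
        return sf, sc

    estimate = 0.0
    for level in range(levels + 1):
        acc = 0.0
        for _ in range(n_per_level):
            if level == 0:               # coarsest level: one Euler step
                dw = rng.gauss(0.0, math.sqrt(t))
                acc += s0 * (1.0 + r * t + sigma * dw)
            else:                        # correction term E[P_l - P_{l-1}]
                sf, sc = euler_pair(level)
                acc += sf - sc
        estimate += acc / n_per_level
    return estimate

price = mlmc_gbm_mean()  # analytic value is s0 * exp(r * t), about 1.0513
```

Because the coupled fine/coarse differences have small variance, the correction levels need far fewer samples than a direct fine-grid simulation would; that variance decay across levels is the source of the 1/epsilon speed-up described above.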
Monte Carlo techniques for analyzing deep penetration problems
International Nuclear Information System (INIS)
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs
Monte Carlo variance reduction approaches for non-Boltzmann tallies
International Nuclear Information System (INIS)
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed
Proton therapy Monte Carlo SRNA-VOX code
Ilić Radovan D.
2012-01-01
The most powerful feature of the Monte Carlo method is the possibility of simulating all individual particle interactions in three dimensions and performing numerical experiments with a preset error. These facts were the motivation behind the development of a general-purpose Monte Carlo SRNA program for proton transport simulation in technical systems described by standard geometrical forms (plane, sphere, cone, cylinder, cube). Some of the possible applications of the SRNA program are:...
Measuring the reliability of MCMC inference with bidirectional Monte Carlo
Grosse, Roger B.; Ancha, Siddharth; Roy, Daniel M.
2016-01-01
Markov chain Monte Carlo (MCMC) is one of the main workhorses of probabilistic inference, but it is notoriously hard to measure the quality of approximate posterior samples. This challenge is particularly salient in black box inference methods, which can hide details and obscure inference failures. In this work, we extend the recently introduced bidirectional Monte Carlo technique to evaluate MCMC-based posterior inference algorithms. By running annealed importance sampling (AIS) chains both ...
HISTORY AND TERRITORY HEURISTICS FOR MONTE CARLO GO
BRUNO BOUZY
2006-01-01
Recently, the Monte Carlo approach has been applied to computer go with promising success. INDIGO uses such an approach which can be enhanced with specific heuristics. This paper assesses two heuristics within the 19 × 19 Monte Carlo go framework of INDIGO: the territory heuristic and the history heuristic, both in their internal and external versions. The external territory heuristic is more effective, leading to a 40-point improvement on 19 × 19 boards. The external history heuristic brings...
Identification of Logical Errors through Monte-Carlo Simulation
Emmett, Hilary L
2010-01-01
The primary focus of Monte Carlo simulation is to identify and quantify risk related to uncertainty and variability in spreadsheet model inputs. The stress of Monte Carlo simulation often reveals logical errors in the underlying spreadsheet model that might be overlooked during day-to-day use or traditional "what-if" testing. This secondary benefit of simulation requires a trained eye to recognize warning signs of poor model construction.
Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling
Euget, Thomas
2012-01-01
This paper presents a new efficient way to reduce the variance of estimators of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply importance sampling with the multilevel Monte Carlo method recently introduced by M.B. Giles. So far, importance sampling had proved successful in combination with the standard Monte Carlo method. We show the efficiency of our approach on the estimation of financial derivative prices and then on the estimation of Greeks (i.e. sensitivitie...
The computation of Greeks with multilevel Monte Carlo
Burgos, Sylvestre Jean-Baptiste Louis; Michael B. Giles
2014-01-01
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carl...
On the inner workings of Monte Carlo codes
Dubbeldam, D.; Torres Knoop, A.; Walton, K.S.
2013-01-01
We review state-of-the-art Monte Carlo (MC) techniques for computing fluid coexistence properties (Gibbs simulations) and adsorption simulations in nanoporous materials such as zeolites and metal-organic frameworks. Conventional MC is discussed and compared to advanced techniques such as reactive MC, configurational-bias Monte Carlo and continuous fractional MC. The latter technique overcomes the problem of low insertion probabilities in open systems. Other modern methods are (hyper-)parallel...
Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN
Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759
2010-01-01
The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve that linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
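Solving a sparse linear system by Monte Carlo is classically done with von Neumann-Ulam random walks; the abstract does not give the authors' algorithm, so the following is a generic sketch for a tiny fixed-point system x = b + Hx. The uniform transition probabilities and truncation length are illustrative choices.

```python
import random

def mc_solve(H, b, i, walks=20000, seed=0):
    """Estimate component i of the solution of x = b + H x, i.e. of
    (I - H) x = b, by von Neumann-Ulam random walks.

    Each walk jumps j -> k with uniform probability p = 1/n and carries
    a weight multiplied by H[j][k] / p, so the accumulated scores sum the
    Neumann series b + Hb + H^2 b + ...  Convergence requires the spectral
    radius of |H| to be below 1, as holds after a Jacobi-style splitting
    of a diagonally dominant system such as a Crank-Nicolson matrix.
    """
    rng = random.Random(seed)
    n = len(b)
    p = 1.0 / n
    total = 0.0
    for _ in range(walks):
        j, w, score = i, 1.0, b[i]
        for _ in range(50):            # truncate the Neumann series
            k = rng.randrange(n)
            w *= H[j][k] / p
            j = k
            score += w * b[j]
        total += score
    return total / walks

# Tiny illustrative system with a known solution (I - H)^(-1) b.
H = [[0.1, 0.2], [0.2, 0.1]]
b = [1.0, 2.0]
x0 = mc_solve(H, b, 0)   # exact first component: 1.3 / 0.77
```

Each component is estimated independently, which is why such schemes pair naturally with very large sparse systems where only a few solution components are needed.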
Monte Carlo Simulation of Optical Properties of Wake Bubbles
Institute of Scientific and Technical Information of China (English)
CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei
2007-01-01
Based on Mie scattering theory and the theory of multiple light scattering, the light scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is noticeably enhanced by the presence of bubbles, especially as the bubble density increases, and that the Monte Carlo method is a feasible way to study the properties of light scattering by air bubbles.
Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
A Particle Population Control Method for Dynamic Monte Carlo
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, the particle population comb, is shown to be a special case of this general method. The general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
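The particle population comb mentioned above can be sketched generically: evenly spaced "teeth" are laid over the cumulative weight line, and each tooth selects one particle to survive with equal weight. This is the textbook comb, not MCATK's implementation; the data layout is illustrative.

```python
import random

def comb(particles, target, seed=0):
    """Resample weighted particles to `target` equal-weight copies while
    preserving the total weight exactly.

    `particles` is a list of (weight, payload) pairs. The comb places
    `target` teeth with spacing total/target and one shared random offset
    over the cumulative weight line; each tooth keeps the particle whose
    cumulative interval it falls in.
    """
    rng = random.Random(seed)
    total = sum(w for w, _ in particles)
    spacing = total / target
    offset = rng.random() * spacing
    out, acc, idx = [], 0.0, 0
    for t in range(target):
        tooth = offset + t * spacing
        while acc + particles[idx][0] < tooth:
            acc += particles[idx][0]
            idx += 1
        out.append((spacing, particles[idx][1]))   # equal post-comb weights
    return out

pop = [(0.1, "a"), (2.4, "b"), (0.5, "c"), (1.0, "d")]
new = comb(pop, 8)
```

Because the teeth share a single random offset, the number of copies of each particle is within one of its expected value, which keeps the variance introduced by the population control low.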
Lattice Monte Carlo simulations of polymer melts
Hsu, Hsiao-Ping
2014-12-01
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction of 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. It is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly enforced excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be described very well by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The probability distributions of the reduced end-to-end distance for chains of different stiffness collapse excellently onto a single curve that is described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor Sc(q) [the minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain lengths these deviations are no longer visible once chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
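The freely rotating chain prediction used for the internal distances has a standard closed form, <R^2> = n l^2 [(1+c)/(1-c) - (2c/n)(1-c^n)/(1-c)^2], where c is the bond-bond orientational correlation. The helper below (our naming) just evaluates it; it requires |c| < 1.

```python
def frc_end_to_end(n_bonds, bond_len_sq, cos_theta):
    """Mean square end-to-end distance of a freely rotating chain:
    <R^2> = n l^2 [(1+c)/(1-c) - (2c/n)(1-c^n)/(1-c)^2], with
    c = <cos(theta)> the correlation between successive bonds (|c| < 1)."""
    c = cos_theta
    return n_bonds * bond_len_sq * (
        (1 + c) / (1 - c)
        - (2 * c / n_bonds) * (1 - c ** n_bonds) / (1 - c) ** 2)
```

For c = 0 the expression reduces to the ideal random-walk value n l^2, and a positive c (stiffer chains) swells the chain, consistent with the melt results described above.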
Monte Carlo Volcano Seismic Moment Tensors
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. Given the logistical challenges of working on an active volcano, however, seismic networks are typically deficient in spatial and temporal coverage, which potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area, and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes spaced 40 m apart within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Measured and Monte Carlo calculated k{sub Q} factors: Accuracy and comparison
Energy Technology Data Exchange (ETDEWEB)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O. [Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada); Institute for National Measurement Standards, National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Ottawa Medical Physics Institute (OMPI), Ottawa Carleton Institute for Physics, Carleton University Campus, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6 (Canada)
2011-08-15
Purpose: The journal Medical Physics recently published two papers that determine beam quality conversion factors, k{sub Q}, for large sets of ion chambers. In the first paper [McEwen, Med. Phys. 37, 2179-2193 (2010)], k{sub Q} was determined experimentally, while the second paper [Muir and Rogers, Med. Phys. 37, 5939-5950 (2010)] provides k{sub Q} factors calculated using Monte Carlo simulations. This work investigates a variety of additional consistency checks to verify the accuracy of the k{sub Q} factors determined in each publication, together with a comparison of the two data sets. Uncertainty introduced in calculated k{sub Q} factors by possible variation of W/e with beam energy is investigated further. Methods: The validity of the experimental set of k{sub Q} factors relies on the accuracy of the NE2571 reference chamber measurements to which the k{sub Q} factors for all other ion chambers are correlated. The stability of NE2571 absorbed-dose-to-water calibration coefficients is determined, and comparison to other experimental k{sub Q} factors is analyzed. Reliability of the Monte Carlo calculated k{sub Q} factors is assessed through comparison to other publications that provide Monte Carlo calculations of k{sub Q}, as well as an analysis of the sleeve effect, the effect of cavity length, and self-consistency between graphite-walled Farmer chambers. Comparison between the two data sets is given in terms of the percent difference between the k{sub Q} factors presented in both publications. Results: Monitoring of the absorbed dose calibration coefficients for the NE2571 chambers over a period of more than 15 years exhibits consistency at a level better than 0.1%. Agreement of the NE2571 k{sub Q} factors with a quadratic fit to all other experimental data from standards labs for the same chamber is observed within 0.3%. Monte Carlo calculated k{sub Q} factors are in good agreement with most other Monte Carlo calculated k{sub Q} factors. Expected results are observed for the sleeve
Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories
Way, David Wesley
2001-10-01
Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to Monte Carlo analysis is needed that allows the problem to be worked in reverse. In this way, the largest allowable dispersions that still achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is first used to write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was further demonstrated on the Mars Surveyor Program 2001 Lander. The purpose of this example was to demonstrate that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which makes the approach feasible for realistic problems with more than just a few uncertainties. A confidence interval on
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that a damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity; this is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process for assessing the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system of three elements to parallel systems of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, by utilizing the residual strength of the fasteners subjected to various initial load distributions and then to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Energy Technology Data Exchange (ETDEWEB)
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
The use of Monte-Carlo codes for treatment planning in external-beam radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Alan, E.; Nahum, PhD. [Copenhagen University Hospital, Radiation Physics Dept. (Denmark)
2003-07-01
Monte Carlo simulation of radiation transport is a very powerful technique. There are basically no exact solutions to the Boltzmann transport equation. Even the 'straightforward' situation (in radiotherapy) of an electron beam depth-dose distribution in water proves too difficult for analytical methods without gross approximations such as ignoring energy-loss straggling, large-angle single scattering and bremsstrahlung production. Monte Carlo is essential when radiation is transported from one medium into another. As the particle (be it a neutron, photon, electron or proton) crosses the boundary, a new set of interaction cross-sections is simply read in and the simulation continues as though the new medium were infinite until the next boundary is encountered. Radiotherapy involves directing a beam of megavoltage x rays or electrons (occasionally protons) at a very complex object, the human body. Monte Carlo simulation has proved invaluable at many stages of the process of accurately determining the distribution of absorbed dose in the patient. Some of these applications are reviewed here (Rogers et al. 1990; Andreo 1991; Mackie 1990). (N.C.)
Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles
Du, Shouhong
2012-05-01
This thesis presents Monte Carlo methods for simulations of the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state of Johnson et al., and results with L-J parameters for methane agree well with experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and number of Monte Carlo cycles, error propagation, and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to experimental data. Good agreement is found over a wide range of pressures. The contribution of this thesis lies in the study of error analysis with respect to the number of Monte Carlo cycles and the number of particles.
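The NPT-ensemble machinery centers on the acceptance rule for volume moves, which weighs the energy change, the pressure-volume work, and the ideal-gas entropy term. Below is a minimal sketch of that standard rule (a textbook formula, not the thesis code; all numbers in the example are illustrative, in reduced units).

```python
import math
import random

def accept_volume_move(u_old, u_new, v_old, v_new, n, beta, pressure, rng):
    """Metropolis acceptance test for an NPT-ensemble volume move:
    accept with probability min(1, exp(-beta*(dU + P*dV) + N*ln(Vnew/Vold)))."""
    arg = (-beta * ((u_new - u_old) + pressure * (v_new - v_old))
           + n * math.log(v_new / v_old))
    return arg >= 0.0 or math.log(rng.random()) < arg

rng = random.Random(0)
# A move that lowers the energy and gains ideal-gas entropy is always accepted.
print(accept_volume_move(10.0, 5.0, 100.0, 101.0, 64, 1.0, 0.1, rng))  # True
```

Particle displacements use the ordinary Metropolis test on the energy change alone; only volume (and, in Gibbs-NPT, particle transfer) moves carry the extra ensemble-dependent terms.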
Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation
Bartel, Thomas; Stoudt, Sara; Possolo, Antonio
2016-06-01
An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
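The Monte Carlo uncertainty evaluation can be illustrated generically: perturb both the applied forces and the readings within their assumed uncertainties, refit each time, and look at the spread of the fitted slope. This is a schematic sketch, not the NIST procedure; all data values and uncertainty magnitudes below are hypothetical.

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx

def mc_slope_uncertainty(xs, ys, sx, sy, trials=2000, seed=0):
    """Monte Carlo standard uncertainty of the fitted slope when BOTH the
    applied forces (x, standard uncertainty sx) and the readings
    (y, standard uncertainty sy) carry noise, the situation that motivates
    errors-in-variables treatment of calibration data."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(trials):
        xr = [x + rng.gauss(0.0, sx) for x in xs]
        yr = [y + rng.gauss(0.0, sy) for y in ys]
        slopes.append(fit_line(xr, yr)[0])
    return statistics.stdev(slopes)

forces = [10, 20, 30, 40, 50]                # hypothetical applied forces
readings = [2.01, 3.99, 6.02, 7.98, 10.01]   # hypothetical transducer output
u_slope = mc_slope_uncertainty(forces, readings, sx=0.05, sy=0.02)
```

Setting sx to zero recovers the ordinary propagation that ignores uncertainty in the applied forces; comparing the two runs shows how much that neglected contribution matters, which is the comparison the paper carries out with real calibration data.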
Lemak, Alexander; Steren, Carlos A; Arrowsmith, Cheryl H; Llinás, Miguel
2008-05-01
ABACUS [Grishaev et al. (2005) Proteins 61:36-43] is a novel protocol for automated protein structure determination via NMR. ABACUS starts from molecular fragments defined by unassigned J-coupled spin-systems and involves a Monte Carlo stochastic search in assignment space, probabilistic sequence selection, and assembly of fragments into structures that are used to guide the stochastic search. Here, we report further development of the two main algorithms that increase the flexibility and robustness of the method. Performance of the BACUS [Grishaev and Llinás (2004) J Biomol NMR 28:1-101] algorithm was significantly improved through use of sequential connectivities available from through-bond correlated 3D-NMR experiments, and a new set of likelihood probabilities derived from a database of 56 ultra high resolution X-ray structures. A Multicanonical Monte Carlo procedure, Fragment Monte Carlo (FMC), was developed for sequence-specific assignment of spin-systems. It relies on an enhanced assignment sampling and provides the uncertainty of assignments in a quantitative manner. The efficiency of the protocol was validated on data from four proteins of between 68-116 residues, yielding 100% accuracy in sequence specific assignment of backbone and side chain resonances.
Directory of Open Access Journals (Sweden)
UDOANYA RAYMOND MANUEL
2014-04-01
This paper presents the importance of applying queuing theory to the Automated Teller Machine (ATM) using Monte Carlo simulation, in order to determine, control and manage the level of queuing congestion found at ATM centres in Nigeria. It also contains an empirical analysis of queuing data collected at ATMs located on bank premises over a period of three (3) months. Monte Carlo simulation is applied in order to review the queuing congestion and queuing discipline at ATM facilities and service centres, and to estimate the arrival time, waiting time and service time of each customer during peak and off-peak hours. An experiment was carried out, with the aid of a stopwatch and recording materials, in order to obtain the time each customer spends at the ATM service centre from arrival to departure. The model contains five servers, which are heavily congested during peak hours and idle during off-peak hours. Policy recommendations that could be used to manage and control the high level of queuing congestion at ATM centres were made using the statistical results produced by the Monte Carlo simulation software accompanying this work; such results include having no more than 15 customers within 1 hour, etc.
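A queuing experiment of this kind can be mimicked with a small Monte Carlo model. The sketch below assumes exponential inter-arrival and service times (an M/M/c idealization of the five-server ATM centre; the paper's empirical distributions may differ) and uses illustrative rates.

```python
import random

def simulate_atm(servers, arrival_rate, service_rate, n_customers, seed=0):
    """Monte Carlo model of an ATM centre as an M/M/c queue: Poisson
    arrivals, exponential service, c machines, first-come first-served.
    Returns the mean time customers spend waiting before service."""
    rng = random.Random(seed)
    free_at = [0.0] * servers            # time at which each ATM is next free
    t, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)        # next customer arrives
        earliest = min(free_at)
        start = max(t, earliest)                  # wait if all ATMs are busy
        total_wait += start - t
        free_at[free_at.index(earliest)] = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Illustrative peak-hour load: 5 ATMs, 4 arrivals/min, 1 customer/min service.
mean_wait = simulate_atm(5, 4.0, 1.0, 50000)
```

At this 80% utilization the simulated mean wait agrees with the Erlang C prediction of roughly half a minute; rerunning with off-peak rates shows the servers idling, mirroring the behaviour reported above.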
Energy Technology Data Exchange (ETDEWEB)
Hart, S. W. D. [University of Tennessee, Knoxville (UTK); Maldonado, G. Ivan [University of Tennessee, Knoxville (UTK); Celik, Cihangir [ORNL; Leal, Luiz C [ORNL
2014-01-01
For many Monte Carlo codes, cross sections are generally created only at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in KENO, the Scale Monte Carlo module, in creating problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite-difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach, the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross-section loading is negligible. Results can be compared with those obtained using multigroup libraries, as KENO currently interpolates the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
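The probability-table rule — linear-logarithmic interpolation in the square root of temperature — can be sketched as follows. This is our reading of the description (the logarithm of the tabulated value varying linearly in sqrt(T)), not KENO source code, and the table values are hypothetical.

```python
import math

def interp_log_sqrt_t(t, t0, t1, v0, v1):
    """Interpolate between values v0 at temperature t0 and v1 at t1
    (kelvin), taking log(v) to vary linearly in sqrt(T)."""
    f = (math.sqrt(t) - math.sqrt(t0)) / (math.sqrt(t1) - math.sqrt(t0))
    return v0 * (v1 / v0) ** f   # exp(log v0 + f * (log v1 - log v0))

# Hypothetical probability-table values at 293.6 K and 600 K, queried at 450 K.
val = interp_log_sqrt_t(450.0, 293.6, 600.0, 2.0, 3.0)
```

At the tabulated endpoints the rule returns the tabulated values exactly, and in between it stays within the bracketing values, which is the behaviour wanted for temperatures away from the predetermined library set.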
Djibrilla Saley, A.; Jardani, A.; Soueid Ahmed, A.; Raphael, A.; Dupont, J. P.
2016-11-01
Estimating spatial distributions of the hydraulic conductivity in heterogeneous aquifers has always been an important and challenging task in hydrology. Generally, the hydraulic conductivity field is determined from hydraulic head or pressure measurements. In the present study, we propose to use temperature data as a source of information for characterizing the spatial distribution of the hydraulic conductivity field. To this end, we performed a laboratory sandbox experiment with the aim of imaging the heterogeneities of the hydraulic conductivity field from thermal monitoring. During the laboratory experiment, we injected a hot water pulse, which induces a heat plume motion in the sandbox. The induced plume was followed by a set of thermocouples placed in the sandbox. After the temperature data acquisition, we performed a hydraulic tomography using the stochastic Hybrid Monte Carlo approach, also called the Hamiltonian Monte Carlo (HMC) algorithm, to invert the temperature data. This algorithm is based on a combination of the Metropolis Monte Carlo method and the Hamiltonian dynamics approach. The parameterization of the inverse problem was done with the Karhunen-Loève (KL) expansion to reduce the dimensionality of the unknown parameters. Our approach provided a successful reconstruction of the hydraulic conductivity field with low computational effort.
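The Hamiltonian Monte Carlo step combines leapfrog integration of the Hamiltonian dynamics with a Metropolis test on the total-energy change. Here is a one-dimensional sketch on a standard normal target; step size and trajectory length are illustrative, and the KL parameterization used in the real application is omitted.

```python
import math
import random

def hmc_sample(grad_logp, logp, x0, steps, eps=0.1, n_leap=20, seed=0):
    """One-dimensional Hamiltonian Monte Carlo: resample momentum, run a
    leapfrog trajectory, then Metropolis-accept on the energy change."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        p = rng.gauss(0.0, 1.0)                   # fresh momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_logp(x_new)     # initial half step
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_logp(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new)     # final half step
        # H = -logp(x) + p^2/2; accept with probability min(1, exp(-dH)).
        dh = (logp(x_new) - 0.5 * p_new * p_new) - (logp(x) - 0.5 * p * p)
        if math.log(rng.random()) < dh:
            x = x_new
        samples.append(x)
    return samples

# Target: standard normal, logp = -x^2/2 with gradient -x.
chain = hmc_sample(lambda x: -x, lambda x: -0.5 * x * x, 0.0, 3000)
m = sum(chain) / len(chain)
```

Because the leapfrog trajectory moves far before the accept/reject step, successive samples decorrelate much faster than in a random-walk scheme, which is what makes HMC attractive for the higher-dimensional KL-parameterized inversions described above.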
The use of Monte-Carlo codes for treatment planning in external-beam radiotherapy
International Nuclear Information System (INIS)
Monte Carlo simulation of radiation transport is a very powerful technique. There are basically no exact solutions to the Boltzmann transport equation. Even the 'straightforward' situation (in radiotherapy) of an electron-beam depth-dose distribution in water proves too difficult for analytical methods without making gross approximations such as ignoring energy-loss straggling, large-angle single scattering and bremsstrahlung production. Monte Carlo is essential when radiation is transported from one medium into another. As the particle (be it a neutron, photon, electron or proton) crosses the boundary, a new set of interaction cross-sections is simply read in and the simulation continues as though the new medium were infinite until the next boundary is encountered. Radiotherapy involves directing a beam of megavoltage x rays or electrons (occasionally protons) at a very complex object, the human body. Monte Carlo simulation has proved invaluable at many stages of the process of accurately determining the distribution of absorbed dose in the patient. Some of these applications will be reviewed here (Rogers et al 1990; Andreo 1991; Mackie 1990). (N.C.)
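The boundary-crossing mechanism described above — read in the new medium's cross sections and continue — can be shown in a deliberately minimal sketch: absorption-only transport of a pencil beam through two slabs. All names and the one-interaction model are illustrative assumptions; real radiotherapy codes track scattering, energy loss and secondaries.

```python
import math
import random

def transmit_two_slabs(sigma1, sigma2, L1, L2, n=100000):
    """Monte Carlo estimate of the fraction of particles crossing two
    absorbing slabs (no scattering). When a flight reaches the interface,
    a fresh path is sampled with the new medium's cross section -- the
    'read in a new set of cross-sections' step the review describes."""
    passed = 0
    for _ in range(n):
        if -math.log(1.0 - random.random()) / sigma1 < L1:
            continue                    # absorbed inside slab 1
        # memorylessness lets us restart the flight at the interface
        if -math.log(1.0 - random.random()) / sigma2 >= L2:
            passed += 1                 # crossed slab 2 as well
    return passed / n
```

The estimate should converge to the analytic attenuation exp(-sigma1*L1 - sigma2*L2), which is a convenient sanity check for the boundary handling.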
Applying Monte Carlo Simulation to Biomedical Literature to Approximate Genetic Network.
Al-Dalky, Rami; Taha, Kamal; Al Homouz, Dirar; Qasaimeh, Murad
2016-01-01
Biologists often need to know the set of genes associated with a given set of genes or a given disease. We propose in this paper a classifier system called Monte Carlo for Genetic Network (MCforGN) that can construct genetic networks, identify functionally related genes, and predict gene-disease associations. MCforGN identifies functionally related genes based on their co-occurrences in the abstracts of biomedical literature. For a given gene g, the system first extracts the set of genes found within the abstracts of biomedical literature associated with g. It then ranks these genes to determine the ones with high co-occurrences with g. It overcomes the limitations of current approaches that employ analytical deterministic algorithms by applying Monte Carlo Simulation to approximate genetic networks. It does so by conducting repeated random sampling to obtain numerical results and to optimize these results. Moreover, it analyzes results to obtain the probabilities of different genes' co-occurrences using a series of statistical tests. MCforGN can detect gene-disease associations by employing a combination of centrality measures (to identify the central genes in disease-specific genetic networks) and Monte Carlo Simulation. MCforGN aims at enhancing state-of-the-art biological text mining by applying novel extraction techniques. We evaluated MCforGN by comparing it experimentally with nine approaches. Results showed marked improvement. PMID:26415184
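The core idea — estimate co-occurrence probabilities with gene g by repeated random sampling of abstracts, then rank — can be sketched as follows. This is a hedged illustration of the general technique only, not the MCforGN pipeline; the gene names and scoring are assumptions.

```python
import random
from collections import Counter

def mc_cooccurrence(abstracts, g, n_samples=3000):
    """Rank genes by their estimated probability of co-occurring with
    gene g, using repeated random sampling of the abstracts (modelled
    here simply as sets of gene symbols) that mention g."""
    with_g = [a for a in abstracts if g in a]
    counts = Counter()
    for _ in range(n_samples):
        for gene in random.choice(with_g):   # sample one abstract with g
            if gene != g:
                counts[gene] += 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return [(gene, c / n_samples) for gene, c in ranked]
```

As the sample count grows, the scores converge to the fraction of g-mentioning abstracts in which each candidate gene appears, which is the quantity a deterministic pass would compute exactly; the Monte Carlo version scales to corpora too large to scan exhaustively.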
Ramamoorthy, Karthikeyan
The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and the energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of the geometry, keeping errors in volumes and surfaces to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in the various regions of the lattice; and depletion of the fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. The behaviour of the system is determined by composition of resonant
Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
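The multilevel Monte Carlo estimator at the heart of this framework is the telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with many cheap coarse samples and few expensive fine ones. A minimal sketch, under the assumption that a user-supplied `sample_pair(l)` returns one correlated (fine, coarse) pair — this is not the GMsFEM-MLMC implementation:

```python
import random

def mlmc_estimate(sample_pair, n_samples):
    """Multilevel Monte Carlo estimator of E[P_L] via the telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}]. sample_pair(l) must return one
    (P_l, P_{l-1}) pair computed from the SAME random input (with the
    coarse value taken as 0 on level 0); n_samples[l] is the sample
    count allocated to level l."""
    est = 0.0
    for l, n in enumerate(n_samples):
        diffs = (f - c for f, c in (sample_pair(l) for _ in range(n)))
        est += sum(diffs) / n           # Monte Carlo mean of the correction
    return est
```

The variance reduction comes entirely from the correlation within each pair: when P_l and P_{l−1} are computed from the same random input, their difference has small variance, so few fine-grid samples are needed.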
A sequential Monte Carlo model of the combined GB gas and electricity network
International Nuclear Information System (INIS)
A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network; these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets.
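The two reliability indices named in the abstract can be sketched with a deliberately simplified sequential simulation: loss-of-load probability (fraction of hours with a supply shortfall) and expected energy unserved (per simulated period). The one-generator outage model, units and names are illustrative assumptions; the actual tool co-optimises the full gas and electricity network.

```python
import random

def reliability_indices(capacity, hourly_demand, outage_prob, n_years=3000):
    """Sequential Monte Carlo estimate of loss-of-load probability and
    expected energy unserved for a single generating unit with a given
    hourly forced-outage probability, against a fixed demand profile."""
    loss_hours, unserved = 0, 0.0
    total_hours = n_years * len(hourly_demand)
    for _ in range(n_years):
        for d in hourly_demand:
            # available capacity: full, or zero if the unit is on outage
            avail = 0.0 if random.random() < outage_prob else capacity
            if avail < d:
                loss_hours += 1
                unserved += d - avail
    return loss_hours / total_hours, unserved / n_years
```

Sequential (chronological) sampling like this, as opposed to state sampling, is what lets the real model capture time-coupled assets such as gas storage.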