Monte Carlo validation of self shielding and void effect calculations
International Nuclear Information System (INIS)
Tellier, H.; Coste, M.; Raepsaet, C.; Soldevila, M.; Van der Gucht, C.
1995-01-01
The self-shielding validation and the void effect are studied with the Monte Carlo method. The satisfactory agreement between the APOLLO 2 self-shielding results and the TRIPOLI and MCNP results gives us confidence in the multigroup transport code. (K.A.)
Characteristic Determination Of Self Shielding Factor And Cadmium Ratio Of Cylindrical Probe
International Nuclear Information System (INIS)
Hamzah, Amir; Budi R, Ita; Pinem, Suriam
1996-01-01
Determination of the thermal, epithermal and total self-shielding factors and the cadmium ratio of cylindrical probes has been carried out by both measurement and calculation. The self-shielding factor can be determined by dividing the probe activity by the activity of an Al-alloy probe. In the absence of cylindrical probes made of Al-alloy, the self-shielding factor can instead be determined by parabolic extrapolation of the measured activities to zero radius and dividing the measured activities by that extrapolated value. Theoretically, the self-shielding factor can be determined by numerical solution of two-dimensional integral equations using the Romberg method. For simplicity, the calculation is based on single-collision theory under the assumptions of monoenergetic neutrons and an isotropic distribution. For a gold cylindrical probe, the calculated results are quite close to the measured ones, with relative discrepancies for the activities, cadmium ratio and self-shielding factor of the bare probe of less than 11.5%, 3.5% and 1.5%, respectively. The program can be used for the calculation of other kinds of cylindrical probes. Owing to the dependence on radius, a cylindrical probe made of copper has the best self-shielding factor and cadmium ratio characteristics
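The parabolic zero-radius extrapolation described in this abstract can be sketched numerically. The radii and activities below are invented for illustration (they are not the paper's data), and the fit is an ordinary least-squares quadratic:

```python
import numpy as np

# Invented activities for gold probes of increasing radius (arbitrary units).
radii = np.array([0.05, 0.10, 0.15, 0.20])      # probe radius, cm
activity = np.array([98.2, 96.1, 93.4, 90.1])   # measured specific activity

# Parabolic fit A(r) = c2*r^2 + c1*r + c0; the intercept c0 estimates the
# unperturbed (zero-radius, shielding-free) activity A0.
c2, c1, c0 = np.polyfit(radii, activity, deg=2)
a0 = c0

# Self-shielding factor of each probe: measured / unperturbed activity.
g = activity / a0
print(a0, g)
```

Dividing each measured activity by the extrapolated A0 plays the role of the missing Al-alloy reference probe.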
An ''exact'' treatment of self-shielding and covers in neutron spectra determinations
International Nuclear Information System (INIS)
Griffin, P.J.; Kelly, J.G.
1995-01-01
Most neutron spectrum determination methodologies ignore self-shielding effects in dosimetry foils and treat covers with an exponential attenuation model. This work provides a quantitative analysis of the approximations in this approach. It also provides a methodology for improving the fidelity of the treatment of the dosimetry sensor response to a level consistent with the user's spectrum characterization approach. A library of correction functions for the energy-dependent sensor response has been compiled that addresses dosimetry foils/configurations in use at the Sandia National Laboratories Radiation Metrology Laboratory
Determination of self shielding factors and gamma attenuation effects for tree ring samples
International Nuclear Information System (INIS)
Dagistan Sahin; Kenan Uenlue
2012-01-01
Determination of tree ring chemistry using Neutron Activation Analysis (NAA) is part of ongoing research between Penn State University (PSU) and Cornell University's Malcolm and Carolyn Wiener Laboratory for Aegean and Near Eastern Dendrochronology. Tree-ring chemistry yields valuable data on environmental event signatures. These signatures are a complex function of elemental concentration. To be certain about the concentration of signature elements, it is necessary to perform the measurements and corrections with the lowest error and maximum accuracy possible. Accurate and precise values of the energy-dependent neutron flux in the dry irradiation tubes and of the detector efficiency for tree ring samples were calculated for the Penn State Breazeale Reactor (PSBR). For the calculation of the energy-dependent, self-shielding-corrected neutron flux, a detailed model of the TRIGA Mark III reactor at PSU with updated fuel compositions was prepared using the MCNP Utility for Reactor Evolution (MURE) libraries. The dry irradiation tube, sample holder and sample were also included in the model. The thermal flux self-shielding correction factors due to the sample holder and sample were calculated and verified against previously published values. The Geant4 model of the gamma spectroscopy system, developed at the Radiation Science and Engineering Center (RSEC), was improved and the absolute detector efficiency for tree-ring samples was calculated. (author)
DiJulio, D. D.; Cooper-Jensen, C. P.; Llamas-Jansa, I.; Kazi, S.; Bentley, P. M.
2018-06-01
A combined measurement and Monte-Carlo simulation study was carried out in order to characterize the particle self-shielding effect of B4C grains in neutron shielding concrete. Several batches of a specialized neutron shielding concrete, with varying B4C grain sizes, were exposed to a 2 Å neutron beam at the R2D2 test beamline at the Institute for Energy Technology located in Kjeller, Norway. The direct and scattered neutrons were detected with a neutron detector placed behind the concrete blocks and the results were compared to Geant4 simulations. The particle self-shielding effect was included in the Geant4 simulations by calculating effective neutron cross-sections during the Monte-Carlo simulation process. It is shown that this method reproduces the measured results well. Our results show that shielding calculations for low-energy neutrons using such materials would lead to an underestimate of the shielding required for a given design scenario if the particle self-shielding effect is not included in the calculations.
International Nuclear Information System (INIS)
Kaul, D.C.
1982-01-01
Throughout the last two decades many efforts have been made to estimate the effect of body self-shielding on organ doses from externally incident neutrons and gamma rays. These began with the use of simple-geometry phantoms and have culminated in the use of detailed anthropomorphic phantoms. In a recent effort, adjoint Monte Carlo analysis techniques have been used to determine the dose and dose equivalent to the active marrow as a function of the energy and angle of the neutron fluence externally incident on an anthropomorphic phantom. When combined with fluences from actual nuclear devices, these dose-to-fluence factors result in marrow dose values that demonstrate great sensitivity to variations in device type, range, and body orientation. Under a state-of-the-art radiation transport analysis demonstration program for the Japanese cities, sponsored by the Defense Nuclear Agency at the request of the National Council on Radiation Protection and Measurements, the marrow dose study referred to above is being repeated to obtain spectral distributions within the marrow for externally incident neutrons and gamma rays of arbitrary energy and angle. This is intended to allow radiobiologists and epidemiologists to select and modify figures of merit for correlation with health effects and to permit a greater understanding of the relationship between human and laboratory-subject dosimetry
International Nuclear Information System (INIS)
Tzika, F.; Stamatelatos, I.E.
2004-01-01
Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled three-dimensional modeling of the actual source and geometry configuration, including the reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. The simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied to determine the neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
International Nuclear Information System (INIS)
Pelloni, S.; Cheng, E.T.
1985-02-01
The Swiss LOTUS fusion-fission hybrid test facility was used to investigate the influence of the self-shielding of resonance cross sections on the tritium breeding and on the thorium ratios. Nucleonic analyses were performed using the discrete-ordinates transport codes ANISN and ONEDANT, the surface-flux code SURCU, and version 3 of the MCNP code for the Li2CO3 and Li2O blanket designs with lead, thorium and beryllium multipliers. Except for the MCNP calculation, which is based on the ENDF/B-V files, all nuclear data were generated from the ENDF/B-IV basic library. For the deterministic methods three NJOY group libraries were considered. The first, a 39-neutron-group self-shielded library, was generated at EIR. The second is based on the same group structure as the first and consists of infinitely dilute cross sections. Finally, the third library was processed at LANL and consists of coupled 30+12 neutron and gamma groups; these cross sections are not self-shielded. The Monte Carlo analysis is based on a continuous-energy library and on a discrete 262-group library from the ENDF/B-V evaluation. It is shown that the results agree well within 3% between the unshielded libraries and between the different transport codes and theories. The self-shielding of resonance cross sections results in a decrease of the thorium capture rate and an increase in tritium breeding of about 6%. The remaining computed ratios are not affected by the self-shielding of cross sections. (Auth.)
A new formulation for resonance self-shielding factors
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2007-07-01
The activation technique allows very precise neutron intensity measurements, either absolute or relative. This technique requires knowledge of the Doppler broadening function to determine resonance self-shielding factors. In the present work a new formulation is proposed for the self-shielding factors, in which the Doppler broadening function is calculated using the Frobenius method and compared to the values obtained from the four-pole Padé method. This calculation method is shown to be effective from the point of view of accuracy. (author)
A new formulation for resonance self-shielding factors
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da
2007-01-01
The activation technique allows very precise neutron intensity measurements, either absolute or relative. This technique requires knowledge of the Doppler broadening function to determine resonance self-shielding factors. In the present work a new formulation is proposed for the self-shielding factors, in which the Doppler broadening function is calculated using the Frobenius method and compared to the values obtained from the four-pole Padé method. This calculation method is shown to be effective from the point of view of accuracy. (author)
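For context, the Doppler broadening function ψ(ξ, x) that both formulations approximate has a standard integral definition, and a brute-force quadrature of it makes a useful numerical cross-check. The sketch below is only that check, not the Frobenius or four-pole Padé formulation of the paper:

```python
import numpy as np

def psi(xi, x):
    # Doppler broadening function
    #   psi(xi, x) = (xi / (2 sqrt(pi))) * Int exp(-xi^2 (x - y)^2 / 4) / (1 + y^2) dy,
    # evaluated after the substitution u = xi * (x - y), which makes the
    # Gaussian factor a fixed-width function of u.
    u = np.linspace(-40.0, 40.0, 20001)
    du = u[1] - u[0]
    vals = np.exp(-0.25 * u * u) / (1.0 + (x - u / xi) ** 2)
    return vals.sum() * du / (2.0 * np.sqrt(np.pi))

# In the zero-temperature limit (xi -> infinity), psi tends to the natural
# Lorentzian line shape 1/(1 + x^2); Doppler broadening lowers the peak.
print(psi(1000.0, 0.0), psi(1.0, 0.0))
```

A series or rational-function formulation such as the paper's is preferred in production codes because this direct quadrature is far too slow inside a resonance integral.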
Uranium self-shielding in fast reactor blankets
Energy Technology Data Exchange (ETDEWEB)
Kadiroglu, O.K.; Driscoll, M.J.
1976-03-01
The effects of heterogeneity on resonance self-shielding are examined with particular emphasis on the blanket region of the fast breeder reactor and on its dominant reaction: capture in 238U. The results, however, apply equally well to scattering resonances, to other isotopes (fertile, fissile and structural species) and to other environments, so long as the underlying assumptions of narrow resonance theory apply. The heterogeneous resonance integral is first cast into a modified homogeneous form involving the ratio of coolant-to-fuel fluxes. A generalized correlation (useful in its own right in many other applications) is developed for this ratio, using both integral transport and collision probability theory to infer the form of the correlation, and then relying upon Monte Carlo calculations to establish absolute values of the correlation coefficients. It is shown that a simple linear prescription can be developed for the flux ratio as a function of only the fuel optical thickness and the fraction of the slowing-down source generated by the coolant. This in turn permits derivation of a new equivalence theorem relating the heterogeneous self-shielding factor to the homogeneous self-shielding factor at a modified value of the background scattering cross section per absorber nucleus. A simple version of this relation is developed and used to show that heterogeneity has a negligible effect on the calculated blanket breeding ratio in fast reactors.
Self-shielding models of MICROX-2 code: Review and updates
International Nuclear Information System (INIS)
Hou, J.; Choi, H.; Ivanov, K.N.
2014-01-01
Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improved self-shielding models were assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study
International Nuclear Information System (INIS)
Nasrabadi, M.N.; Mohammadi, A.; Jalali, M.
2009-01-01
In this paper, bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, the gamma self-shielding coefficient is required. The gamma self-shielding coefficient of unknown samples was estimated both experimentally and by MCNP code calculation. The proposed methodology can be used to determine the elemental concentration of unknown aqueous samples by BSPGNAA wherever knowledge of the gamma self-shielding within the sample volume is required.
Validation of calculated self-shielding factors for Rh foils
Jaćimović, R.; Trkov, A.; Žerovnik, G.; Snoj, L.; Schillebeeckx, P.
2010-10-01
Rhodium foils of about 5 mm diameter were obtained from IRMM. One foil had a thickness of 0.006 mm and three were 0.112 mm thick. They were irradiated in the pneumatic transfer system and in the carousel facility of the TRIGA reactor at the Jožef Stefan Institute. The foils were irradiated bare and enclosed in small cadmium boxes (about 2 g in weight) of 1 mm thickness to minimise the perturbation of the local neutron flux. They were co-irradiated with 5 mm diameter, 0.2 mm thick Al-Au (0.1%) alloy monitor foils. The resonance self-shielding corrections for the 0.006 and 0.112 mm thick samples were calculated by Monte Carlo simulation and amount to about 10% and 60%, respectively. The consistency of the measurements confirmed the validity of the self-shielding factors. Trial estimates of the Q0 and k0 factors for the 555.8 keV gamma line of 104Rh were made and amount to 6.65±0.18 and (6.61±0.12)×10^-2, respectively.
Self-shielding factors for TLD-600 and TLD-100 in an isotropic flux of thermal neutrons
International Nuclear Information System (INIS)
Horowitz, Y.S.; Dubi, A.; Ben Shahar, B.
1976-01-01
The applications of lithium fluoride thermoluminescent dosemeters in mixed n-γ environments and the dependence of LiF-TL on linear energy transfer are both topics of current interest. Monte Carlo calculations have therefore been carried out to determine the thermal neutron absorption probability (and consequently the self-shielding factor) for an isotropic flux of neutrons impinging on different-sized cylindrical samples of LiF TLD-100 and TLD-600. The calculations were performed for cylinders of radius up to 10 cm and heights of 0.1 to 1.5 cm. The Monte Carlo results were found to be significantly different from the analytic calculations for infinitely long cylinders but, as expected, converged to the same value for (r/h) << 1. (U.K.)
MPACT Subgroup Self-Shielding Efficiency Improvements
International Nuclear Information System (INIS)
Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.; Clarno, Kevin T.
2016-01-01
Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup in the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which is then used as the basis for further extensions. The next improvement covered is what is currently being termed ''Lumped Parameter MOC''. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is a possible reduction in the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.
Resonance self-shielding calculation with regularized random ladders
Energy Technology Data Exchange (ETDEWEB)
Ribon, P.
1986-01-01
The straightforward method for the calculation of resonance self-shielding is to generate one or several resonance ladders and to process them as resolved resonances. The main drawback of the Monte Carlo methods used to generate the ladders is the difficulty of reducing the dispersion of the data and results. Several methods are examined, and it is shown how one of them (a regularized sampling method) improves the accuracy. Analytical methods to compute the effective cross-section have recently appeared: they are basically free of dispersion, but are inevitably approximate. The accuracy of the most sophisticated of these is checked. There is a neutron energy range which is improperly considered as statistical. An examination is presented of what happens when it is treated as statistical, and of how it is possible to improve the accuracy of calculations in this range. To illustrate the results, calculations have been performed for a simple case: the nucleus 238U, at 300 K, between 4250 and 4750 eV.
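Resonance ladders of the kind discussed here are conventionally sampled from Wigner level-spacing and Porter-Thomas width statistics. A minimal, non-regularized sampler might look like the following; the mean spacing and mean width are invented round numbers, not evaluated 238U parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def wigner_spacings(n, d_mean, rng):
    # Wigner surmise p(s) = (pi s / 2) exp(-pi s^2 / 4) (unit mean spacing),
    # sampled by inverting the CDF F(s) = 1 - exp(-pi s^2 / 4).
    u = rng.random(n)
    return d_mean * np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

def porter_thomas_widths(n, gamma_mean, rng):
    # Porter-Thomas: neutron widths distributed as chi-squared with
    # one degree of freedom about the mean width.
    return gamma_mean * rng.chisquare(1, n)

# One random ladder of 50 resonances above 4250 eV.
energies = 4250.0 + np.cumsum(wigner_spacings(50, 10.0, rng))
widths = porter_thomas_widths(50, 0.023, rng)
```

The dispersion the paper refers to is the ladder-to-ladder spread of the effective cross-section computed from such samples; the regularized sampling method is designed to reduce it.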
The resonance self-shielding calculation with regularized random ladders
International Nuclear Information System (INIS)
Ribon, P.
1986-01-01
The straightforward method for the calculation of resonance self-shielding is to generate one or several resonance ladders and to process them as resolved resonances. The main drawback of the Monte Carlo methods used to generate the ladders is the difficulty of reducing the dispersion of the data and results. Several methods are examined, and it is shown how one of them (a regularized sampling method) improves the accuracy. Analytical methods to compute the effective cross-section have recently appeared: they are basically free of dispersion, but are inevitably approximate. The accuracy of the most sophisticated of these is checked. There is a neutron energy range which is improperly considered as statistical. An examination is presented of what happens when it is treated as statistical, and of how it is possible to improve the accuracy of calculations in this range. To illustrate the results, calculations have been performed for a simple case: the nucleus 238U, at 300 K, between 4250 and 4750 eV. (author)
Monte Carlo determination of heteroepitaxial misfit structures
DEFF Research Database (Denmark)
Baker, J.; Lindgård, Per-Anker
1996-01-01
We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...
Self-Shielding Treatment to Perform Cell Calculation for Seed Fuel in Th/U PWR Using DRAGON Code
Directory of Open Access Journals (Sweden)
Ahmed Amin El Said Abd El Hameed
2015-08-01
Computational time and the precision of the results are the most important factors in any code used for nuclear calculations. Despite the high accuracy of Monte Carlo codes such as MCNP and Serpent, in many cases their relatively long computational time makes it difficult to use either of them as the main calculation code; usually, Monte Carlo codes are used only to benchmark the results. The deterministic codes usually used in nuclear reactor calculations have limited precision, due to the approximations in the methods used to solve the multi-group transport equation. Self-shielding treatment, the algorithm that produces average cross-sections defined over the complete energy domain of the neutrons in a nuclear reactor, is responsible for the biggest error in any deterministic code. Two resonance self-shielding models are commonly applied: models based on equivalence and dilution, and models based on the subgroup approach. The fundamental problem with any self-shielding method is that it treats each isotope as if no other resonant isotopes were present in the reactor. The most practical way to mitigate this problem is to use many energy groups (50-200) chosen in a way that captures all major resonances without self-shielding. In this paper, we perform cell calculations for a fresh seed fuel pin used in thorium/uranium reactors by solving the 172-group transport equation with the deterministic DRAGON code, for the two types of self-shielding models (equivalence-and-dilution models and subgroup models), using the WIMS-D5 and DRAGON data libraries. The results are then tested by comparing them with the stochastic MCNP5 code. We also tested the sensitivity of the results to specific changes in the self-shielding method implemented, for example the effect of applying the Livolant-Jeanpierre normalization scheme and Riemann integration improvement on the equivalence-and-dilution method, and the effect of using Ribon
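For readers unfamiliar with the equivalence-and-dilution family of models mentioned above, the self-shielding factor f is typically tabulated against a background ("dilution") cross section sigma_0 and interpolated at the value supplied by the equivalence relation. A toy, WIMS-library-style table (all values invented):

```python
import numpy as np

# Toy dilution table for one resonance group: self-shielding factor f
# versus background cross section sigma_0 (barns). All values invented.
sigma0_grid = np.array([1.0e1, 1.0e2, 1.0e3, 1.0e4, 1.0e10])
f_grid = np.array([0.35, 0.52, 0.78, 0.94, 1.00])

def f_factor(sigma0):
    # Interpolate linearly in log(sigma_0), a usual convention for
    # dilution-table lookups; f -> 1 at infinite dilution.
    return np.interp(np.log(sigma0), np.log(sigma0_grid), f_grid)

# Shielded group cross section = f * infinitely dilute value (45 b, made up).
sigma_eff = f_factor(300.0) * 45.0
print(f_factor(300.0), sigma_eff)
```

Subgroup models replace this single-parameter table with probability tables that resolve the within-group cross-section distribution.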
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.
2005-01-01
In this paper, we present the use of the method of characteristics (MOC) with advanced self-shielding models for a fundamental lattice calculation on an ACR-type cell, i.e. a cluster geometry with light water coolant and heavy water moderator. Comparisons with the collision probability method (CP) show the consistency of the method of characteristics as implemented in both the flux and self-shielding calculations. Acceleration techniques are tested in the different calculations and prove to be efficient. Comparisons with the Monte Carlo code Tripoli4 show the advantage of a subgroup approach for self-shielding calculations: the difference in k-eff is less than one standard deviation of the Tripoli4 calculation and, in terms of total absorption rates in the resolved resonance group, the maximum relative error is of the order of 3%, localised in the outermost region of the central pin. (author)
Self-Shielding Of Transmission Lines
Energy Technology Data Exchange (ETDEWEB)
Christodoulou, Christos [Univ. of New Mexico, Albuquerque, NM (United States)
2017-03-01
The use of shielding to contend with noise or harmful EMI/EMR energy is not a new concept. An inevitable trade that must be made for shielding is physical space and weight. Space was often not as painful a design trade in older, larger systems as it is in today's smaller systems. Today we are packing an ever-growing amount of functionality into the same or smaller volumes. As systems become smaller and space within systems becomes more restricted, the implementation of shielding becomes more problematic. Often, space that was used to design a more mechanically robust component must be used for shielding. As the system gets smaller and space is at more of a premium, the trades start to result in defects, designs with inadequate margin in other performance areas, and designs that are sensitive to manufacturing variability. With these challenges in mind, it would be ideal to maximize attenuation of harmful fields as they inevitably couple onto transmission lines without the use of traditional shielding. Dr. Tom Van Doren proposed a design concept for transmission lines to a class of engineers while visiting New Mexico. This design concept works by maximizing electric (E) and magnetic (H) field containment between operating transmission lines to achieve what he called “Self-Shielding”. By making the geometric centroid of the outgoing current coincident with that of the return current, maximum field containment is achieved. The reciprocal should be true as well, resulting in greater attenuation of incident fields. Figures 1(a)-1(b) are examples of designs where the current centroids are coincident. Coax cables are good examples of transmission lines with co-located centroids, but they demonstrate excellent field attenuation for other reasons and can't be used to test this design concept. Figure 1(b) is a flex circuit design that demonstrates the implementation of self-shielding versus a standard conductor layout.
International Nuclear Information System (INIS)
Noorddin Ibrahim; Rosnie Akang
2009-01-01
One of the major problems encountered during the irradiation of large inhomogeneous samples in neutron activation analysis is the perturbation of the neutron field due to absorption and scattering of neutrons within the sample, as well as along the neutron guide in the case of prompt gamma activation analysis. The magnitude of this perturbation, expressed by the self-shielding coefficient and the flux depression, depends on several factors including the average neutron energy, the size and shape of the sample, and the macroscopic absorption cross section of the sample. In this study, we use the Monte Carlo N-Particle code to simulate the variation of the neutron self-shielding coefficient and the thermal flux depression factor as a function of the macroscopic thermal absorption cross section. The simulation work was carried out using the high performance computing facility available at UTM, while the experimental work was performed at the tangential beam port of Reactor TRIGA PUSPATI, Malaysian Nuclear Agency. The neutron flux measured along the beam port is found to be in good agreement with the simulated data. Our simulation results also reveal that the total flux perturbation factor decreases as the absorption increases. This factor is close to unity for a weakly absorbing sample and tends towards zero for a strong absorber. In addition, a sample with a long mean chord length produces a smaller flux perturbation than one with a shorter mean chord length. Comparing the graphs of the self-shielding factor and the total disturbance, we conclude that the total disturbance of the thermal neutron flux in large samples is dominated by the self-shielding effect. (Author)
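A common back-of-the-envelope model for the trends reported in this abstract (not the MCNP simulation itself) combines Cauchy's mean-chord formula with a single-collision absorption estimate; the sample dimensions below are illustrative:

```python
import numpy as np

def mean_chord(volume, surface):
    # Cauchy's theorem: the mean chord of a convex body in an isotropic
    # flux is 4V/S.
    return 4.0 * volume / surface

def self_shielding(sigma_a, ell):
    # Single-collision estimate G = (1 - exp(-Sigma_a*lbar)) / (Sigma_a*lbar):
    # G -> 1 for a weak absorber and G -> 0 for a strong one.
    tau = sigma_a * ell
    if tau < 1e-12:
        return 1.0
    return (1.0 - np.exp(-tau)) / tau

# Cylindrical sample, radius 0.5 cm, height 1.0 cm (illustrative dimensions).
r, h = 0.5, 1.0
ell = mean_chord(np.pi * r**2 * h, 2.0 * np.pi * r * h + 2.0 * np.pi * r**2)
print(ell, [self_shielding(s, ell) for s in (0.01, 0.1, 1.0, 10.0)])
```

This reproduces the qualitative behaviour described above: G near unity for small macroscopic absorption cross sections and tending towards zero for strong absorbers.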
International Nuclear Information System (INIS)
Karthikeyan, Ramamoorthy; Hebert, Alain
2008-01-01
A high conversion light water reactor lattice has been analysed using the code DRAGON Version 4. This analysis was performed to test the performance of the advanced self-shielding models incorporated in DRAGON Version 4. The self-shielding models are broadly classified into two groups: 'equivalence in dilution' and 'subgroup approach'. Under the 'equivalence in dilution' approach we have analysed the generalized Stamm'ler model, with and without the Nordheim model and Riemann integration. These models have also been analysed using the Livolant-Jeanpierre normalization. Under the 'subgroup approach', we have analysed the statistical self-shielding model based on physical probability tables and the Ribon extended self-shielding model based on mathematical probability tables. This analysis helps in understanding the performance of advanced self-shielding models for a lattice that is tight and has a large fraction of fissions occurring in the resonance region. The nuclear data for the analysis were generated in-house. NJOY99.90 was used to generate libraries in DRAGLIB format for analysis using DRAGON, and ACE (A Compact ENDF) libraries for analysis using MCNP5. The evaluated data files were chosen based on the recommendations of the IAEA Co-ordinated Research Project on the WIMS Library Update Project. The reference solution for the problem was obtained using the Monte Carlo code MCNP5. It was found that the Ribon extended self-shielding model based on mathematical probability tables using the correlation model performed better than all the other models
Uncertainty Analysis with Considering Resonance Self-shielding Effect
International Nuclear Information System (INIS)
Han, Tae Young
2016-01-01
If infinitely dilute multi-group cross sections were used for the sensitivities, the covariance data from the evaluated nuclear data library (ENDL) could be applied directly. However, when a self-shielded multi-group cross section is used, the covariance data should be corrected for the self-shielding effect. The implicit uncertainty can be defined as the uncertainty change caused by the resonance self-shielding effect, as described above. MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) has been developed for multiplication factor and cross section uncertainty analysis based on generalized perturbation theory; it can, however, only quantify the explicit uncertainty from the self-shielded multi-group cross sections, without considering the implicit effect. Thus, this paper addresses the implementation of an implicit uncertainty analysis module in the code, and numerical results for the verification are provided. The implicit uncertainty analysis module has been implemented in MUSAD based on the consistent method using infinitely dilute cross sections. The verification calculation was performed on MHTGR 350 Ex.I-1a, and the differences from the McCARD results decrease from 40% to 1% in the CZP case and 3% in the HFP case. From this study, it is expected that the MUSAD code can reasonably produce the complete uncertainty for VHTRs or LWRs, where the resonance self-shielding effect must be significantly considered
Uncertainty Analysis with Considering Resonance Self-shielding Effect
Energy Technology Data Exchange (ETDEWEB)
Han, Tae Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
If infinitely dilute multi-group cross sections were used for the sensitivities, the covariance data from the evaluated nuclear data library (ENDL) could be applied directly. However, when a self-shielded multi-group cross section is used, the covariance data should be corrected for the self-shielding effect. The implicit uncertainty can be defined as the uncertainty change caused by the resonance self-shielding effect, as described above. MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) has been developed for multiplication factor and cross section uncertainty analysis based on generalized perturbation theory; it can, however, only quantify the explicit uncertainty from the self-shielded multi-group cross sections, without considering the implicit effect. Thus, this paper addresses the implementation of an implicit uncertainty analysis module in the code, and numerical results for the verification are provided. The implicit uncertainty analysis module has been implemented in MUSAD based on the consistent method using infinitely dilute cross sections. The verification calculation was performed on MHTGR 350 Ex.I-1a, and the differences from the McCARD results decrease from 40% to 1% in the CZP case and 3% in the HFP case. From this study, it is expected that the MUSAD code can reasonably produce the complete uncertainty for VHTRs or LWRs, where the resonance self-shielding effect must be significantly considered.
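The explicit part of the uncertainty propagation described above is the standard first-order "sandwich rule". The sensitivities and covariance values below are invented placeholders, not MUSAD or McCARD output:

```python
import numpy as np

# Invented 3-group relative sensitivities of k-eff to one cross section.
s = np.array([-0.05, -0.12, -0.30])            # (dk/k) / (dsigma/sigma)

# Invented relative covariance matrix of that cross section (3 groups).
c = np.array([[4.0, 1.0, 0.5],
              [1.0, 9.0, 2.0],
              [0.5, 2.0, 16.0]]) * 1.0e-4      # (dsigma/sigma)^2

# Sandwich rule: relative variance of k is s^T C s.
rel_var = s @ c @ s
rel_std_pct = 100.0 * np.sqrt(rel_var)
print(rel_std_pct)
```

An implicit-effect treatment would, in addition, correct the covariance (or fold extra terms into the sensitivities) for the dependence of the self-shielded cross sections on the perturbed data.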
International Nuclear Information System (INIS)
Coste-Delclaux, M.; Aggery, A.; Huot, N.
2005-01-01
APOLLO2 is a modular multigroup transport code developed by the CEA at Saclay. Until last year, the self-shielding module could only treat one resonant isotope mixed with moderator isotopes. Consequently, the self-shielding treatment of a resonant mixture was an iterative one: each resonant isotope of the mixture was treated separately, the other resonant isotopes of the mixture then being considered as moderator isotopes, that is to say non-resonant isotopes, and this treatment could be iterated. Last year, we developed a new method that consists in treating the resonant mixture as a unique entity. A main feature of the APOLLO2 self-shielding module is that some of the implemented models are very general and therefore very powerful and versatile. We can give, as examples, the use of probability tables to describe the microscopic cross-section fluctuations, and the TR slowing-down model, which can deal with any resonance shape. The self-shielding treatment of a resonant mixture was developed essentially thanks to these two models. Calculations of a simplified Jules Horowitz reactor, using a Monte Carlo code (TRIPOLI4) as a reference and APOLLO2 in its standard and improved versions, show that, as far as the effective multiplication factor is concerned, the mixture treatment does not bring an improvement, because the new treatment suppresses compensations between the reaction-rate discrepancies. The 300 pcm discrepancy with the reference calculation is in accordance with the technical specifications of the Jules Horowitz reactor.
Directory of Open Access Journals (Sweden)
Shane Stimpson
2017-09-01
An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code MPACT currently uses the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. By applying the same concepts to self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry, known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a, and (2) a two-dimensional quarter-core slice, known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Given these performance benefits, these approaches have been adopted as the default in MPACT.
International Nuclear Information System (INIS)
Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.; Clarno, Kevin T.
2017-01-01
An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code MPACT currently uses the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. By applying the same concepts to self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.
International Nuclear Information System (INIS)
Froehner, F.H.; Larson, Duane C.; Tagesen, Siegfried; Petrizzi, Luigi; Hasegawa, Akira; Nakagawa, Tsuneo; Hogenbirk, Alfred; Weigmann, H.
1995-01-01
A Working Party on International Evaluation Co-operation was established under the sponsorship of the OECD/NEA Nuclear Science Committee (NSC) to promote the exchange of information on nuclear data evaluations, validation, and related topics. Its aim is also to provide a framework for co-operative activities between members of the major nuclear data evaluation projects. This includes the possible exchange of scientists in order to encourage co-operation. Requirements for experimental data resulting from this activity are compiled. The Working Party determines common criteria for evaluated nuclear data files with a view to assessing and improving the quality and completeness of evaluated data. The parties to the project are: ENDF (United States), JEFF/EFF (NEA Data Bank member countries), and JENDL (Japan). Co-operation with evaluation projects of non-OECD countries is organised through the Nuclear Data Section of the International Atomic Energy Agency (IAEA). NEA/NSC Subgroup 15 was tasked with assessing self-shielding effects in the unresolved resonance range of structural materials, in particular their importance at various energies, and possible ways to deal with them in shielding and activation work. The principal results achieved are summarised briefly, in particular: - a new data base consisting of high-resolution transmission data measured at Oak Ridge and Geel; - an improved theoretical understanding of cross-section fluctuations, including their prediction, derived from the Hauser-Feshbach theory; - benchmark results on the importance of self-shielding in iron at various energies; - consequences for information storage in evaluated nuclear data files; - practical utilisation of self-shielding information from evaluated files. Benchmark results as well as the Hauser-Feshbach theory show that self-shielding effects are important up to 4- or 5-MeV neutron energies. Fluctuation factors extracted from high-resolution total cross-section data can be
REPOSITORY LAYOUT SUPPORTING DESIGN FEATURE NO.13 - WASTE PACKAGE SELF SHIELDING
International Nuclear Information System (INIS)
Owen, J.
1999-01-01
The objective of this analysis is to develop a repository layout, for Feature No. 13, that will accommodate self-shielding waste packages (WP) with an areal mass loading of 25 metric tons of uranium per acre (MTU/acre). The scope of this analysis includes determination of the number of emplacement drifts, amount of emplacement drift excavation required, and a preliminary layout for illustrative purposes
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher-resolution x-ray imaging techniques, as opposed to nuclear medicine imaging, suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time after removal from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions; it is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the
Self shielding in cylindrical fissile sources in the APNea system
International Nuclear Information System (INIS)
Hensley, D.
1997-01-01
In order for a source of fissile material to be useful as a calibration instrument, it is necessary to know not only how much fissile material is in the source but also what the effective fissile content is. Because uranium and plutonium absorb thermal neutrons so efficiently, material in the center of a sample is shielded from the external thermal flux by the surface layers of the material. Differential die-away measurements in the APNea System of five different sets of cylindrical fissile sources show the various self shielding effects that are routinely encountered. A method for calculating the self shielding effect is presented and its predictions are compared with the experimental results
Neutron self-shielding with k0-NAA irradiations
International Nuclear Information System (INIS)
Chilian, C.; Chambon, R.; Kennedy, G.
2010-01-01
A sample of SMELS Type II reference material was mixed with powdered Cd-nitrate neutron absorber and analysed by k0-NAA for 10 elements. The thermal neutron self-shielding effect was found to be 34.8%. When flux monitors were irradiated sufficiently far from the absorbing sample, it was found that the self-shielding could be corrected accurately using an analytical formula and an iterative calculation. When the flux monitors were irradiated 2 mm from the absorbing sample, the calculations over-corrected the concentrations by as much as 30%. It is recommended to irradiate flux monitors at least 14 mm from a 10 mm diameter absorbing sample.
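The iterative self-shielding correction described in this abstract can be sketched in a few lines. This is an illustrative Python sketch under simplifying assumptions (a slab sample in a perpendicular beam, using the first-order factor f = (1 - e^-x)/x), not the authors' formula; all function names are ours. The point is the fixed-point structure: the true concentration sets the macroscopic absorption, which sets f, which rescales the apparent concentration.

```python
import math

def self_shielding_slab(sigma_a_macroscopic, thickness_cm):
    """First-order self-shielding factor for an absorbing slab in a
    perpendicular beam: f = (1 - exp(-x)) / x, with x = Sigma_a * t."""
    x = sigma_a_macroscopic * thickness_cm
    if x < 1e-12:
        return 1.0
    return (1.0 - math.exp(-x)) / x

def corrected_concentration(apparent_conc, sigma_a_per_conc, thickness_cm,
                            tol=1e-10, max_iter=200):
    """Fixed-point iteration: apparent = true * f(Sigma_a(true)), solved
    for the true concentration."""
    c = apparent_conc
    for _ in range(max_iter):
        f = self_shielding_slab(sigma_a_per_conc * c, thickness_cm)
        c_new = apparent_conc / f
        if abs(c_new - c) < tol * c:
            return c_new
        c = c_new
    return c
```

The iteration converges quickly for moderate absorption, since f varies slowly with concentration there; for very strong absorbers the contraction weakens, which is one reason an accurate analytical formula matters.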
RZ calculations for self shielded multigroup cross sections
Energy Technology Data Exchange (ETDEWEB)
Li, M.; Sanchez, R.; Zmijarevic, I.; Stankovski, Z. [Commissariat a l' Energie Atomique CEA, Direction de l' Energie Nucleaire, DEN/DM2S/SERMA/LENR, 91191 Gif-sur-Yvette Cedex (France)
2006-07-01
A collision probability method has been implemented for RZ geometries. The method accounts for white albedo, specular and translation boundary conditions on the top and bottom surfaces of the geometry, and for a white albedo condition on the outer radial surface. We have applied the RZ CP method to the calculation of multigroup self shielded cross sections for Gadolinia absorbers in BWRs. (authors)
RZ calculations for self shielded multigroup cross sections
International Nuclear Information System (INIS)
Li, M.; Sanchez, R.; Zmijarevic, I.; Stankovski, Z.
2006-01-01
A collision probability method has been implemented for RZ geometries. The method accounts for white albedo, specular and translation boundary conditions on the top and bottom surfaces of the geometry, and for a white albedo condition on the outer radial surface. We have applied the RZ CP method to the calculation of multigroup self shielded cross sections for Gadolinia absorbers in BWRs. (authors)
International Nuclear Information System (INIS)
Sudarshan, K.; Tripathi, R.; Nair, A.G.C.; Acharya, R.; Reddy, A.V.R.; Goswami, A.
2005-01-01
A simple method using an internal standard is proposed to correct for the self-shielding effect of B, Cd and Gd in a matrix. This increases the linear dynamic range of PGNAA in analyzing samples containing these elements. The method is validated by analyzing synthetic samples containing large amounts of B, Cd, Hg and Gd, elements having high neutron absorption cross-sections, in aqueous solutions and solid forms. A simple Monte Carlo simulation to find the extent of self-shielding in the matrix is presented. The method is applied to the analysis of a titanium boride alloy containing a large amount of boron. The satisfactory results obtained show the efficacy of the method in correcting for self-shielding effects in the sample.
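The internal-standard idea can be illustrated with a toy calculation (our own sketch, not the authors' code): the matrix self-shielding factor multiplies the count rate of every element in the sample equally, so it cancels in the analyte-to-standard ratio.

```python
def conc_by_internal_standard(counts_x, counts_std, sens_x, sens_std, conc_std):
    """Internal-standard analysis: counts_i = f * phi0 * sens_i * conc_i for
    each element i, where f is the (unknown) matrix self-shielding factor
    and phi0 the unperturbed flux. Both f and phi0 cancel in the ratio, so
    the analyte concentration follows without knowing either."""
    return conc_std * (counts_x / counts_std) * (sens_std / sens_x)
```

Here `sens_i` stands for a per-element sensitivity (detection efficiency times cross section, in whatever consistent units); the names are ours for illustration.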
Radiation monitoring in a self-shielded cyclotron installation
International Nuclear Information System (INIS)
Capaccioli, L.; Gori, C.; Mazzocchi, S.; Spano, G.
2002-01-01
As nuclear medicine approaches a new era with the spectacular growth of PET diagnosis, the number of medical cyclotrons installed within major hospitals is increasing accordingly. Modern medical cyclotrons are therefore highly engineered and highly reliable machines, characterised by reduced accelerating energies (as the major goal is the production of fluorine-18) and often self-shielded. However, specific dedicated monitors are still necessary in order to assure proper radioprotection. At the Careggi University Hospital in Florence, a MINItrace 10 MeV self-shielded cyclotron produced by General Electric was installed in 2000. In a contiguous radiochemistry laboratory, the preparation and quality control of 18F-FDG and other radiopharmaceuticals takes place. The aim of this work is the characterisation and proper calibration of the above-mentioned monitors and control devices
Unresolved resonance self shielding calculation: causes and importance of discrepancies
International Nuclear Information System (INIS)
Ribon, P.; Tellier, H.
1986-09-01
To compute the self shielding coefficient, it is necessary to know the point-wise cross-sections. In the unresolved resonance region, we do not know the parameters of each level but only the average parameters. Therefore we simulate the point-wise cross-section by random sampling of the energy levels and resonance parameters with respect to the Wigner law and the χ2 distributions, and by computing the cross-section in the same way as in the resolved regions. The result of this statistical calculation obviously depends on the initial parameters but also on the method of sampling, on the formalism which is used to compute the cross-section or on the weighting neutron flux. In this paper, we will survey the main phenomena which can induce discrepancies in self shielding computations. Results are given for typical dilutions which occur in nuclear reactors. 8 refs
Unresolved resonance self shielding calculation: causes and importance of discrepancies
International Nuclear Information System (INIS)
Ribon, P.; Tellier, H.
1986-01-01
To compute the self shielding coefficient, it is necessary to know the point-wise cross-sections. In the unresolved resonance region, the parameters of each level are not known, only the average parameters are. Therefore the authors simulate the point-wise cross-section by random sampling of the energy levels and resonance parameters with respect to the Wigner law and the χ2 distributions, and by computing the cross-section in the same way as in the resolved regions. The result of this statistical calculation obviously depends on the initial parameters but also on the method of sampling, on the formalism which is used to compute the cross-section and on the weighting neutron flux. In this paper, the authors survey the main phenomena which can induce discrepancies in self shielding computations. Results are given for typical dilutions which occur in nuclear reactors
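The ladder sampling described in these two abstracts can be sketched minimally as follows. This is our own illustrative Python sketch, not the authors' code: level spacings are drawn from the Wigner surmise and widths from a chi-squared distribution (one degree of freedom is Porter-Thomas); real codes do this per spin sequence and then rebuild pointwise cross sections with a resonance formalism.

```python
import math
import random

def wigner_spacing(mean_spacing, rng):
    """Inverse-CDF sample of the Wigner surmise
    P(s) = (pi s / 2 D^2) exp(-pi s^2 / 4 D^2), whose mean is D."""
    u = rng.random()
    return mean_spacing * math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

def chi2_width(mean_width, dof, rng):
    """Width from a chi-squared distribution with `dof` degrees of freedom,
    normalized to the given mean (dof = 1 is Porter-Thomas)."""
    return mean_width * rng.gammavariate(dof / 2.0, 2.0 / dof)

def sample_ladder(e_lo, e_hi, mean_spacing, mean_width, dof=1, seed=42):
    """One statistical realization of resonance energies and widths."""
    rng = random.Random(seed)
    energies, widths = [], []
    e = e_lo
    while True:
        e += wigner_spacing(mean_spacing, rng)
        if e >= e_hi:
            return energies, widths
        energies.append(e)
        widths.append(chi2_width(mean_width, dof, rng))
```

Averaging self-shielded quantities over many such ladders (different seeds) exposes exactly the sampling-method sensitivity the abstracts discuss.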
Revisiting the Stamm'ler self-shielding method
International Nuclear Information System (INIS)
Hebert, A.
2004-01-01
The generalized Stamm'ler method has been used in lattice codes such as PHOENIX, WIMS-AECL and DRAGON-IST for computing self-shielded cross sections prior to the main flux calculation. This method suffers from deficiencies such as its low accuracy and its inability to represent distributed self-shielding effects in a fuel rod or across a fuel bundle. The paper describes improvements that could be made to the generalized Stamm'ler method in order to mitigate these two defects. A validation is presented for the case of 238U nuclides located in different geometries. The isotopic absorption rates obtained with the proposed numerical scheme are compared with exact values obtained with a fine-group elastic slowing-down calculation in the resolved energy domain. (author)
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.; Le Tellier, R.; Santamarina, A.; Litaize, O.
2008-01-01
Calculations based on the method of characteristics and different self-shielding models are presented for 9 x 9 boiling water reactor (BWR) assemblies fully loaded with mixed-oxide (MOX) fuel. The geometry of these assemblies was recovered from the BASALA experimental program. We have focused our study on three configurations simulating the different voiding conditions that an assembly can undergo in a BWR pressure vessel. A parametric study was carried out with respect to the spatial discretization, the tracking parameters, and the anisotropy order. Comparisons with Monte Carlo calculations in terms of k-eff, radiative capture, and fission rates were performed to validate the computational tools. The stochastic and deterministic results are in good agreement. The mutual self-shielding model recently introduced within the framework of the Ribon extended self-shielding method appears to be useful for this type of assembly. Indeed, in the calculation of these MOX benchmarks, the overlapping of resonances, especially between 238U and 240Pu, plays an important role due to the spectral hardening of the flux as the voiding percentage is increased. The method of characteristics is shown to be adequate to perform accurate calculations handling a fine spatial discretization. (authors)
Self Shielding in Nuclear Fissile Assay Using LSDS
International Nuclear Information System (INIS)
Lee, Yong Deok; Park, Chang Je; Park, Geun Il; Song, Kee Chan
2012-01-01
New technology for the assay of isotopic fissile material content is under development at KAERI using a lead slowing-down spectrometer (LSDS). LSDS is very sensitive in distinguishing the fission signals of each fissile isotope in spent and recycled fuel. The accumulation of spent fuel is currently a major issue; the amount of spent fuel will soon reach the maximum storage capacity of the pools. Therefore, an interim storage facility must be found, and its design should be optimized by applying accurate fissile content data. When the storage comes into operation, all the nuclear material must also be specified and verified for safety, economics and management. Generally, spent fuel from a PWR contains ∼1% unburned 235U, ∼0.5% plutonium produced through the decay chain, ∼3% fission products, ∼0.1% minor actinides (MA), and the uranium remainder. About 1.5% fissile material still exists in the spent fuel. Therefore, for reutilization of the fissile material in spent fuel in an SFR, resource material is produced through the pyro process. The fissile material content of the resource material must be analyzed before fabricating SFR fuel, for reactor safety and economics. In the assay of the fissile content of spent and recycled fuel, the intense radiation background limits direct analysis of fissile materials; LSDS, however, is not influenced by such a background. Based on the chosen geometry setup, the self-shielding parameter was calculated at the fuel assay zone by introducing spent fuel or pyro-produced nuclear material. When nuclear material is inserted into the assay area, the spent fuel assembly or pyro-recycled fuel material perturbs the spatial distribution of the slowing-down neutrons in lead, and the prompt fast fission neutrons produced by the fissile materials are also perturbed. The self-shielding factor is interpreted as how much absorption is created inside the fuel area when it is in the lead. The self-shielding effect provides a non-linear property in the isotopic
Situations of potential exposure in self-shielding electron accelerators
International Nuclear Information System (INIS)
Rios, D.A.S.; Rios, P.B.; Sordi, G.M.A.A.; Carneiro, J.C.G.G.
2017-01-01
The study discusses situations in the industrial environment that may lead to potential exposure of occupationally exposed individuals and members of the public at self-shielded electron accelerators. Although these exposure situations are unlikely, simulation exercises can lead to improvements in the operating procedure, as well as suggest changes in production-line design, in order to increase radiation protection at work. These studies can also be used in training and demonstrate a solid application of the ALARA principle in the daily activities of radiation facilities
Insufficient self-shielding correction in VITAMIN-B6
International Nuclear Information System (INIS)
Konno, Chikara; Ochiai, Kentaro; Ohnishi, Seiki
2011-01-01
We carried out a simple benchmark calculation test with the multigroup cross-section library VITAMIN-B6, generated from ENDF/B-VI. The model of this test consisted of an iron sphere of 1 m in radius with an isotropic 20 MeV neutron source in the center. Neutron spectra in the sphere were calculated with the Sn code ANISN and VITAMIN-B6 or FENDL/MG-1.1. A calculation with MCNP and ENDF/B-VI was carried out as a reference. The neutron spectra with ANISN and FENDL/MG-1.1 agreed with those with MCNP, while those with ANISN and VITAMIN-B6 differed by as much as 50% from those with MCNP. We found that the discrepancy came from insufficient self-shielding correction, due to the following: (1) the smallest background cross section of 56Fe in VITAMIN-B6 is 1; (2) the weighting flux used in generating VITAMIN-B6 is not adequate. VITAMIN-B6 should be revised for adequate self-shielding correction. (author)
Resonance Self-Shielding Methodologies in SCALE 6
International Nuclear Information System (INIS)
Williams, Mark L.
2011-01-01
SCALE 6 includes several problem-independent multigroup (MG) libraries that were processed from the evaluated nuclear data file ENDF/B using a generic flux spectrum. The library data must be self-shielded and corrected for problem-specific spectral effects for use in MG neutron transport calculations. SCALE 6 computes problem-dependent MG cross sections through a combination of the conventional Bondarenko shielding-factor method and a deterministic continuous-energy (CE) calculation of the fine-structure spectra in the resolved resonance and thermal energy ranges. The CE calculation can be performed using an infinite-medium approximation, a simplified two-region method for lattices, or a one-dimensional discrete ordinates transport calculation with pointwise (PW) cross-section data. This paper describes the SCALE resonance self-shielding methodologies, including the deterministic calculation of the CE flux spectra using PW nuclear data and the method for using CE spectra to produce problem-specific MG cross sections for various configurations (including doubly heterogeneous lattices). It also presents results of verification and validation studies.
International Nuclear Information System (INIS)
Nasrabadi, M.N.; Jalali, M.; Mohammadi, A.
2007-01-01
In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three-dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF3 detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required
URR-PACK: Calculating Self-Shielding in the Unresolved Resonance Energy Range
International Nuclear Information System (INIS)
Cullen, Dermott E.; Trkov, Andrej
2016-07-01
This report describes HOW to calculate self-shielding in the unresolved resonance region (URR), in terms of the computer codes we provide to allow a user to do these calculations himself. Here we only describe HOW to calculate; a longer companion report describes in detail WHY it is necessary to include URR self-shielding.
The problem of resonance self-shielding effect in neutron multigroup calculations
International Nuclear Information System (INIS)
Wang Qingming; Huang Jinghua
1991-01-01
The resonance self-shielding effect cannot be neglected in hybrid blanket and fast reactor neutronics design. The authors discuss its importance, as well as the method of accounting for it, in hybrid blanket and fast reactor multigroup neutron calculations
The self shielding module of Apollo.II; Module d'autoprotection du code Apollo.II
Energy Technology Data Exchange (ETDEWEB)
Sanchez, R.
1994-06-01
This note discusses the methods used in the APOLLO.II code for the calculation of self-shielded multigroup cross sections. Basically, the calculation consists in characterizing a heterogeneous medium with a single parameter, the background cross section, which is then used to interpolate reaction rates from pre-tabulated values. Very fine multigroup slowing-down calculations in homogeneous media are used to generate these tables, which contain absorption, diffusion and production reaction rates per group, resonant isotope, temperature and background cross section. Multigroup self-shielded cross sections are determined from an equivalence that preserves the absorption rates of a slowing-down problem with given sources. This article gives a detailed description of the PIC and 'dilution matrix' formalisms that are used in the homogenization step, as well as the utilization of Bell macro-groups and the different quadrature formulas that may be used in the calculations. Self-shielding techniques for isotopic resonant mixtures are also discussed. (author). 2 refs., 193 figs., 2 tabs.
International Nuclear Information System (INIS)
Leal, L.C.; de Saussure, G.; Perez, R.B.
1990-01-01
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables.
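The probability-table and Lebesgue-integral machinery can be illustrated with a small sketch (our own Python illustration, not the URR code): sampled cross-section values are condensed into equal-probability bands, and a self-shielding factor is then evaluated as a discrete Lebesgue integral over the table.

```python
def probability_table(samples, n_bands):
    """Equal-probability bands: sort the sampled cross sections and average
    within each band; every band then carries probability 1/n_bands.
    Assumes len(samples) is a multiple of n_bands."""
    xs = sorted(samples)
    size = len(xs) // n_bands
    bands = [sum(xs[i * size:(i + 1) * size]) / size for i in range(n_bands)]
    return bands, [1.0 / n_bands] * n_bands

def self_shielding_factor(bands, probs, sigma0):
    """Bondarenko-type factor evaluated as a Lebesgue integral over the
    table, with the narrow-resonance weighting 1/(sigma + sigma0)."""
    num = sum(p * s / (s + sigma0) for p, s in zip(probs, bands))
    den = sum(p / (s + sigma0) for p, s in zip(probs, bands))
    sigma_inf = sum(p * s for p, s in zip(probs, bands))
    return (num / den) / sigma_inf
```

The Lebesgue view is what makes the table useful: once the distribution of cross-section values is captured, any flux-weighted average can be computed without returning to the pointwise energy grid.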
International Nuclear Information System (INIS)
Marques, Andre Luis Ferreira; Ting, Daniel Kao Sun; Mendonca, Arlindo Gilson
1996-01-01
A calculation methodology for flux depression, self-shielding and cadmium factors is presented, using the ANISN code, for experiments conducted at the IPEN/MB-01 research reactor. The correction factors were determined considering the thermal neutron flux and 197Au wires of 0.125 and 0.250 mm diameter. (author)
International Nuclear Information System (INIS)
Li, J.; Nuenighoff; Pohl, C.; Allelein, H.J.
2010-01-01
The gas-cooled high temperature reactor (HTR) represents a valuable option for the future development of nuclear technology because of its excellent safety features. One main safety feature is the negative temperature coefficient, which is due to the Doppler broadening of the (n,γ) resonance absorption cross section. A second important effect is the spatial self-shielding due to the doubly heterogeneous geometry of a pebble bed reactor. At FZ-Juelich two reactor analysis codes have been developed: VSOP for core design and MGT for transient analysis. An update of the nuclear cross-section libraries of both codes to ENDF/B-VII.0 is currently under way. In order to take the temperature dependency as well as the spatial self-shielding into account, the absorption cross sections σ(n,γ) for resonance absorbers like 232Th and 238U have to be provided as functions of incident neutron energy, temperature and nuclide concentration. There are two reasons for choosing the Monte Carlo approach to calculate group-wise cross sections. First, the previously applied ZUT-DGL code used to generate the resonance cross-section tables for MGT cannot so far handle the new resonance description based on Reich-Moore instead of single-level Breit-Wigner. Second, the rising interest in PuO2 fuel motivated an investigation of the generation of group-wise cross sections describing the thermal resonances of 240Pu and 242Pu. (orig.)
Self-shielding for thick slabs in a converging neutron beam
Mildner, D F R
1999-01-01
We have previously given a correction to the neutron self-shielding for a thin slab to account for the increased average path length through the slab when irradiated in a converging neutron beam. This expression overstates the case for the self-shielding for a thick (or highly absorbing) slab. We give a better approximation to the increase in effective shielding correction for a slab placed in a converging neutron beam. It is negligible at large absorption mean free paths. (author)
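The effect described here can be explored numerically with a toy sketch (ours, not Mildner's expression): average the slab factor (1 - e^-x)/x over the path lengths x = Σt/cosθ present in the beam, assuming for simplicity a uniform distribution of incidence angles up to the convergence half-angle.

```python
import math

def slab_self_shielding_converging(sigma_t, thickness, half_angle_rad, n=2000):
    """Average the slab self-shielding factor (1 - e^-x)/x over the
    inclined path lengths x = Sigma*t/cos(theta) of a converging beam.
    A uniform distribution in theta is a simplifying assumption."""
    acc = 0.0
    for i in range(n):
        theta = (i + 0.5) / n * half_angle_rad
        x = sigma_t * thickness / math.cos(theta)
        acc += (1.0 - math.exp(-x)) / x
    return acc / n
```

Because the inclined paths are longer, the converging-beam factor always lies below the normal-incidence value, and the gap grows with slab thickness and absorption, which is the thick-slab regime the abstract addresses.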
Self-shielding of hydrogen in the IGM during the epoch of reionization
Chardin, Jonathan; Kulkarni, Girish; Haehnelt, Martin G.
2018-04-01
We investigate self-shielding of intergalactic hydrogen against ionizing radiation in radiative transfer simulations of cosmic reionization carefully calibrated with Lyα forest data. While self-shielded regions manifest as Lyman-limit systems in the post-reionization Universe, here we focus on their evolution during reionization (redshifts z = 6-10). At these redshifts, the spatial distribution of hydrogen-ionizing radiation is highly inhomogeneous, and some regions of the Universe are still neutral. After masking the neutral regions and ionizing sources in the simulation, we find that the hydrogen photoionization rate depends on the local hydrogen density in a manner very similar to that in the post-reionization Universe. The characteristic physical hydrogen density above which self-shielding becomes important at these redshifts is about n_H ~ 3 × 10^-3 cm^-3, or ~20 times the mean hydrogen density, reflecting the fact that during reionization photoionization rates are typically low enough that the filaments in the cosmic web are often self-shielded. The value of the typical self-shielding density decreases by a factor of 3 between redshifts z = 3 and 10, and follows the evolution of the average photoionization rate in ionized regions in a simple fashion. We provide a simple parameterization of the photoionization rate as a function of density in self-shielded regions during the epoch of reionization.
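The kind of parameterization referred to can be illustrated with the widely used fitting form of Rahmati et al. (2013), which such parameterizations build on. The coefficients below are the Rahmati et al. values, shown only for illustration; they are not the fit coefficients derived in this paper.

```python
def gamma_over_gamma_uvb(n_h, n_ssh):
    """Photoionization-rate suppression Gamma(n_H)/Gamma_UVB as a function
    of hydrogen density, in the fitting form of Rahmati et al. (2013);
    n_ssh is the characteristic self-shielding density (same units as n_h)."""
    x = n_h / n_ssh
    return 0.98 * (1.0 + x ** 1.64) ** -2.28 + 0.02 * (1.0 + x) ** -0.84
```

At densities well below n_ssh the suppression factor is ~1 (optically thin gas sees the full background), and it falls steeply above n_ssh, which is how a self-shielding threshold like n_H ~ 3 × 10^-3 cm^-3 enters simulation post-processing.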
New Improvements in Mixture Self-Shielding Treatment with APOLLO2 Code
International Nuclear Information System (INIS)
Coste-Delclaux, M.
2006-01-01
Full text of the presentation follows: APOLLO2 is a modular multigroup transport code developed at the CEA in Saclay (France). Previously, the self-shielding module could only treat one resonant isotope mixed with moderator isotopes. Consequently, the self-shielding treatment of a resonant mixture was an iterative one: each resonant isotope of the mixture was treated separately, the other resonant isotopes then being considered as moderator isotopes, that is to say non-resonant isotopes, and this treatment could be iterated. Recently, we have developed a new method that treats the resonant mixture as a single entity. A main feature of the APOLLO2 self-shielding module is that some of the implemented models are very general and therefore powerful and versatile; examples are the use of probability tables to describe the microscopic cross-section fluctuations, and the TR slowing-down model, which can deal with any resonance shape. The self-shielding treatment of a resonant mixture was developed essentially thanks to these two models. The goal of this paper is to describe the improvements to the self-shielding treatment of a resonant mixture and to present, as an application, the calculation of the ATRIUM-10 BWR benchmark. We conclude with some prospects on remaining work in the self-shielding domain. (author)
International Nuclear Information System (INIS)
Palma, Daniel A.; Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C.
2008-01-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors in the epithermal range, G_epi(τ,ξ). Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of the ψ(x,ξ) function, namely the 4-pole Padé method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provide satisfactory accuracy. (authors)
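For reference, the ψ(x,ξ) function being approximated can be evaluated directly by quadrature; the sketch below does so purely as a check on the definition (the 4-pole Padé and Frobenius approximations themselves are not reproduced here).

```python
import numpy as np

def psi(x, xi, half_width=60.0, n=400001):
    """Doppler broadening function
       psi(x, xi) = xi / (2 sqrt(pi)) * Int exp(-xi^2 (x - y)^2 / 4) / (1 + y^2) dy,
    evaluated by brute-force trapezoidal-style quadrature. Production codes
    use fast approximations (e.g. 4-pole Pade or Frobenius, as compared in
    the abstract); this direct form serves only as a reference."""
    y, dy = np.linspace(-half_width, half_width, n, retstep=True)
    integrand = np.exp(-(xi ** 2) * (x - y) ** 2 / 4.0) / (1.0 + y ** 2)
    return float(xi / (2.0 * np.sqrt(np.pi)) * np.sum(integrand) * dy)
```

A standard sanity check: for ξ → ∞ the Doppler width vanishes and ψ(x,ξ) tends to the natural line shape 1/(1+x²).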
Directory of Open Access Journals (Sweden)
Zeng Huilin
2014-10-01
Full Text Available In order to realize automatic welding of pipes in a complex operation environment, an automatic welding system has been developed using all-position self-shielded flux-cored wires, chosen for their advantages: all-position weldability, good detachability, arc stability, a low incidence of incomplete fusion, and no need for shielding gas or wind protection when the wind speed is below 8 m/s. The system consists of a welding carrier, a guide rail, an auto-control system, a welding power source, a wire feeder, and so on. Welding experiments were performed with this system on X-80 pipeline steel to determine proper welding parameters. The welding procedure comprises root welding, filling welding and cover welding, and their parameters were obtained from experimental analysis. On this basis, mechanical property tests were carried out on the welded joints. Results show that this system improves the continuity and stability of the whole welding process, and that the welded joints' inherent quality, appearance and mechanical performance all meet the welding criteria for X-80 pipeline steel; with no need for windbreak fences, the overall welding cost is sharply reduced. Proposals are also presented for further research and development of these self-shielded flux-cored wires.
Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes
International Nuclear Information System (INIS)
Hebert, Alain; Coste, Mireille
2002-01-01
As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented.
New improvements in the self-shielding formalism of the Apollo-2 code
International Nuclear Information System (INIS)
Coste, M.; Tellier, H.; Ribon, P.; Raepsaet, V.; Van der Gucht, C.
1993-01-01
One important modeling component of a transport code working on a coarse energy mesh is self-shielding. The French transport code APOLLO 2, developed at the Commissariat a l'Energie Atomique, uses a self-shielding formalism based on a double equivalence. First a homogenization gives the reaction rates in a heterogeneous geometry, and then a multigroup equivalence gives, once the reaction rates are known, the self-shielded cross-sections. The homogenization is a very sensitive part because it is the one which requires physical modeling. We have added a new model which allows us to treat numerous narrow resonances statistically distributed within the same group of the multigroup mesh. It is important to note that for a narrow resonance isolated in a group, this new model is equivalent to the previous narrow resonance (NR) model.
A Monte Carlo simulation technique to determine the optimal portfolio
Directory of Open Access Journals (Sweden)
Hassan Ghodrati
2014-03-01
Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns in any stock market is assessing the risk associated with various assets. One recognized method for measuring, forecasting and managing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk using standard statistical techniques, and it has increasingly been applied in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation at the 95% confidence level. The variable used was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight of each selected stock was determined using a hybrid Markowitz-Winker model. The results showed that, at the 95% confidence level, the maximum one-day loss would not exceed 1,259,432 Rials.
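A minimal sketch of the core Monte Carlo VaR computation described above, with invented return statistics (the study's actual Tehran Stock Exchange data and the Markowitz-Winker weighting step are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed daily-return statistics (mean, standard deviation); the study
# estimated such quantities from 2009-2011 daily returns.
mu, sigma = 0.0005, 0.02
portfolio_value = 1.0

# Simulate many one-day returns; the 95% VaR is the loss at the
# 5th percentile of the simulated return distribution.
returns = rng.normal(mu, sigma, 100_000)
var_95 = -np.percentile(returns, 5) * portfolio_value
```

With these assumed parameters the simulated VaR is close to the parametric value 1.645σ − μ, as expected for a normal return model; real studies replace the normal draw with resampled or fitted historical returns.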
Calculation of the electron trajectory for 200 kV self-shielded electron accelerator
International Nuclear Information System (INIS)
Wang Shuiqing
2000-01-01
In order to calculate the electron trajectories of a 200 kV self-shielded electron accelerator, the electric field is calculated with the TRAJ program, which follows the electron track mesh points one by one to compute the beam trajectories. By determining the effect of grid voltage on the electron optics and its focusing effect at various energy levels, the authors obtained a scientific basis for adjusting the grid voltage and accumulated experience for the future design of self-shielded electron accelerators and electron-curtain systems.
SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-06-06
The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights, from the resonance self-shielded cross-section table as a function of background cross-section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) the narrow resonance approximation, (b) pointwise flux calculations for homogeneous media, and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are intended for use in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronics simulator MPACT, whose primary resonance self-shielding method is the subgroup method.
International Nuclear Information System (INIS)
Patino, N.E.; Abbate, M.J.; Sbaffoni, M.M.
1990-01-01
The procedures employed in the treatment of the resonance shielding effect have been identified as one of the causes of the large discrepancies found in the neutronic calculation of high conversion light water reactors (HCLWRs), indicating the need for a revision of the self-shielding procedures employed. In this work some well known techniques applied in HCLWR self-shielding calculations are evaluated; the study involves the comparison of methods for the generation of group constants, the analysis of the impact of considering some isotopes as infinitely diluted and the evaluation of the usual approximations utilized for the treatment of heterogeneities
Self-shielding effect in unresolved resonance data in JENDL-4.0
International Nuclear Information System (INIS)
Konno, Chikara; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi; Kato, Yoshinari
2012-01-01
At the International Conference on Nuclear Data for Science and Technology in 2007 we pointed out that most of the unresolved resonance data in JENDL-3.3 have a problem related to self-shielding correction. Here, with a simple calculation model, we have investigated whether the latest JENDL release, JENDL-4.0, resolves the problem. The results suggest that the unresolved resonance data in JENDL-4.0 no longer have that problem, but that the self-shielding effects for these data appear to be too large. New benchmark experiments for unresolved resonance data are strongly recommended in order to verify them. (author)
International Nuclear Information System (INIS)
Cullen, D.E.
1978-01-01
Bondarenko self-shielded cross sections and multiband parameters from the Lawrence Livermore Laboratory Evaluated-Nuclear-Data Library (ENDL) as of July 4, 1978 are presented. These data include total, elastic, capture, and fission cross sections in the TART 175-group structure. Multiband parameters are listed. The Bondarenko self-shielded cross sections and the multiband parameters are presented on microfiche.
Theoretical evaluation of self-shielding factors due to scattering resonances in foils
International Nuclear Information System (INIS)
Selander, W.N.
1960-06-01
A semi-analytical method is given for evaluating self-shielding factors for activation measurements which use thin foils having neutron scattering resonances. The energy loss by scattering in the foil is taken into account. The energy-dependent neutron angular distribution is expanded as a double series, the coefficients of which are (energy-dependent) solutions of an infinite set of coupled integral equations. These are truncated in some suitable manner and solved numerically. The leading term of the series is proportional to the average, or effective, flux in the activation sample. The product of this term and the neutron capture cross-section is integrated numerically over the resonance to give the resonance self-shielding correction. Figure 4 shows resonance self-shielding factors derived in this manner for the 132 eV resonance in Co-59, and figure 5 shows similar results for the two Mn-55 resonances at 337 eV and 1080 eV. Self-shielding factors for 1/v capture are not significantly different from unity. (author)
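A much-simplified sketch of a resonance self-shielding factor of the kind plotted in the paper: a single Breit-Wigner capture resonance with beam-geometry attenuation and no scattering-in correction (the paper's double-series treatment is far more complete); all numbers below are assumed, not Co-59 or Mn-55 data.

```python
import numpy as np

def self_shielding_factor(n_t, sigma0):
    """Ratio of the resonance capture rate in a foil to the infinitely
    dilute rate, for a single Breit-Wigner resonance; n_t is the foil
    thickness in atoms/barn and sigma0 the peak cross-section in barns."""
    x = np.linspace(-40.0, 40.0, 20001)          # x = 2 (E - E0) / Gamma
    sigma = sigma0 / (1.0 + x ** 2)              # capture cross-section at x
    tau = n_t * sigma                            # optical thickness at x
    # flux depression (1 - exp(-tau)) / tau, guarded for tiny tau
    dep = np.where(tau > 1e-10, -np.expm1(-tau) / np.maximum(tau, 1e-10), 1.0)
    return float(np.sum(sigma * dep) / np.sum(sigma))
```

The factor tends to unity for a vanishingly thin foil and falls monotonically with thickness, since the resonance peak saturates first.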
License Application Design Selection Feature Report: Waste Package Self Shielding Design Feature 13
International Nuclear Information System (INIS)
Tang, J.S.
2000-01-01
In the Viability Assessment (VA) reference design, handling of waste packages (WPs) in the emplacement drifts is performed remotely, and human access to the drifts is precluded when WPs are present. This report will investigate the feasibility of using a self-shielded WP design to reduce the radiation levels in the emplacement drifts to a point that, when coupled with ventilation, will create an acceptable environment for human access. This provides the benefit of allowing human entry to emplacement drifts to perform maintenance on ground support and instrumentation, and carry out performance confirmation activities. More direct human control of WP handling and emplacement operations would also be possible. However, these potential benefits must be weighed against the cost of implementation, and potential impacts on pre- and post-closure performance of the repository and WPs. The first section of this report will provide background information on previous investigations of the self-shielded WP design feature, summarize the objective and scope of this document, and provide quality assurance and software information. A shielding performance and cost study that includes several candidate shield materials will then be performed in the subsequent section to allow selection of two self-shielded WP design options for further evaluation. Finally, the remaining sections will evaluate the impacts of the two WP self-shielding options on the repository design, operations, safety, cost, and long-term performance of the WPs with respect to the VA reference design
Enhancement of thermal neutron self-shielding in materials surrounded by reflectors
International Nuclear Information System (INIS)
Cornelia Chilian; Gregory Kennedy
2012-01-01
Materials containing from 41 to 1124 mg of chlorine and surrounded by polyethylene containers of various thicknesses, from 0.01 to 5.6 mm, were irradiated in a research reactor neutron spectrum, and the 38Cl activity produced was measured as a function of polyethylene reflector thickness. For the materials containing the higher amounts of chlorine, the 38Cl specific activity decreased with increasing reflector thickness, indicating increased neutron self-shielding. It was found that the amount of neutron self-shielding increased by as much as 52% with increasing reflector thickness. This is explained by neutrons which have exited the material subsequently reflecting back into it, thus increasing the total mean path length in the material. All physical and empirical models currently used to predict neutron self-shielding ignore this effect and need to be modified. A method is given for measuring the adjustable parameter of a self-shielding model for a particular sample size and combination of neutron reflectors. (author)
Success and prospects for low energy, self-shielded electron beam accelerators
International Nuclear Information System (INIS)
Laeuppi, U.V.
1988-01-01
The advantages of self-shielded, low energy, electron beam accelerators for electron beam processing are described. Applications of these accelerators for cross-linking plastic films, drying of coated materials and printing inks and for curing processes are discussed. (U.K.)
Advanced resonance self-shielding method for gray resonance treatment in lattice physics code GALAXY
International Nuclear Information System (INIS)
Koike, Hiroki; Yamaji, Kazuya; Kirimura, Kazuki; Sato, Daisuke; Matsumoto, Hideki; Yamamoto, Akio
2012-01-01
A new resonance self-shielding method based on the equivalence theory is developed for general application to the lattice physics calculations. The present scope includes commercial light water reactor (LWR) design applications which require both calculation accuracy and calculation speed. In order to develop the new method, all the calculation processes from cross-section library preparation to effective cross-section generation are reviewed and reframed by adopting the current enhanced methodologies for lattice calculations. The new method is composed of the following four key methods: (1) cross-section library generation method with a polynomial hyperbolic tangent formulation, (2) resonance self-shielding method based on the multi-term rational approximation for general lattice geometry and gray resonance absorbers, (3) spatially dependent gray resonance self-shielding method for generation of intra-pellet power profile and (4) integrated reaction rate preservation method between the multi-group and the ultra-fine-group calculations. From the various verifications and validations, applicability of the present resonance treatment is totally confirmed. As a result, the new resonance self-shielding method is established, not only by extension of a past concentrated effort in the reactor physics research field, but also by unification of newly developed unique and challenging techniques for practical application to the lattice physics calculations. (author)
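One ingredient named above, the multi-term rational approximation of the fuel escape probability, can be illustrated with Carlvik's classical two-term form (α = 2, 3; β = 2, −1), used here only as a stand-in for GALAXY's more general multi-term fit:

```python
def escape_probability(x):
    """Carlvik's two-term rational approximation of the first-flight
    escape probability of a convex fuel lump;
    x = Sigma_t * mean chord length."""
    return 2.0 * (2.0 / (2.0 + x)) - 1.0 * (3.0 / (3.0 + x))
```

The rational form reproduces both limits, P → 1 as x → 0 and P → 1/x as x → ∞, which is what makes such approximations usable term by term in equivalence theory, each term contributing one background cross-section.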
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A. [CEFET QUIMICA de Nilopolis/RJ, Rio de Janeiro (Brazil); Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C. [COPPE/UFRJ - Programa de Engenharia Nuclear, Rio de Janeiro (Brazil)
2008-07-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors in the epithermal range, G_epi(τ,ξ). Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of the ψ(x,ξ) function, namely the 4-pole Padé method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provide satisfactory accuracy. (authors)
Monte Carlo determination of the spin-dependent potentials
International Nuclear Information System (INIS)
Campostrini, M.; Moriarty, K.J.M.; Rebbi, C.
1987-05-01
Calculation of the bound states of heavy quark systems by a Hamiltonian formulation based on an expansion of the interaction into inverse powers of the quark mass is discussed. The potentials for the spin-orbit and spin-spin coupling between quark and antiquark, which are responsible for the fine and hyperfine splittings in heavy quark spectroscopy, are expressed as expectation values of Wilson loop factors with suitable insertions of chromomagnetic or chromoelectric fields. A Monte Carlo simulation has been used to evaluate the expectation values and, from them, the spin-dependent potentials. The Monte Carlo calculation is reported to show a long-range, non-perturbative component in the interaction
International Nuclear Information System (INIS)
Koike, Hiroki; Kirimura, Kazuki; Yamaji, Kazuya; Kosaka, Shinya; Yamamoto, Akio
2018-01-01
A unified resonance self-shielding method, which can treat generally sub-divided fuel regions, is developed for lattice physics calculations in the reactor physics field. In a past study, a hybrid resonance treatment was developed by theoretically integrating equivalence theory and an ultra-fine-group slowing-down calculation. It can be applied to a wide range of neutron spectrum conditions, including low moderator density ranges in severe accident states, as long as each fuel region is not sub-divided. In order to extend the method to radially and azimuthally sub-divided multi-region geometry, a new resonance treatment is established by incorporating the essence of the sub-group method. The present method is composed of a two-step flux calculation: 'coarse geometry + fine energy' (first step) and 'fine geometry + coarse energy' (second step). The first step corresponds to the hybrid model of equivalence theory and the ultra-fine-group calculation, and the second step corresponds to the sub-group method. From the verification results, effective cross-sections obtained by the new method show good agreement with continuous-energy Monte Carlo results for various multi-region geometries, including non-uniform fuel compositions and temperature distributions. The present method can accurately generate effective cross-sections with short computation times in general lattice physics calculations. (author)
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Ishiguro, Yukio; Tokuno, Yukio.
1978-01-01
The self-shielding factors for the elastic removal cross sections of light and medium-weight nuclides were calculated as a function of the background cross-section σ0, within the conventional concept of group constant sets. The numerical study was performed to obtain a simple and accurate method. The present results were compared with the exact values and the conventional ones, and were shown to be remarkably improved. It became apparent that the anisotropy of elastic scattering did not affect the self-shielding factors, though it did affect the infinite-dilution cross sections. Using the present revised set, the neutron fluxes were calculated in an iron medium and in a prototype FBR, and compared with those from fine-spectrum calculations and from the conventional set. The present set showed considerable improvement in the vicinity of the large resonance regions of sodium, iron and oxygen. (auth.)
Self-shielding characteristics of aqueous self-cooled blankets for next generation fusion devices
International Nuclear Information System (INIS)
Pelloni, S.; Cheng, E.T.; Embrechts, M.J.
1987-11-01
The present study examines self-shielding characteristics of two aqueous self-cooled tritium-producing driver blankets for next generation fusion devices. The Aqueous Self-Cooled Blanket (ASCB) concept is a very simple blanket concept that relies on just structural material and coolant. Lithium compounds are dissolved in water to provide for tritium production. An ASCB driver blanket would provide a low-technology and low-temperature environment for blanket test modules in a next generation fusion reactor. The primary functions of such a blanket would be shielding, energy removal and tritium production. One driver blanket concept considered in this study relates to the one proposed for the Next European Torus (NET), while the second concept is indicative of the inboard shield design for the Engineering Test Reactor proposed by the USA (TIBER II/ETR). The driver blanket for NET is based on stainless steel as the structural material with an aqueous solution, while the inboard shielding blanket for TIBER II/ETR is based on a tungsten/aqueous-solution combination. The purpose of this study is to investigate self-shielding and heterogeneity effects in aqueous self-cooled blankets. It is found that no significant gains in tritium breeding can be achieved in the stainless steel blanket when spatial and energy self-shielding effects are considered, and the heterogeneity effects are also insignificant. The tungsten blanket shows a 5 percent increase in tritium production in the shielding blanket when energy and spatial self-shielding effects are accounted for. However, the tungsten blanket shows a drastic increase in the tritium breeding ratio due to heterogeneity effects. (author) 17 refs., 9 figs., 9 tabs
International Nuclear Information System (INIS)
Downar, T.
2009-01-01
The overall objective of this work has been to eliminate the approximations used in current resonance treatments by developing continuous-energy, multi-dimensional transport calculations for problem-dependent self-shielding. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system. Specifically, the methods utilize the existing continuous-energy SCALE5 module, CENTRM, and the multi-dimensional discrete ordinates solver, NEWT, to develop a new code coupling the two. The work addresses specific theoretical limitations in the existing CENTRM resonance treatment, and investigates advanced numerical and parallel computing algorithms for CENTRM and NEWT in order to reduce the computational burden. The result is a new computer code capable of performing problem-dependent self-shielding analysis for both existing and proposed GEN-IV fuel designs. The objective was to have an immediate impact on the safety analysis of existing reactors through improvements in the calculation of fuel temperature effects, as well as on the analysis of more sophisticated GEN-IV/NGNP systems through improvements in the depletion/transmutation of actinides for the Advanced Fuel Cycle Initiative.
AUTOSECOL: an automatic calculation of the self-shielding of heavy isotope resonances
International Nuclear Information System (INIS)
Grandotto-Biettoli, Marc.
The formalism is based on separating the two types of resonance effects: local energy effects creating a fine structure in the flux, and bulk effects resulting in a slow variation of the flux. Effective reaction rates are defined that, used as tables in a multigroup calculation whose mesh is coarse compared with the resonance widths, allow an exact account of the dependence of the effective integral on fast variations in the flux. These tables are used to introduce the phenomenon of resonance self-shielding into the multigroup Apollo program for solving the neutron transport equation; they are derived from nuclear data using some parameters relating to the physical state of the resonant isotope inside the fuel medium. The AUTOSECOL system provides a library of effective reaction rates for taking account of the resonance self-shielding effect on the neutron flux in nuclear reactor cells. Its versatility, compared with the methods previously used for solving the same problem, allows rapid testing of the consequences of considering the self-shielding effect of new isotope resonances, following the evolution of nuclear data evaluations, and rapidly assessing the interest of new data. Results obtained with AUTOSECOL are compared with those obtained using the SECOL code for computing the effective reaction rates of 235 U, 239 Pu, 107 Ag, 109 Ag, and 241 Pu [fr]
Importance of self-shielding for improving sensitivity coefficients in light water nuclear reactors
International Nuclear Information System (INIS)
Foad, Basma; Takeda, Toshikazu
2014-01-01
Highlights: • A new method has been developed for calculating sensitivity coefficients. • The method is based on the use of infinite-dilution cross-sections instead of effective cross-sections. • The change of the self-shielding factor due to cross-section perturbation is taken into account. • The SRAC and SAINT codes are used for calculating the improved sensitivities, while the MCNP code is used for verification. - Abstract: In order to perform sensitivity analyses in light water reactors, where the self-shielding effect becomes important, a new method has been developed for calculating sensitivity coefficients of core characteristics relative to the infinite-dilution cross-sections instead of the effective cross-sections. This method considers the change of the self-shielding factor due to cross-section perturbation for different nuclides and reactions. The SRAC and SAINT codes are used to calculate the improved sensitivities, while the accuracy of the present method has been verified with the MCNP code, and good agreement has been found
International Nuclear Information System (INIS)
Zou Jun; He Zhaozhong; Zeng Qin; Qiu Yuefeng; Wang Minghuang
2010-01-01
A multigroup library, HENDL2.1/SS (Hybrid Evaluated Nuclear Data Library/Self-Shielding), based on ENDF/B-VII.0 evaluated data has been generated using the Bondarenko and flux-calculator methods for the correction of self-shielding effects in neutronics analyses. To validate the reliability of the multigroup library HENDL2.1/SS, transport calculations for the fusion-fission hybrid system FDS-I were performed in this paper. It was verified that calculations with HENDL2.1/SS gave almost the same results as MCNP calculations and were better than calculations with HENDL2.0/MG, another multigroup library without self-shielding correction. The test results also showed that neglecting resonance self-shielding caused underestimation of k_eff, the neutron fluxes and the waste transmutation ratios in the multigroup calculations of FDS-I.
International Nuclear Information System (INIS)
Abe, Alfredo Y.; Santos, Adimir dos
1995-01-01
The present work summarizes the verification of the Bondarenko-based self-shielding treatment in the HAMMER-TECHNION cell code for the PuO2-UO2 critical system, using the JENDL-3 nuclear data library. The results obtained are in excellent agreement with the original treatment of self-shielding employed by the HAMMER-TECHNION cell code. (author). 9 refs, 1 fig, 9 tabs
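A minimal sketch of the Bondarenko (shielding-factor) method being verified here: effective cross-sections are recovered from a pre-tabulated self-shielding factor f by interpolation in the background cross-section σ0; the table values and number densities below are invented for illustration only.

```python
import numpy as np

sigma_inf = 100.0                                   # infinite-dilution sigma (barns), assumed
sigma0_grid = np.array([1.0, 1e1, 1e2, 1e3, 1e4])   # background sigma_0 grid (barns)
f_grid = np.array([0.30, 0.55, 0.80, 0.95, 1.00])   # tabulated f-factors (invented)

def sigma_eff(sigma0):
    """Self-shielded cross-section sigma_inf * f(sigma_0), with f
    interpolated linearly in log10(sigma_0)."""
    f = np.interp(np.log10(sigma0), np.log10(sigma0_grid), f_grid)
    return float(sigma_inf * f)

# Background cross-section seen by resonant nuclide r in a mixture:
# sigma_0 = sum over other nuclides j of N_j * sigma_t_j / N_r
N_r, N_mod, sigma_t_mod = 0.02, 0.04, 20.0          # assumed number densities
sigma0 = N_mod * sigma_t_mod / N_r                  # background in barns
```

At high dilution (large σ0) the effective cross-section returns to its infinite-dilution value; at low dilution the tabulated factor suppresses it, which is the self-shielding effect.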
Determination of the optical properties of turbid media from a single Monte Carlo simulation
International Nuclear Information System (INIS)
Kienle, A.; Patterson, M.S.
1996-01-01
We describe a fast, accurate method for determination of the optical coefficients of 'semi-infinite' and 'infinite' turbid media. For the particular case of time-resolved reflectance from a biological medium, we show that a single Monte Carlo simulation can be used to fit the data and to derive the absorption and reduced scattering coefficients. Tests with independent Monte Carlo simulations showed that the errors in the deduced absorption and reduced scattering coefficients are smaller than 1% and 2%, respectively. (author)
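The core trick of this approach, reusing a single scattering-only Monte Carlo simulation for any absorption coefficient by Beer-Lambert reweighting of stored path lengths, can be sketched as follows (isotropic scattering, a semi-infinite medium and all parameters are assumptions of this sketch; the paper treats time-resolved reflectance with realistic phase functions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_photon, max_steps = 2000, 2000
z = rng.exponential(1.0, n_photon)     # launch: first step directed into the medium
L = z.copy()                           # accumulated path length per photon
alive = np.ones(n_photon, dtype=bool)
escaped_paths = []

for _ in range(max_steps):
    mu_z = rng.uniform(-1.0, 1.0, n_photon)   # isotropic direction cosine
    step = rng.exponential(1.0, n_photon)     # step length in scattering mfp units
    z = np.where(alive, z + step * mu_z, z)
    L = np.where(alive, L + step, L)
    out = alive & (z < 0.0)                   # crossed the surface: escaped
    escaped_paths.extend(L[out])
    alive &= ~out
escaped_paths = np.array(escaped_paths)

def reflectance(mu_a):
    """Diffuse reflectance for absorption coefficient mu_a, obtained by
    Beer-Lambert reweighting of the stored no-absorption path lengths."""
    return float(np.sum(np.exp(-mu_a * escaped_paths)) / n_photon)
```

One simulation thus yields the reflectance for every μa of interest, which is what allows the optical coefficients to be fitted from a single Monte Carlo run.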
Adaptive algorithms for a self-shielding wavelet-based Galerkin method
International Nuclear Information System (INIS)
Fournier, D.; Le Tellier, R.
2009-01-01
The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)
Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane
2017-07-01
We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for lattice QCD, turns out to be very interesting when applied to PDF determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
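A minimal Hybrid (Hamiltonian) Monte Carlo sketch on a toy two-dimensional Gaussian target, illustrating the leapfrog-plus-Metropolis structure that the paper applies to the much higher-dimensional PDF χ2; the step size and trajectory length are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

def U(q):               # potential = -log target; here a unit Gaussian
    return 0.5 * np.dot(q, q)

def grad_U(q):          # gradient of U for the leapfrog integrator
    return q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.standard_normal(q.size)            # resample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)         # leapfrog: initial half step
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)         # final half step
    dH = (U(q_new) + 0.5 * np.dot(p_new, p_new)) - (U(q) + 0.5 * np.dot(p, p))
    return q_new if np.log(rng.uniform()) < -dH else q   # Metropolis test

q = np.zeros(2)
samples = []
for _ in range(4000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

Because leapfrog conserves the Hamiltonian to O(ε²), the acceptance rate stays high even for long trajectories, which is the property that makes HMC attractive in high-dimensional fits.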
CREST : a computer program for the calculation of composition dependent self-shielded cross-sections
International Nuclear Information System (INIS)
Kapil, S.K.
1977-01-01
A computer program, CREST, for the calculation of composition- and temperature-dependent self-shielded cross sections using the shielding-factor approach is described. The code includes the editing and formation of the data library, calculation of the effective shielding factors and cross sections, and a fundamental-mode calculation that generates the neutron spectrum of the system, which is then used to calculate the effective elastic-removal cross sections. Studies of the sensitivity of reactor parameters to changes in group cross sections can also be carried out using a facility in the code for temporarily changing the desired constants. The final self-shielded, transport-corrected group cross sections can be dumped on cards or magnetic tape in a form suitable for direct use in a transport or diffusion theory code for detailed reactor calculations. The program is written in FORTRAN and can be accommodated in a computer with 32 K words of memory. The input preparation details, a sample problem and the listing of the program are given. (author)
International Nuclear Information System (INIS)
Hebert, A.
1997-01-01
The subgroup method is used to compute self-shielded cross sections defined over coarse energy groups in the resolved energy domain. The validity of the subgroup approach was extended beyond the unresolved energy domain by partially taking into account correlation effects between the slowing-down source and the collision probability terms of the transport equation. This approach enables one to obtain a pure subgroup solution of the self-shielding problem without relying on any form of equivalence in dilution. Specific improvements on existing subgroup methods are presented: an N-term rational approximation for the fuel-to-fuel collision probability, a new Padé deflation technique for computing probability tables, and the introduction of a superhomogenization correction. The absorption rates obtained after self-shielding are compared with exact values obtained using an elastic slowing-down calculation in which each resonance is modeled individually in the resolved energy domain.
Self-shielding and burn-out effects in the irradiation of strongly-neutron-absorbing material
International Nuclear Information System (INIS)
Sekine, T.; Baba, H.
1978-01-01
Self-shielding and burn-out effects are discussed in the evaluation of radioisotopes formed by neutron irradiation of a strongly-neutron-absorbing material. A method for the evaluation of such effects is developed for both thermal and epithermal neutrons. Gadolinium oxide uniformly mixed with graphite powder was irradiated by reactor neutrons together with pieces of a Co-Al alloy wire (the Co content being 0.475%) as the neutron flux monitor. The configuration of the samples and flux monitors in each of two irradiations is illustrated. The yields of activities produced in the irradiated samples were determined by γ-spectrometry with a Ge(Li) detector of 8% relative detection efficiency. Activities at the end of irradiation were estimated with corrections for pile-up, self-absorption, detection efficiency, branching ratio, and decay of the activity. Results of the calculation are discussed in comparison with the observed yields of 153Gd, 160Tb and 161Tb for the case of neutron irradiation of disc-shaped targets of gadolinium oxide. (T.G.)
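The burn-out part of such an evaluation has a closed form for a single activation chain: the target nuclide is depleted as N1(t) = N0 exp(-sigma*phi*t), and the product activity follows from the usual two-equation balance. A sketch under the simplifying assumptions of constant flux and a single capture path (self-shielding corrections to the flux would enter on top of this; the function names are ours):

```python
import math

def activity_with_burnout(n0, sigma_phi, lam, t):
    """End-of-irradiation activity A = lam*N2 for a product built from a target
    that itself burns out: dN1/dt = -sigma_phi*N1, dN2/dt = sigma_phi*N1 - lam*N2.
    sigma_phi is the capture rate per target atom (cross section x flux, 1/s)."""
    return (n0 * sigma_phi * lam / (lam - sigma_phi)
            * (math.exp(-sigma_phi * t) - math.exp(-lam * t)))

def activity_no_burnout(n0, sigma_phi, lam, t):
    """Conventional activation formula, valid when target depletion is negligible."""
    return n0 * sigma_phi * (1.0 - math.exp(-lam * t))
```

Comparing the two functions gives the burn-out correction factor that must be applied to the conventional activation formula when the absorber is strong.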
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Filippi, Claudia; Assaraf, R.; Moroni, S.
2016-01-01
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the
Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides
International Nuclear Information System (INIS)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez
2013-01-01
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
GROUPIE2007, Bondarenko Self-Shielded Cross sections from ENDF/B
International Nuclear Information System (INIS)
2007-01-01
1 - Description of problem or function: GROUPIE reads evaluated data in ENDF/B format and uses these to calculate unshielded group-averaged cross sections, Bondarenko self-shielded cross sections, and multiband parameters. The program allows the user to specify arbitrary energy groups and an arbitrary energy-dependent neutron spectrum (weighting function). IAEA0849/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. 2 - Modifications from previous versions: GROUPIE VERS. 2007-1 (Jan. 2007): checked against all of ENDF/B-VII; increased page size from 120,000 to 600,000 points. 3 - Method of solution: All integrals are performed analytically; in no case is iteration or any approximate form of integration used. GROUPIE reads either the 0 deg. Kelvin cross sections or the Doppler-broadened cross sections to calculate the self-shielded cross sections and multiband parameters for 25 values of the 'background' cross section (representing the combined effects of all other isotopes and of leakage). 4 - Restrictions on the complexity of the problem: GROUPIE requires that the energy-dependent neutron spectrum and all cross sections be given in tabular form, with linear interpolation between tabulated values. There is no limit to the size of the table used to describe the spectrum, so the spectrum may be described in as much detail as required. If only unshielded averages are calculated, the program can handle up to 3000 groups. If self-shielded averages and/or multiband parameters are calculated, the program can handle up to 175 groups. These limits can easily be extended. The program only uses the
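The Bondarenko average that GROUPIE tabulates weights the spectrum by 1/(sigma_t(E) + sigma_0), so that a large background cross section sigma_0 recovers the infinitely dilute value. A numerical sketch of that definition (GROUPIE itself integrates analytically over linearly interpolated tables; the resonance shape and spectrum below are illustrative):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (stand-in for GROUPIE's analytic integration)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bondarenko_average(e, sigma, sigma_t, weight, sigma_0):
    """Self-shielded group average of sigma: weight the spectrum by
    w(E) / (sigma_t(E) + sigma_0); sigma_0 -> infinity is the dilute limit."""
    w = weight / (sigma_t + sigma_0)
    return _trapz(sigma * w, e) / _trapz(w, e)

e = np.linspace(1.0, 2.0, 2001)                            # one coarse group (eV)
sigma_t = 10.0 + 500.0 / (1.0 + ((e - 1.5) / 0.01) ** 2)   # a single resonance
weight = 1.0 / e                                           # 1/E spectrum
dilute = bondarenko_average(e, sigma_t, sigma_t, weight, 1.0e10)
shielded = bondarenko_average(e, sigma_t, sigma_t, weight, 10.0)
f = shielded / dilute        # Bondarenko self-shielding factor, < 1
```

The flux dip at the resonance peak is what depresses the shielded average below the dilute one.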
International Nuclear Information System (INIS)
Chiba, Go; Tsuji, Masashi; Narabayashi, Tadashi
2014-01-01
In order to properly quantify fission reactor neutronics parameter uncertainties, we have to use covariance data and sensitivity profiles consistently. In the present paper, we establish two consistent methodologies for uncertainty quantification: a self-shielded cross section-based consistent methodology and an infinitely-diluted cross section-based consistent methodology. With these methodologies and the covariance data of uranium-238 nuclear data given in JENDL-3.3, we quantify uncertainties of infinite neutron multiplication factors of light water reactor and fast reactor fuel cells. While an inconsistent methodology gives results which depend on the energy group structure of the neutron flux and on the representation of neutron-nuclide reaction cross sections, both consistent methodologies give results free of such dependences.
Nuclear reactions and self-shielding effects of gamma-ray database for nuclear materials
Energy Technology Data Exchange (ETDEWEB)
Fujita, Mitsutane; Noda, Tetsuji [National Research Institute for Metals, Tsukuba, Ibaraki (Japan)
2001-03-01
A database for transmutation and radioactivity of nuclear materials is required for the selection and design of materials used in various nuclear reactors. The database, based on FENDL/A-2.0 and on additional data collected from several references, has been developed at the NRIM site of 'Data-Free-Way' on the Internet. Recently, a function that predicts the self-shielding effect of materials for γ-rays was added to this database. The user interface for this database has been constructed for retrieval of the necessary data and for graphical presentation of the relation between the neutron energy spectrum and the neutron capture cross section. It is demonstrated that the possible changes in chemical composition and radioactivity of a material caused by nuclear reactions can be easily retrieved using a browser such as Netscape or Explorer. (author)
Design of a control system for self-shielded irradiators with remote access capability
International Nuclear Information System (INIS)
Iyengar, R.D.; Verma, P.B.; Prasad, V.V.S.S.; George, Jain R.; Das, Tripti; Deshmukh, D.K.
2001-01-01
With self-shielded irradiators such as gamma chambers and blood irradiators being sold by BRIT to customers both within and outside the country, it has become necessary to improve the quality of service without increasing the overheads. The recent advances in the fields of communications and information technology can be exploited to improve the quality of service to the customers. A state-of-the-art control system with remote accessibility has been designed for these irradiators, enhancing their performance. This will provide easy access to these units, wherever they might be located, through the Internet. With this technology it will now be possible to attend to the needs of the customers as regards fault rectification, error debugging, system software updates, performance testing, data acquisition etc. This will not only reduce the downtime of these irradiators but also reduce the overheads. (author)
Self-shielding flex-circuit drift tube, drift tube assembly and method of making
Jones, David Alexander
2016-04-26
The present disclosure is directed to an ion mobility drift tube fabricated using flex-circuit technology in which every other drift electrode is on a different layer of the flex-circuit and each drift electrode partially overlaps the adjacent electrodes on the other layer. This results in a self-shielding effect where the drift electrodes themselves shield the interior of the drift tube from unwanted electromagnetic noise. In addition, this drift tube can be manufactured with an integral flex-heater for temperature control. This design will significantly improve the noise immunity, size, weight, and power requirements of hand-held ion mobility systems such as those used for explosive detection.
Nuclear reactions and self-shielding effects of gamma-ray database for nuclear materials
International Nuclear Information System (INIS)
Fujita, Mitsutane; Noda, Tetsuji
2001-01-01
A database for transmutation and radioactivity of nuclear materials is required for the selection and design of materials used in various nuclear reactors. The database, based on FENDL/A-2.0 and on additional data collected from several references, has been developed at the NRIM site of 'Data-Free-Way' on the Internet. Recently, a function that predicts the self-shielding effect of materials for γ-rays was added to this database. The user interface for this database has been constructed for retrieval of the necessary data and for graphical presentation of the relation between the neutron energy spectrum and the neutron capture cross section. It is demonstrated that the possible changes in chemical composition and radioactivity of a material caused by nuclear reactions can be easily retrieved using a browser such as Netscape or Explorer. (author)
Directory of Open Access Journals (Sweden)
GO CHIBA
2014-06-01
In order to properly quantify fission reactor neutronics parameter uncertainties, we have to use covariance data and sensitivity profiles consistently. In the present paper, we establish two consistent methodologies for uncertainty quantification: a self-shielded cross section-based consistent methodology and an infinitely-diluted cross section-based consistent methodology. With these methodologies and the covariance data of uranium-238 nuclear data given in JENDL-3.3, we quantify uncertainties of infinite neutron multiplication factors of light water reactor and fast reactor fuel cells. While an inconsistent methodology gives results which depend on the energy group structure of the neutron flux and on the representation of neutron-nuclide reaction cross sections, both consistent methodologies give results free of such dependences.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
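For reference, here is the rank-1 Sherman-Morrison row-replacement update that the delayed scheme generalizes: replacing row k of A costs O(n^2) for the inverse update, and the quantity u[k] below is also the determinant ratio used in the Metropolis acceptance test. A self-contained sketch in our own notation:

```python
import numpy as np

def sherman_morrison_row_update(a_inv, new_row, k):
    """Return the inverse of A' (A with row k replaced by new_row) in O(n^2).
    u[k] equals det(A')/det(A), the acceptance ratio of the Monte Carlo move."""
    u = new_row @ a_inv                            # new_row^T A^{-1}
    r = u[k]                                       # determinant ratio
    a_inv_new = a_inv - np.outer(a_inv[:, k], u) / r
    a_inv_new[:, k] += a_inv[:, k] / r
    return a_inv_new

a = np.diag([2.0, 3.0, 4.0, 5.0]) + 0.1 * np.ones((4, 4))  # well-conditioned test matrix
a_inv = np.linalg.inv(a)
k = 2
new_row = np.array([1.0, 0.5, 2.0, 0.5])
a2 = a.copy()
a2[k] = new_row
updated = sherman_morrison_row_update(a_inv, new_row, k)
```

The delayed scheme of the paper buffers K such rank-1 corrections and applies them together as a matrix-matrix product, trading a small delay for much higher arithmetic intensity.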
International Nuclear Information System (INIS)
Khattab, K.; Khamis, I.
2007-01-01
Measurement of the thermal self-shielding coefficient (Gth) in the inner irradiation site of the Syrian Miniature Neutron Source Reactor (MNSR) using Dy foils is presented in this paper. The thermal self-shielding coefficient is measured as a function of the foil thickness, i.e. the number of stacked foils. A mathematical expression for the average relative radioactivity (Bq/g) as a function of the foil number is derived as well.
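For comparison with such measurements, the thermal self-shielding factor of a slab-like foil in an isotropic flux is often written G_th = (1 - 2 E3(tau)) / (2 tau), with tau the optical thickness Sigma_a*t and E3 the third exponential integral. This is a textbook slab expression, not necessarily the form used in the paper; a numerical sketch:

```python
import numpy as np

def expint3(x):
    """E3(x) via the substitution E3(x) = integral_0^1 u*exp(-x/u) du,
    evaluated with the trapezoidal rule on a fine grid."""
    u = np.linspace(1.0e-9, 1.0, 20001)
    y = u * np.exp(-x / u)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))

def g_thermal_slab(tau):
    """Thermal self-shielding (flux depression) factor for a slab of
    optical thickness tau = Sigma_a * t in an isotropic incident flux."""
    return (1.0 - 2.0 * expint3(tau)) / (2.0 * tau)
```

As tau goes to zero the factor tends to 1 (no self-shielding), and it decreases monotonically with foil thickness, which is the qualitative behaviour the stacked-foil measurement probes.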
International Nuclear Information System (INIS)
Fay, P.J.; Ray, J.R.; Wolf, R.J.
1994-01-01
We present a new, nondestructive, method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials in which the Monte Carlo method is used to determine success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed ensemble simulation; the closed ensemble is paired with a ''natural'' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature
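The balance idea can be demonstrated on the one system where the answer is analytic: an ideal gas, for which the creation/destruction acceptance probabilities lose their energy terms. The sketch below bisects on mu until fictitious creations and destructions balance; in an interacting model such as Lennard-Jones or embedded-atom palladium, the energy change of each trial insertion/removal would enter the two exponents. All names and parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def trial_balance(mu, n, v, beta=1.0, lam3=1.0, trials=100000):
    """Net rate of fictitious creation minus destruction successes at chemical
    potential mu.  For an ideal gas (dU = 0) the grand-canonical acceptance
    probabilities reduce to closed forms; lam3 is the cubed thermal wavelength."""
    p_create = min(1.0, v / ((n + 1) * lam3) * np.exp(beta * mu))
    p_destroy = min(1.0, n * lam3 / v * np.exp(-beta * mu))
    made = np.count_nonzero(rng.uniform(size=trials) < p_create)
    lost = np.count_nonzero(rng.uniform(size=trials) < p_destroy)
    return (made - lost) / trials

def balance_mu(n, v, lo=-6.0, hi=0.0, iters=40):
    """Bisect on mu until creation and destruction successes balance."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if trial_balance(mid, n, v) > 0.0:
            hi = mid            # too many creations: mu is too high
        else:
            lo = mid
    return 0.5 * (lo + hi)

n_part, volume = 100, 1000.0
mu_est = balance_mu(n_part, volume)
mu_exact = np.log(np.sqrt(n_part * (n_part + 1)) / volume)  # ideal-gas balance point
```

No particle is ever actually created or destroyed, which is what makes the estimator nondestructive in a closed-ensemble run.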
Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA
International Nuclear Information System (INIS)
Roubicek, P.
1982-01-01
Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by the secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to take all these possibilities into account, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. Comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)
Determination of axial diffusion coefficients by the Monte-Carlo method
International Nuclear Information System (INIS)
Milgram, M.
1994-01-01
A simple method to calculate the homogenized diffusion coefficient for a lattice cell using Monte-Carlo techniques is demonstrated. The method relies on modelling a finite reactor volume to induce a curvature in the flux distribution, and then follows a large number of histories to obtain sufficient statistics for a meaningful result. The goal is to determine the diffusion coefficient with sufficient accuracy to test approximate methods built into deterministic lattice codes. Numerical results are given. (author). 4 refs., 8 figs
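A quick consistency check for such a calculation: in an infinite medium with isotropic scattering, the diffusion coefficient D = 1/(3 Sigma_t) implies a mean-squared birth-to-absorption distance of 6 D / Sigma_a, which a short history-following loop reproduces. This is a flat-flux infinite-medium toy, not the finite-volume flux-curvature method of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_square_distance(sigma_t, c, histories=50000):
    """Mean-squared crow-flight distance from birth to absorption in an
    infinite medium with isotropic scattering (scattering ratio c)."""
    r2 = 0.0
    for _ in range(histories):
        pos = np.zeros(3)
        while True:
            length = rng.exponential(1.0 / sigma_t)     # flight length
            mu = 2.0 * rng.uniform() - 1.0              # isotropic direction
            phi = 2.0 * np.pi * rng.uniform()
            s = np.sqrt(1.0 - mu * mu)
            pos += length * np.array([s * np.cos(phi), s * np.sin(phi), mu])
            if rng.uniform() >= c:                      # absorbed
                break
        r2 += pos @ pos
    return r2 / histories

sigma_t, c = 1.0, 0.5
sigma_a = (1.0 - c) * sigma_t
d_analytic = 1.0 / (3.0 * sigma_t)           # isotropic-scattering diffusion coefficient
r2_analytic = 6.0 * d_analytic / sigma_a     # = 4.0 for these values
r2_mc = mean_square_distance(sigma_t, c)
```

Agreement of the tallied mean-squared distance with 6 D / Sigma_a verifies the sampling machinery before attempting the harder curvature-based tally.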
International Nuclear Information System (INIS)
David, Mariano Gazineu; Salata, Camila; Almeida, Carlos Eduardo
2014-01-01
The Laboratorio de Ciencias Radiologicas has developed a methodology for determining the absorbed dose to water by the Fricke chemical dosimetry method for 192Ir high-dose-rate brachytherapy sources, and has compared its results with those of the laboratory of the National Research Council Canada. This paper describes the determination of the correction factors by the Monte Carlo method with the PENELOPE code. Values for all factors are presented, with a maximum difference of 0.22% relative to their determination by an alternative method. (author)
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-07-01
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
The determination of beam quality correction factors: Monte Carlo simulations and measurements.
González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R
2009-08-07
Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose in a user's beam contains the well-known beam quality correction factor, which is required whenever the quality of the radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors, which refer to 60Co gamma radiation as the calibration quality. They have been calculated for a series of ionization chambers and radiation qualities based on formulae which are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are described: Monte Carlo simulation using the EGSnrc system, and an experimental method based on comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers at several radiation beams. Four of the chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at the PTB Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results, we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulations with an uncertainty at least comparable to that given in current dosimetry protocols.
Weld metal microstructures of hardfacing deposits produced by self-shielded flux-cored arc welding
International Nuclear Information System (INIS)
Dumovic, M.; Monaghan, B.J.; Li, H.; Norrish, J.; Dunne, D.P.
2015-01-01
The molten weld pool produced during self-shielded flux-cored arc welding (SSFCAW) is protected from gas porosity arising from oxygen and nitrogen by reaction ('killing') of these gases with aluminium. However, residual Al can result in mixed microstructures of δ-ferrite, martensite and bainite in hardfacing weld metals produced by SSFCAW, and therefore microstructural control can be an issue for hardfacing weld repair. The effect of the residual Al content on the weld metal microstructure has been examined using thermodynamic modeling and dilatometric analysis. It is concluded that the typical Al content of about 1 wt% promotes δ-ferrite formation at the expense of austenite and its martensitic/bainitic product phase(s), thereby compromising the wear resistance of the hardfacing deposit. This paper also demonstrates how the development of a Schaeffler-type diagram for predicting the weld metal microstructure can provide guidance on weld filler metal design to produce the optimum microstructure for industrial hardfacing applications.
Resonance self-shielding methodology of new neutron transport code STREAM
International Nuclear Information System (INIS)
Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung; Hong, Ser Gi
2015-01-01
This paper reports on the development and verification of three new resonance self-shielding methods. The verifications were performed using the new neutron transport code, STREAM. The new methodologies encompass the extension of energy range for resonance treatment, the development of optimum rational approximation, and the application of resonance treatment to isotopes in the cladding region. (1) The extended resonance energy range treatment has been developed to treat the resonances below 4 eV of three resonance isotopes and shows significant improvements in the accuracy of effective cross sections (XSs) in that energy range. (2) The optimum rational approximation can eliminate the geometric limitations of the conventional approach of equivalence theory and can also improve the accuracy of fuel escape probability. (3) The cladding resonance treatment method makes it possible to treat resonances in cladding material which have not been treated explicitly in the conventional methods. These three new methods have been implemented in the new lattice physics code STREAM and the improvement in the accuracy of effective XSs is demonstrated through detailed verification calculations. (author)
A Wavelet-Based Finite Element Method for the Self-Shielding Issue in Neutron Transport
International Nuclear Information System (INIS)
Le Tellier, R.; Fournier, D.; Ruggieri, J. M.
2009-01-01
This paper describes a new approach for treating the energy variable of the neutron transport equation in the resolved resonance energy range. The aim is to avoid recourse to a case-specific spatially dependent self-shielding calculation when considering a broad group structure. This method consists of a discontinuous Galerkin discretization of the energy using wavelet-based elements. A Σt-orthogonalization of the element basis is presented in order to make the approach tractable for spatially dependent problems. First numerical tests of this method are carried out in a limited framework under the Livolant-Jeanpierre hypotheses in an infinite homogeneous medium. They are mainly focused on the way to construct the wavelet-based element basis. Indeed, the prior selection of these wavelet functions by a thresholding strategy applied to the discrete wavelet transform of a given quantity is a key issue for the convergence rate of the method. The Canuto thresholding approach applied to an approximate flux is found to yield a nearly optimal convergence in many cases. In these tests, the capability of such a finite element discretization to represent the flux depression in a resonant region is demonstrated; a relative accuracy of 10^-3 on the flux (in L2-norm) is reached with less than 100 wavelet coefficients per group. (authors)
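The thresholding strategy at the heart of the method can be illustrated with the simplest wavelet family: take the discrete Haar transform of a flux-like signal with a sharp resonance depression, discard detail coefficients below a threshold, and reconstruct. The signal and threshold below are illustrative, not the Livolant-Jeanpierre fine-structure flux of the paper:

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition of a length-2^k signal."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    """Invert haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def threshold(coeffs, eps):
    """Zero all detail coefficients below eps (the adaptive-selection step)."""
    return [np.where(np.abs(c) >= eps, c, 0.0) for c in coeffs[:-1]] + [coeffs[-1]]

# a flux-like signal with a sharp local depression
x = np.linspace(0.0, 1.0, 256)
flux = 1.0 / (1.0 + 50.0 * np.exp(-((x - 0.5) / 0.02) ** 2))
coeffs = haar_dwt(flux)
kept = threshold(coeffs, 1e-3)
rec = haar_idwt(kept)
n_kept = sum(int(np.count_nonzero(c)) for c in kept)
rel_err = np.linalg.norm(rec - flux) / np.linalg.norm(flux)
```

Because the flux is smooth away from the depression, most detail coefficients fall below the threshold and only a small basis is kept, which is the compression the paper's convergence results rely on.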
International Nuclear Information System (INIS)
Fitzpatrick, J.; Verrall, S.M.
1985-01-01
A comparative study has been carried out on the two philosophies for providing the radiological protection necessary for the transport and handling of packaged intermediate level wastes from their sites of origin to disposal. The two philosophies are self shielding and returnable shielding. The approach taken was to assess the cost and radiological impact differentials of two respective representative waste management procedures. The comparison indicated the merits of each procedure. As a consequence, a hybrid procedure was identified which combines the advantages of each philosophy. This hybrid procedure was used for further comparison. The results of the study indicate that the use of self shielded packages throughout will incur considerable extra expense and give only a small saving in radiological impact. (author)
International Nuclear Information System (INIS)
Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.
1986-01-01
We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)
International Nuclear Information System (INIS)
Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.
1985-01-01
The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems
The new solid target system at UNAM in a self-shielded 11 MeV cyclotron
International Nuclear Information System (INIS)
Zarate-Morales, A.; Gaspar-Carcamo, R. E.; Lopez-Rodriguez, V.; Flores-Moreno, A.; Trejo-Ballado, F.; Avila-Rodriguez, Miguel A.
2012-01-01
A dual beam line (BL) self-shielded RDS 111 cyclotron for radionuclide production was installed at the School of Medicine of the National Autonomous University of Mexico in 2001. One of the BLs was upgraded to Eclipse HP (Siemens) in 2008, and the second BL was recently upgraded (June 2011) to the same version, with the option of irradiating solid targets for the production of metallic radioisotopes.
CO Self-Shielding as a Mechanism to Make 16O-Enriched Solids in the Solar Nebula
Directory of Open Access Journals (Sweden)
Joseph A. Nuth, III
2014-05-01
Photochemical self-shielding of CO has been proposed as a mechanism to produce solids observed in the modern, 16O-depleted solar system. This is distinct from the relatively 16O-enriched composition of the solar nebula, as demonstrated by the oxygen isotopic composition of the contemporary sun. While supporting the idea that self-shielding can produce local enhancements in 16O-depleted solids, we argue that complementary enhancements of 16O-enriched solids can also be produced via C16O-based, Fischer-Tropsch type (FTT) catalytic processes that could produce much of the carbonaceous feedstock incorporated into accreting planetesimals. Local enhancements could explain the observed 16O enrichment in calcium-aluminum-rich inclusions (CAIs), such as those from the meteorite Isheyevo (CH/CHb), as well as in chondrules from the meteorite Acfer 214 (CH3). CO self-shielding results in an overall increase in the 17O and 18O content of nebular solids only to the extent that there is a net loss of C16O from the solar nebula. In contrast, if C16O reacts in the nebula to produce organics and water, then the net effect of the self-shielding process will be negligible for the average oxygen isotopic content of nebular solids, and other mechanisms must be sought to produce the observed dichotomy between oxygen in the Sun and that in meteorites and the terrestrial planets. This illustrates that the formation and metamorphism of rocks and organics need to be considered in tandem rather than as isolated reaction networks.
MOCARS: a Monte Carlo code for determining the distribution and simulation limits
International Nuclear Information System (INIS)
Matthews, S.D.
1977-07-01
MOCARS is a computer program, designed for the INEL CDC 76-173 operating system, that determines distribution and simulation limits for a function by Monte Carlo techniques. The code randomly samples data from any of 12 user-specified distributions and then evaluates either the cut-set system unavailability or a user-specified function with the sampled data. After the data are ordered, the values at various quantiles and the associated confidence bounds are calculated for output. Frequency and cumulative distribution histograms from the sampled data are also available for output on microfilm. 29 figures, 4 tables
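The quantile-and-confidence-bound step of such a code can be sketched with order statistics: after sorting the sampled function values, the q-quantile estimate is one order statistic, and a distribution-free upper confidence bound is another, chosen from the binomial distribution of exceedances. A minimal version (normal approximation to the binomial, 95% one-sided; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def quantile_with_bound(samples, q, z=1.6449):
    """Point estimate of the q-quantile plus a one-sided upper confidence bound:
    the k-th order statistic, with k from the normal approximation to
    Binomial(n, q), exceeds the true q-quantile with ~95% confidence."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    est = x[int(q * n)]
    k = int(np.ceil(n * q + z * np.sqrt(n * q * (1.0 - q))))
    return est, x[min(k, n - 1)]

u = rng.uniform(size=10000)          # stand-in for sampled function values
est, upper = quantile_with_bound(u, 0.95)
```

The bound is distribution-free because it depends only on how many samples exceed the true quantile, not on the shape of the sampled distribution.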
Monte Carlo simulation of determining porosity by using dual gamma detectors
International Nuclear Information System (INIS)
Zhang Feng; Liu Juntao; Yu Huawei; Yuan Chao; Jia Yan
2013-01-01
Current formation-elements spectroscopy logging technology utilizes a 241Am-Be neutron source and a single BGO detector to determine element contents. It plays an important role in mineral analysis and lithology identification in unconventional oil and gas exploration, but the information measured is relatively limited. A measurement system based on a 241Am-Be source and dual detectors can be developed to measure element contents as well as to determine neutron-gamma porosity from the ratio of gamma counts between the near and far detectors. A calculation model was built by the Monte Carlo method to study the neutron-gamma porosity logging response for different spacings and shields. It is concluded that, compared with measuring thermal neutrons, measuring neutron-gamma rays gives high counts and good statistical properties, but the porosity sensitivity decreases. The porosity sensitivity increases as the spacing of the dual detectors increases; the spacings of the far and near detectors should be around 62 cm and 35 cm, respectively. Gamma counts decrease and the neutron-gamma porosity sensitivity increases when a shield is fixed between the source and the detectors; the length of the main shield should be greater than 10 cm and the associated shield about 5 cm. The results of this Monte Carlo study provide technical support for determining porosity in formation-elements spectroscopy logging using a 241Am-Be source and gamma detectors. (authors)
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
International Nuclear Information System (INIS)
Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
International Nuclear Information System (INIS)
Leal, L.C.; de Saussure, G.; Perez, R.B.
1989-01-01
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated with the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables. 6 refs
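Once a probability table is in hand, the Bondarenko self-shielding factor becomes a weighted sum over its bands. A sketch under the narrow-resonance approximation (flux weight 1/(σ + σ0)); the function name and toy table are illustrative, not URR's actual routines:

```python
def bondarenko_factor(probs, sigmas, sigma0):
    """Bondarenko self-shielding factor f(sigma0) evaluated over a
    cross-section probability table (band probabilities `probs`,
    band cross sections `sigmas`, background cross section
    `sigma0`).  f -> 1 in the infinite-dilution limit and drops
    below 1 when the resonance absorber shields itself."""
    num = sum(p * s / (s + sigma0) for p, s in zip(probs, sigmas))
    den = sum(p / (s + sigma0) for p, s in zip(probs, sigmas))
    mean = sum(p * s for p, s in zip(probs, sigmas))  # infinite-dilution average
    return (num / den) / mean

# Toy two-band table: a valley (1 b) and a resonance peak (100 b)
f_dilute = bondarenko_factor([0.5, 0.5], [1.0, 100.0], 1e9)
f_shielded = bondarenko_factor([0.5, 0.5], [1.0, 100.0], 1.0)
```

As expected, the factor approaches unity at large σ0 and falls well below it when the background cross section is small.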
Determination of the spatial response of neutron based analysers using a Monte Carlo based method
International Nuclear Information System (INIS)
Tickner, James
2000-01-01
One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.
Determination of true coincidence correction factors using Monte-Carlo simulation techniques
Directory of Open Access Journals (Sweden)
Chionis Dionysios A.
2014-01-01
Aim of this work is the numerical calculation of true coincidence correction factors by means of Monte Carlo simulation techniques. For this purpose, the Monte Carlo computer code PENELOPE was used and its main program PENMAIN was modified to include the effect of the true coincidence phenomenon. The modified main program was used to determine the full-energy-peak efficiency of an XtRa Ge detector with 104% relative efficiency, and the results obtained for the 1173 keV and 1332 keV photons of 60Co were found consistent with the respective experimental ones. The true coincidence correction factors were calculated as the ratio of the full-energy-peak efficiencies determined with the original and the modified main program PENMAIN. The developed technique was applied for 57Co, 88Y, and 134Cs and for two source-to-detector geometries. The results obtained were compared with true coincidence correction factors calculated with the "TrueCoinc" program, and the relative bias was found to be less than 2%, 4%, and 8% for 57Co, 88Y, and 134Cs, respectively.
Energy Technology Data Exchange (ETDEWEB)
Nordenfors, C
1999-02-01
To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector, which is normally used in the in-situ measurements carried out regularly at the department. After the response function is determined, it is used to reconstruct the spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center; it is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy compares with that of other methods already in use for measuring dose rate. Bearing in mind that this method provides nuclide-specific doses, it is useful in radiation protection, since knowing the relations between different nuclides and how they change is very important when estimating risks
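The unfolding step — recovering a fluence spectrum φ from a measured pulse-height spectrum m ≈ R·φ, with R the response function — can be sketched with a multiplicative (ML-EM / Gold-style) iteration. This is a generic illustration of spectrum unfolding, not the specific algorithm of the report:

```python
def unfold(response, measured, iters=300):
    """Iterative multiplicative unfolding: given measured[j] ~
    sum_i response[j][i] * phi[i], refine a flat initial guess.
    Each pass rescales phi by the back-projected ratio of measured
    to re-folded counts, which keeps phi non-negative."""
    nb = len(response[0])
    nm = len(measured)
    phi = [1.0] * nb
    for _ in range(iters):
        est = [sum(response[j][i] * phi[i] for i in range(nb))
               for j in range(nm)]
        for i in range(nb):
            num = sum(response[j][i] * measured[j] / est[j]
                      for j in range(nm) if est[j] > 0)
            den = sum(response[j][i] for j in range(nm))
            phi[i] *= num / den if den > 0 else 1.0
    return phi

# Two-bin toy response with 10% cross-talk; true spectrum is [2, 1]
R = [[0.9, 0.1], [0.1, 0.9]]
phi = unfold(R, [1.9, 1.1])
```

With exact (noise-free) data the iteration converges to the true spectrum; real unfoldings stop early or regularize to control noise amplification.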
International Nuclear Information System (INIS)
Liu Haibo; Wu Yican; Zheng Shanliang; Zhang Chunzao
2004-01-01
Based on the Fusion-Driven Subcritical System (FDS-I), 25-group, 175-group and 620-group neutron nuclear data libraries with and without resonance self-shielding correction were generated with the NJOY and TRANSX codes, and k_eff and the reaction rates were calculated with the ANISN code. The results indicate that the resonance self-shielding effect strongly affects the reaction rates. (authors)
Use of Monte Carlo Methods for determination of isodose curves in brachytherapy
International Nuclear Information System (INIS)
Vieira, Jose Wilson
2001-08-01
Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies with the tissue type and degree of differentiation. Since malign cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is to develop, using Monte Carlo techniques, a computer program - ISODOSE - which allows isodose curves around the linear radioactive sources used in brachytherapy to be determined. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed a quite detailed drawing of the isodose curves around linear sources. (author)
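The geometric core of tracing isodose curves around a linear source can be sketched by summing inverse-square point kernels along the source axis. Attenuation, scatter and anisotropy are deliberately ignored here, and the names are illustrative, not ISODOSE's code:

```python
def line_source_dose(x, z, half_length=1.0, n=200):
    """Relative dose at point (x, z) from a line source lying on the
    z-axis between -half_length and +half_length, modelled as n
    equal-strength point sources with bare 1/r^2 kernels.  A purely
    geometric sketch of the linear-source dose pattern."""
    dz = 2.0 * half_length / n
    total = 0.0
    for k in range(n):
        zk = -half_length + (k + 0.5) * dz  # segment midpoint
        r2 = x * x + (z - zk) ** 2
        total += 1.0 / r2
    return total / n

near = line_source_dose(1.0, 0.0)
far = line_source_dose(2.0, 0.0)
```

An isodose curve is then the locus of points where this relative dose equals a chosen reference value; by symmetry the curves are mirror-symmetric about the source midplane.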
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Filippi, Claudia, E-mail: c.filippi@utwente.nl [MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands); Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire de Chimie Théorique CC 137-4, place Jussieu F-75252 Paris Cedex 05 (France); Moroni, Saverio, E-mail: moroni@democritos.it [CNR-IOM DEMOCRITOS, Istituto Officina dei Materiali, and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)
2016-05-21
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
Energy Technology Data Exchange (ETDEWEB)
Choi, Chang Heon [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Jung, Seongmoon [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Choi, Kanghyuk; Son, Kwang-Jae; Lee, Jun Sig [Hanaro Applications Research, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ye, Sung-Joon, E-mail: sye@snu.ac.kr [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Center for Convergence Research on Robotics, Advance Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of)
2016-04-21
This study aims to determine the activity of a sealed pure beta-source by measuring the surface dose rate using an extrapolation chamber. A conversion factor (cGy s{sup −1} Bq{sup −1}), which was defined as the ratio of surface dose rate to activity, can be calculated by Monte Carlo simulations of the extrapolation chamber measurement. To validate this hypothesis the certified activities of two standard pure beta-sources of Sr/Y-90 and Si/P-32 were compared with those determined by this method. In addition, a sealed test source of Sr/Y-90 was manufactured by the HANARO reactor group of KAERI (Korea Atomic Energy Research Institute) and used to further validate this method. The measured surface dose rates of the Sr/Y-90 and Si/P-32 standard sources were 4.615×10{sup −5} cGy s{sup −1} and 2.259×10{sup −5} cGy s{sup −1}, respectively. The calculated conversion factors of the two sources were 1.213×10{sup −8} cGy s{sup −1} Bq{sup −1} and 1.071×10{sup −8} cGy s{sup −1} Bq{sup −1}, respectively. Therefore, the activity of the standard Sr/Y-90 source was determined to be 3.995 kBq, which was 2.0% less than the certified value (4.077 kBq). For Si/P-32 the determined activity was 2.102 kBq, which was 6.6% larger than the certified activity (1.971 kBq). The activity of the Sr/Y-90 test source was determined to be 4.166 kBq, while the apparent activity reported by KAERI was 5.803 kBq. This large difference might be due to evaporation and diffusion of the source liquid during preparation and uncertainty in the amount of weighed aliquot of source liquid. The overall uncertainty involved in this method was determined to be 7.3%. We demonstrated that the activity of a sealed pure beta-source could be conveniently determined by complementary combination of measuring the surface dose rate and Monte Carlo simulations.
International Nuclear Information System (INIS)
Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick
2012-01-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm³ regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of “generic” tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
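The dose reduction factor as defined in the abstract is a simple ratio; a minimal sketch (the sample numbers are illustrative, not the paper's data):

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF per the abstract's definition: the mean of the doses at
    the six cardinal peripheral points of the target divided by the
    dose at the target center."""
    assert len(peripheral_doses) == 6, "six cardinal directions expected"
    return sum(peripheral_doses) / len(peripheral_doses) / central_dose

# Illustrative peripheral doses around a target with unit central dose
drf = dose_reduction_factor([0.90, 1.00, 0.95, 0.97, 0.93, 1.01], 1.0)
```

A DRF below 1 flags peripheral underdosage relative to the central dose that the TPS predicts reliably.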
CO Self-Shielding as a Mechanism to Make O-16 Enriched Solids in the Solar Nebula
Nuth, Joseph A. III; Johnson, Natasha M.; Hill, Hugh G. M.
2014-01-01
Photochemical self-shielding of CO has been proposed as a mechanism to produce the solids observed in the modern, O-16-depleted solar system. This composition is distinct from the relatively O-16-enriched composition of the solar nebula, as demonstrated by the oxygen isotopic composition of the contemporary Sun. While supporting the idea that self-shielding can produce local enhancements in O-16-depleted solids, we argue that complementary enhancements of O-16-enriched solids can also be produced via CO-16-based, Fischer-Tropsch-type (FTT) catalytic processes that could produce much of the carbonaceous feedstock incorporated into accreting planetesimals. Local enhancements could explain the observed O-16 enrichment in calcium-aluminum-rich inclusions (CAIs), such as those from the meteorite Isheyevo (CH/CHb), as well as in chondrules from the meteorite Acfer 214 (CH3). CO self-shielding results in an overall increase in the O-17 and O-18 content of nebular solids only to the extent that there is a net loss of CO-16 from the solar nebula. In contrast, if CO-16 reacts in the nebula to produce organics and water, then the net effect of the self-shielding process will be negligible for the average oxygen isotopic content of nebular solids, and other mechanisms must be sought to produce the observed dichotomy between oxygen in the Sun and that in meteorites and the terrestrial planets. This illustrates that the formation and metamorphism of rocks and organics need to be considered in tandem rather than as isolated reaction networks.
Determination of fast neutrons energy spectra by Monte-Carlo Method
International Nuclear Information System (INIS)
Chetaine, A.
1986-01-01
Two computation codes based on the Monte Carlo method were established for studying the spectrometry of neutrons with 14 MeV initial energy. The spectra are determined, on one hand, around the Ti-T target of a neutron generator and, on the other hand, in a large paraffin cylinder. One code determines the spectrum of the neutrons irradiating the sample at various distances from the Ti-T target as a function of the accelerator parameters: high voltage, atomic or molecular nature of the deuteron beam, target thickness, and the materials surrounding the target. The other code determines the neutron spectra at various positions inside and outside the 30 x 30 cm paraffin cylinder. The validity of the procedure used in these codes is verified by determining the spectrum of neutrons crossing a large surface both with the procedure in question and with a direct simulation method. The biasing procedure used in the two codes gives results with good statistics from a reduced number of samplings. 70 figs.; 62 refs.; 1 tab. (author)
Influence of preheating on API 5L-X80 pipeline joint welding with self shielded flux-cored wire
International Nuclear Information System (INIS)
Cooper, R.; Silva, J. H. F.; Trevisan, R. E.
2004-01-01
The present work refers to the characterization of API 5L-X80 pipeline joints welded with self-shielded flux-cored wire. The process was evaluated under preheating conditions, with a uniform and steady heat input. All joints were welded in the flat position (1G), with the pipe turning and the torch stationary. The tube dimensions were 762 mm in external diameter and 16 mm in thickness. Welds were applied on a single-V groove, with six weld beads, at three levels of preheating temperature (room temperature, 100 °C and 160 °C); these temperatures were also maintained as interpass temperatures. The filler metal E71T8-K6, with mechanical properties different from those of the parent metal, was used in undermatched conditions. The weld characterization is presented according to the results of tensile strength, hardness and impact tests, conducted according to API 1104, AWS and ASTM standards; API 1104 and API 5L were used as screening criteria. According to the results obtained, it is appropriate to weld API 5L-X80 steel ducts with self-shielded flux-cored wires in conformance with the API standards, and no preheat temperature is necessary. (Author) 22 refs
Doses determination in UCCA treatments with LDR brachytherapy using Monte Carlo methods
International Nuclear Information System (INIS)
Benites R, J. L.; Vega C, H. R.
2017-10-01
Using Monte Carlo methods, with the code MCNP5, a gynecological mannequin and a vaginal cylinder were modeled, and the spatial distribution of the absorbed dose rate in uterine cervical cancer (UCCA) treatments under the modality of manual low-dose-rate brachytherapy (B-LDR) was determined. The model included a gynecological liquid-water mannequin and a vaginal cylinder applicator of Lucite (PMMA) with a hemispherical termination. The applicator was formed by a vaginal cylinder 10.3 cm long and 2 cm in diameter, mounted on a stainless steel tube 15.2 cm long by 0.6 cm in diameter; a linear array of four radioactive sources of Cesium-137 was inserted into the tube. Thirteen water cells of 0.5 cm in diameter were modeled around the vaginal cylinder, and the absorbed dose was calculated in each of them. The distribution of the gamma-photon fluence in the mesh was also calculated. It was found that the distribution of the absorbed dose is symmetric for cells located in the upper and lower parts of the vaginal cylinder. The absorbed dose rates were estimated for the manufacture date of the sources; this allows the law of radioactive decay to be used to determine the dose rate at any date of a gynecological B-LDR treatment. (Author)
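The final step — carrying a dose rate computed at the source-manufacture date forward to any treatment date — is just the radioactive-decay law. A sketch assuming the Cs-137 half-life of about 30.07 years (the function name is illustrative):

```python
import math

CS137_HALF_LIFE_Y = 30.07  # Cs-137 half-life in years

def decayed_dose_rate(rate_at_reference, years_elapsed,
                      half_life_y=CS137_HALF_LIFE_Y):
    """Scale a dose rate evaluated at a reference date (here, the
    source-manufacture date) to a later treatment date using
    D(t) = D0 * exp(-ln(2) * t / T_half)."""
    lam = math.log(2.0) / half_life_y
    return rate_at_reference * math.exp(-lam * years_elapsed)
```

After one half-life the rate halves; Cs-137's long half-life is precisely why a single manufacture-date calculation can serve treatments years later.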
Report on some methods of determining the state of convergence of Monte Carlo risk estimates
International Nuclear Information System (INIS)
Orford, J.L.; Hufton, D.; Johnson, K.
1991-05-01
The Department of the Environment is developing a methodology for assessing potential sites for the disposal of low and intermediate level radioactive wastes. Computer models are used to simulate the groundwater transport of radioactive materials from a disposal facility back to man. Monte Carlo methods are being employed to conduct a probabilistic risk assessment (PRA) of potential sites. The models calculate time histories of annual radiation dose to the critical group population; the annual radiation dose to the critical group in turn specifies the annual individual risk. The distribution of dose is generally highly skewed, and many simulation runs are required to predict the level of confidence in the risk estimate, i.e., to determine whether the risk estimate is converged. This report describes some statistical methods for determining the state of convergence of the risk estimate, including the Shapiro-Wilk test, calculation of skewness and kurtosis, and normal probability plots. A method for forecasting the number of samples needed before the risk estimate is converged is presented. Three case studies were conducted to examine the performance of some of these techniques. (author)
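Two of the statistics named above, sample skewness and excess kurtosis, are straightforward moment ratios; a minimal sketch (the heavy right skew typical of dose distributions shows up as a large positive skewness, while a converged, near-normal estimate would put both statistics near zero):

```python
def skewness_kurtosis(xs):
    """Sample skewness (m3 / m2^1.5) and excess kurtosis
    (m4 / m2^2 - 3) from the central moments of the data -- two of
    the summary statistics used to judge convergence of a Monte
    Carlo risk estimate."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

skew_sym, kurt_sym = skewness_kurtosis([1, 2, 3, 4, 5])
skew_right, _ = skewness_kurtosis([1, 1, 1, 1, 10])
```

Monitoring how these statistics stabilize as more runs accumulate is one simple convergence diagnostic alongside the Shapiro-Wilk test and normal probability plots.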
International Nuclear Information System (INIS)
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja
2015-01-01
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5σ and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various
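The extrapolation of coexistence data to T_c and ρ_c conventionally combines the scaling law ρ_l − ρ_v = B(T_c − T)^β with the law of rectilinear diameters. A two-point sketch of that inversion (the paper fits many temperatures; β ≈ 0.325 is the 3D Ising value, and all numbers below are synthetic):

```python
def critical_point_from_coexistence(t1, dl1, dv1, t2, dl2, dv2, beta=0.325):
    """Estimate (Tc, rho_c) from two coexistence points using
    rho_l - rho_v = B (Tc - T)^beta and the rectilinear-diameter law
    (rho_l + rho_v)/2 = rho_c + A (Tc - T).  A textbook two-point
    inversion, not the full regression used in the paper."""
    r = ((dl1 - dv1) / (dl2 - dv2)) ** (1.0 / beta)
    tc = (t1 - r * t2) / (1.0 - r)
    d1, d2 = (dl1 + dv1) / 2.0, (dl2 + dv2) / 2.0
    a = (d1 - d2) / (t2 - t1)
    return tc, d1 - a * (tc - t1)

def coex(t, tc=1.31, rho_c=0.316, bb=0.5, aa=0.1, beta=0.325):
    """Synthetic coexistence densities generated from the same laws."""
    d = rho_c + aa * (tc - t)       # rectilinear diameter
    w = bb * (tc - t) ** beta       # order parameter
    return d + w / 2.0, d - w / 2.0

dl1, dv1 = coex(1.27)
dl2, dv2 = coex(1.29)
tc_est, rc_est = critical_point_from_coexistence(1.27, dl1, dv1, 1.29, dl2, dv2)
```

On data generated from the same two laws the inversion is exact; with real GEMC data one fits all temperatures simultaneously and propagates the statistical uncertainties.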
International Nuclear Information System (INIS)
Arsenault, Benoit; Le Tellier, Romain; Hebert, Alain
2008-01-01
The paper presents the results of a first implementation of a Monte Carlo module in DRAGON Version 4 based on the delta-tracking technique. The Monte Carlo module uses the geometry and the self-shielded multigroup cross-sections calculated with a deterministic model. The module has been tested with three different configurations of an ACR™-type lattice. The paper also discusses the impact of this approach on the efficiency of the Monte Carlo module. (authors)
Malasics, Attila; Boda, Dezso
2010-06-28
Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast-converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials over the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method to various electrolyte solutions and show that our algorithm is more efficient when we use the averaging procedure. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts). We suggest a correction term to the iteration procedure that makes the algorithm efficient for determining the chemical potentials of individual ions as well.
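The iterate-and-average scheme described above can be sketched with a standard multiplicative update, μ ← μ + kT ln(c_target / c_simulated), averaging the visited chemical potentials. The mock `simulate` callable below stands in for a real GCMC run and is an illustrative assumption, not the authors' code:

```python
import math

def iterate_chemical_potential(simulate, target_conc, mu0=0.0,
                               kT=1.0, iters=20):
    """Iterate mu <- mu + kT * ln(c_target / c(mu)), where c(mu) is
    the concentration returned by a GCMC run at chemical potential
    mu, and return the average of the visited chemical potentials
    (the Lamperski-style averaging discussed in the abstract)."""
    mus, mu = [], mu0
    for _ in range(iters):
        c = simulate(mu)
        mu = mu + kT * math.log(target_conc / c)
        mus.append(mu)
    return sum(mus) / len(mus)

# Mock ideal-gas-like "simulation": c(mu) = exp(mu / kT) with kT = 1,
# so the exact answer for target c is mu = ln(c).
mu_avg = iterate_chemical_potential(lambda m: math.exp(m), 0.5)
```

For the ideal-gas mock the update lands on the exact answer in one step; in a real GCMC loop each `simulate` call is noisy, which is precisely why averaging over iterations improves efficiency.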
Kodama, Nao; Setoi, Ayana; Kose, Katsumi
2018-01-01
Spiral MRI sequences were developed for a 9.4T vertical standard bore (54 mm) superconducting magnet using unshielded and self-shielded gradient coils. Clear spiral images with 64-shot scan were obtained with the self-shielded gradient coil, but severe shading artifacts were observed for the spiral-scan images acquired with the unshielded gradient coil. This shading artifact was successfully corrected with a phase-correction technique using reference scans that we developed based on eddy current field measurements. We therefore concluded that spiral imaging sequences can be installed even for unshielded gradient coils if phase corrections are performed using the reference scans. PMID:28367906
Nishimura, N.; Rauscher, T.; Hirschi, R.; Murphy, A. St J.; Cescutti, G.; Travaglio, C.
2018-03-01
Thermonuclear supernovae originating from the explosion of a white dwarf accreting mass from a companion star have been suggested as a site for the production of p nuclides. Such nuclei are produced during the explosion, in layers enriched with seed nuclei coming from prior strong s processing. These seeds are transformed into proton-richer isotopes mainly by photodisintegration reactions. Several thousand trajectories from a 2D explosion model were used in a Monte Carlo approach. Temperature-dependent uncertainties were assigned individually to thousands of rates varied simultaneously in post-processing in an extended nuclear reaction network. The uncertainties in the final nuclear abundances originating from uncertainties in the astrophysical reaction rates were determined. In addition to the 35 classical p nuclides, abundance uncertainties were also determined for the radioactive nuclides 92Nb, 97Tc, 98Tc and 146Sm, and for the abundance ratios Y(92Mo)/Y(94Mo), Y(92Nb)/Y(92Mo), Y(97Tc)/Y(98Ru), Y(98Tc)/Y(98Ru), and Y(146Sm)/Y(144Sm), important for Galactic Chemical Evolution studies. Uncertainties found were generally lower than a factor of 2, although most nucleosynthesis flows mainly involve predicted rates with larger uncertainties. The main contribution to the total uncertainties comes from a group of trajectories with high peak density originating from the interior of the exploding white dwarf. The distinction between low-density and high-density trajectories allows more general conclusions to be drawn, also applicable to other simulations of white dwarf explosions.
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene and diesel are produced from crude oil over different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. The carbon chain lengths of the three mineral oils also differ: gasoline spans C7 to C11, kerosene C12 to C15, and diesel C15 to C18. Recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra produced by their different carbon-number distributions. Mineral oil pollution occurs frequently, so monitoring the mineral oil content of the ocean is very important. A new method is proposed for determining the component contents of a mineral oil mixture with overlapping spectra: the characteristic-peak power integrals of the three-dimensional fluorescence spectrum are calculated by the quasi-Monte Carlo method, an optimization algorithm selects the optimal number of characteristic peaks and the integration regions, and the resulting nonlinear equations are solved by the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method. The power integral over the points of a selected region is sensitive to small changes in the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, sensitivity is improved because integrating over many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil treated as a whole rather than resolved into its components. Six characteristic peaks are
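The inversion step described above (recovering component fractions from characteristic-peak power integrals via BFGS) can be sketched as a least-squares problem. The 3x3 matrix of pure-component peak integrals and the mixture fractions below are hypothetical placeholders, not measured values from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical characteristic-peak power integrals of the pure components
# (rows: characteristic peaks; columns: gasoline, kerosene, diesel).
P = np.array([[5.0, 1.2, 0.3],
              [0.8, 4.1, 1.0],
              [0.2, 0.9, 3.7]])

x_true = np.array([0.2, 0.5, 0.3])  # assumed mixture fractions
b = P @ x_true                      # "measured" mixture peak integrals

def residual(x):
    """Sum of squared residuals of the linear mixing model P @ x = b."""
    r = P @ x - b
    return float(r @ r)

# Solve the (here quadratic) system with the BFGS quasi-Newton method
result = minimize(residual, x0=np.ones(3) / 3, method="BFGS")
x_est = result.x
```

In the actual method the equations are nonlinear and the peak integrals come from quasi-Monte Carlo integration over the selected regions; BFGS plays the same role of driving the residual to zero.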
International Nuclear Information System (INIS)
Mason, Grant W.; Spencer, Ross L.
2002-01-01
The 'self-shielding' m=1 diocotron mode in Malmberg-Penning traps has been known for over a decade to be unstable for finite-length nonneutral plasmas with hollow density profiles. Early theoretical efforts were unsuccessful in accounting for the exponential growth and/or the magnitude of the growth rate. Recent theoretical work has sought to resolve the discrepancy either as a consequence of the shape of the plasma ends or as a kinetic effect resulting from a modified distribution function produced by the protocol used to form the hollow profiles in experiments. We have investigated both of these finite-length mechanisms in selected test cases using a three-dimensional particle-in-cell code that allows realistic treatment of shape and kinetic effects. We find that a persistent discrepancy of a factor of 2-3 remains between simulated and experimental values of the growth rate.
Monte Carlo determination of the dose to the eye lens and thyroid during chest tomography examinations
International Nuclear Information System (INIS)
Quispe H, B.; Pena V, J. D.; Waldo B, G.; Leon M, M.; Ceron R, P.; Vallejo H, A.; Sosa A, M.; Vega C, H. R.
2017-10-01
Computed tomography is a diagnostic imaging method that deposits higher doses than other radiodiagnostic methods. Knowledge of the X-ray spectrum is important, since it directly determines the dose absorbed by the patient. In this work we estimated the X-ray spectrum produced during the interaction of monoenergetic 130 keV electrons with a tungsten target, in order to determine its energetic characteristics at 50 cm from the focal point. The study was done with Monte Carlo methods using the code MCNP5, in which the X-ray tube of a Siemens SOMATOM Perspective tomograph of the General Regional Hospital of Leon, Mexico, was modeled. In the calculations, 3 x 10^8 histories were used and a relative uncertainty of less than 0.1% was obtained. A neck manikin with thyroid, a thorax and a head including the eyes were also modeled, together with the table and the 70 cm aperture gantry of the tomograph. The X-ray spectrum calculated with a cut thickness of 10 mm limited by Pb collimators was used as the source term. The routine scanning protocol of the radiological service for chest computed tomography was used; the step-by-step (instant trigger) method was simulated by moving the manikin coordinates for each cut, with continuous 360 degree rotation. 36 positions of the X-ray tube were used, in steps of 10 degrees. The radiation scattered in the thorax deposits a dose of 2.063 mGy in the eye lens and 252 mGy in the thyroid. (Author)
International Nuclear Information System (INIS)
Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.
1988-01-01
Verification results for Doppler broadening and self-shielding are presented. One of the important results presented is that the original SIGMA1 method of numerical Doppler broadening has now been demonstrated to be inaccurate and not capable of producing results to within required accuracies. Fortunately, due to this study, the SIGMA1 method has been significantly improved and the new SIGMA1 is now capable of producing results to within required accuracies. Although this paper presents results based upon using only one code system, it is important to realize that the original SIGMA1 method is presently used in many cross-section processing code systems; the results of this paper indicate that unless these other code systems are updated to include the new SIGMA1 method, the results produced by these code systems could be very inaccurate. The objectives of the IAEA nuclear data processing code verification project are reviewed as well as the requirements for the accuracy of calculation of Doppler coefficients and the present status of these calculations. The initial results of Doppler broadening and self-shielding calculations are presented and the inconsistency of the results which led to the discovery of errors in the original SIGMA1 method of Doppler broadening are pointed out. Analysis of the errors found and improvements in the SIGMA1 method are presented. Improved results are presented in order to demonstrate that the new SIGMA1 method can produce results within required accuracies. Guidelines are presented to limit the uncertainty introduced due to cross-section processing in order to balance available computer resources to accuracy requirements. Finally cross-section processing code users are invited to participate in the IAEA processing code verification project in order to verify the accuracy of their calculated results. (author)
EURADOS action for the determination of americium in the skull by in vivo measurements and Monte Carlo simulation
International Nuclear Information System (INIS)
Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.
2013-01-01
Within Working Group 7 (Internal Dosimetry) of EURADOS (European Radiation Dosimetry Group e.V.), coordinated by CIEMAT, an international action has been conducted for the in vivo measurement of americium in three skull-type phantoms with germanium detectors by gamma spectrometry, and for its simulation by Monte Carlo methods. The action was organized as two separate exercises, with the participation of institutions in Europe, America and Asia. Similar actions of in vivo measurement intercomparison and Monte Carlo modelling precede this one. Preliminary results and associated findings are presented in this work. The body radioactivity counting laboratory (CRC) of the Internal Personal Dosimetry Service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, while the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)
Energy Technology Data Exchange (ETDEWEB)
Benites R, J. L. [Centro Estatal de Cancerologia de Nayarit, Comite de Investigacion, Calz. de la Cruz 118 sur, 63000 Tepic, Nayarit (Mexico); Vega C, H. R., E-mail: neutronesrapidos@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2017-10-15
Using Monte Carlo methods, with the code MCNP5, a gynecological mannequin and a vaginal cylinder were modeled. The spatial distribution of the absorbed dose rate in uterine cervical cancer (UCCA) treatments was determined for manual low-dose-rate brachytherapy (B-LDR). The model included the gynecological mannequin of liquid water and a Lucite (PMMA) vaginal cylinder applicator with a hemispherical tip. The applicator was formed by a vaginal cylinder 10.3 cm long and 2 cm in diameter, mounted on a stainless steel tube 15.2 cm long by 0.6 cm in diameter. A linear array of four radioactive 137Cs sources was inserted into the tube. Thirteen water cells, 0.5 cm in diameter, were modeled around the vaginal cylinder, and the absorbed dose was calculated in them. The distribution of the gamma-photon fluence in the mesh was also calculated. It was found that the absorbed dose distribution is symmetric for cells located in the upper and lower parts of the vaginal cylinder. The absorbed dose rate values were estimated for the manufacture date of the sources. This result allows the use of the law of radioactive decay to determine the dose rate at any date of a B-LDR gynecological treatment. (Author)
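The decay correction mentioned at the end of the abstract is a direct application of the radioactive decay law. A minimal sketch follows; the 137Cs half-life value and the reference dose rate are assumptions for illustration.

```python
import math

T_HALF_CS137_Y = 30.08  # 137Cs half-life in years (assumed nominal value)

def dose_rate_at(d0_mGy_per_h, elapsed_years):
    """Decay-correct a reference absorbed-dose rate to a later treatment date:
    D(t) = D0 * exp(-ln(2) * t / T_half)."""
    lam = math.log(2.0) / T_HALF_CS137_Y  # decay constant, 1/years
    return d0_mGy_per_h * math.exp(-lam * elapsed_years)

# After one half-life the dose rate is halved; after zero time it is unchanged.
rate_now = dose_rate_at(1.0, T_HALF_CS137_Y)
```

Because 137Cs has a long half-life, the correction changes slowly: a treatment a year after the reference date still sees about 97.7% of the reference dose rate.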
Efficiency determination of whole-body counters by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method to calculate the overall efficiency of whole-body counters for radiation distributed in the human body was developed. A phantom of human proportions was used, filled with a known, uniform solution containing a quantity of radioisotopes. 99mTc, 131I and 42K were used in this experiment, and their activities were compared with a liquid scintillator. (C.G.C.) [pt
MINX, Multigroup Cross-Sections and Self-Shielding Factors from ENDF/B for Program SPHINX
International Nuclear Information System (INIS)
Soran, P.D.; MacFarlane, R.E.; Harris, D.R.; LaBauve, R.J.; Hendricks, J.S.; Kidman, R.B.; Weisbin, C.R.; White, J.E.
1977-01-01
1 - Description of problem or function: MINX calculates fine-group averaged infinitely dilute cross sections and self-shielding factors from ENDF/B-IV data. Its primary purpose is to generate a pseudo-composition-independent multigroup library which is input to the SPHINX space-energy collapse program (2) (PSR-0129) through standard CCCC-III (8) interfaces. MINX incorporates and improves upon the resonance capabilities of existing codes such as ETOX (5) (NESC0388) and ENDRUN (9) and the high-order group-to-group transfer matrices of SUPERTOG (10) (PSR-0013) and ETOG (11). Fine-group energy boundaries, Legendre expansion order, gross spectral shape component (in the Bondarenko flux model), temperatures and dilutions can all be user-specified. 2 - Method of solution: Infinitely dilute, unbroadened point cross sections are obtained from resolved resonance parameters using a modified version of the RESEND program (3) (NESC0465). The SIGMA1 (4) (IAEA0854) kernel-broadening method is used to Doppler broaden and thin the tabulated linearized pointwise cross sections at 0 K (outside of the unresolved energy region). Effective temperature-dependent self-shielded pointwise cross sections are derived from the formulation in the ETOX code. The primary modification to the ETOX algorithm is associated with the numerical quadrature scheme used to establish the mean values of the fluctuation intervals. The selection of energy mesh points, at which the effective cross sections are calculated, has been modified to include the energy points given in the ENDF/B file or, if the energy-independent formalism was employed, points at half-lethargy intervals. Infinitely dilute group cross sections and self-shielding factors are generated using the Bondarenko flux weighting model with the gross spectral shape under user control. The integral over energy for each group is divided into a set of panels defined by the union of the grid points describing the total cross section, the
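The Bondarenko flux-weighting model used by MINX can be illustrated on a single resonance: the group average is taken with a weighting flux proportional to 1/(σ(E) + σ0), and the self-shielding factor is the ratio of the shielded average to the infinitely dilute one. This is a toy sketch, not MINX's algorithm; the Breit-Wigner parameters are illustrative.

```python
import numpy as np

def breit_wigner(E, E0=6.67, gamma=0.05, sigma_peak=2.0e4):
    """Single-level Breit-Wigner resonance shape (barns); parameters loosely
    inspired by the 6.67 eV resonance of 238U, for illustration only."""
    return sigma_peak / (1.0 + ((E - E0) / (gamma / 2.0)) ** 2)

def shielded_average(sigma, sigma0):
    """Group average with the Bondarenko (narrow-resonance) weighting flux
    phi(E) ~ 1/(sigma(E) + sigma0); valid on a uniform energy grid."""
    w = 1.0 / (sigma + sigma0)
    return np.sum(sigma * w) / np.sum(w)

E = np.linspace(6.0, 7.3, 20001)       # one coarse group around the resonance
sig = breit_wigner(E)
sigma_dilute = sig.mean()              # infinitely dilute group average
f = shielded_average(sig, 50.0) / sigma_dilute  # self-shielding factor, sigma0 = 50 b
```

As the background cross section σ0 grows, the weighting flux flattens and the self-shielding factor approaches 1; at small σ0 the flux is depressed inside the resonance and f drops well below 1.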
Energy Technology Data Exchange (ETDEWEB)
Singh, Daya; Soltis, Patrick; Narayanan, Badri; Quintana, Marie; Fox, Jeff [The Lincoln Electric Company (United States)
2005-07-01
Self-shielded flux cored arc welding electrodes (FCAW-S) are ideal for outdoor applications, particularly open fabrication yards where high winds are a possibility. Development work was carried out on a FCAW-S electrode for welding 70 and 80 ksi yield strength base materials with a required minimum average Charpy V-Notch (CVN) absorbed energy value of 35 ft-lb at -40 deg F in the weld metal. The effect of Al, Mg, Ti, and Zr on CVN toughness was evaluated by running a Design of Experiments approach to systematically vary the levels of these components in the electrode fill and, in turn, the weld metal. These electrodes were used to weld simulated pipe joints. Over the range of compositions tested, 0.05% Ti in the weld metal was found to be optimum for CVN toughness. Ti also had a beneficial effect on the usable voltage range. Simulated offshore joints were welded to evaluate the effect of base metal dilution, heat input, and welding procedure on the toughness of weld metal. CVN toughness was again measured at -40 deg F on samples taken from the root and the cap pass regions. The root pass impact toughness showed strong dependence on the base metal dilution and the heat input used to weld the root and fill passes. (author)
International Nuclear Information System (INIS)
Zhang, Tianli; Li, Zhuoxin; Kou, Sindo; Jing, Hongyang; Li, Guodong; Li, Hong; Jin Kim, Hee
2015-01-01
The effect of inclusions on the microstructure and toughness of the deposited metals of self-shielded flux cored wires was investigated by optical microscopy, electron microscopy and mechanical testing. The deposited metals of three different wires showed different levels of low-temperature impact toughness at −40 °C, mainly because of differences in the properties of the inclusions. The inclusions formed in the deposited metals as a result of deoxidation caused by the addition of extra Al–Mg alloy and ferromanganese to the flux. The inclusions, spherical in shape, were mixtures of Al2O3 and MgO. Inclusions predominantly Al2O3 and 0.3–0.8 μm in diameter were effective for nucleation of acicular ferrite. However, inclusions predominantly MgO were promoted by increasing Mg in the flux and were more effective than Al2O3 inclusions of the same size. These findings suggest that the control of inclusions can be an effective way to improve the impact toughness of the deposited metal.
Energy Technology Data Exchange (ETDEWEB)
Coste-Delclaux, M
2006-03-15
This document describes the improvements carried out for modelling the self-shielding phenomenon in the multigroup transport code APOLLO2. They concern the space and energy treatment of the slowing-down equation, the setting up of quadrature formulas to calculate reaction rates, the setting-up of a method that treats directly a resonant mixture and the development of a sub-group method. We validate these improvements either in an elementary or in a global way. Now, we obtain, more accurate multigroup reaction rates and we are able to carry out a reference self-shielding calculation on a very fine multigroup mesh. To end, we draw a conclusion and give some prospects on the remaining work. (author)
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, Jose Maria
1986-01-01
The purpose of this investigation was the development of an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder's model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed close agreement between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found at lower energies. (author)
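The core of such an efficiency calculation is Monte Carlo sampling of emission directions from the source and counting the fraction that reach the detector. The sketch below estimates only the geometric efficiency of a point source facing a coaxial disk detector, ignoring attenuation and detector response; the source-detector geometry is an assumption, not the stretcher geometry of the paper.

```python
import math
import random

def mc_geometric_efficiency(distance_cm, radius_cm, n=200_000, seed=1):
    """Monte Carlo estimate of the geometric efficiency: the fraction of
    isotropically emitted photons that hit a coaxial disk detector."""
    rng = random.Random(seed)
    # The detector subtends a cone; photons with cos(theta) above this hit it
    cos_max = distance_cm / math.hypot(distance_cm, radius_cm)
    hits = 0
    for _ in range(n):
        u = rng.uniform(-1.0, 1.0)  # isotropic emission: cos(theta) ~ U[-1, 1]
        if u > cos_max:
            hits += 1
    return hits / n

eff = mc_geometric_efficiency(30.0, 10.0)
# Analytic solid-angle fraction for the same cone, for comparison
analytic = 0.5 * (1.0 - 30.0 / math.hypot(30.0, 10.0))
```

A full whole-body counter model additionally samples source positions throughout the phantom and attenuates each photon along its path, but the sampling skeleton is the same.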
International Nuclear Information System (INIS)
Petrascu, M.; Isbasescu, Alina; Constantinescu, A.; Serban, S.; Stoica, I.V.
2004-01-01
The neutron multidetector consists of 81 detectors, made of 4 x 4 x 12 cm³ BC-400 crystals mounted on XP2972 phototubes. This detector, placed in the forward direction at 138 cm from the target, was used to detect the correlated neutrons in the fusion of 11Li halo nuclei with Si targets. To verify the criterion for selecting true coincidences against cross-talk (a spurious effect in which the same neutron is registered by two or more detectors) and to establish the optimal distance between adjacent detectors, the program MENATE (written by P. Desesquelles, IPN-Orsay) was used to generate Monte Carlo neutrons and their interactions in the multidetector. The results were analysed with PAW (from the CERN Library). (authors)
Defects detection on the welded reinforcing steel with self-shielded wires by vibration tests
Directory of Open Access Journals (Sweden)
Crâştiu Ion
2017-01-01
The aim of this paper is the development and validation of a vibroacoustic technique for the detection of welding defects, especially in welded reinforcing structures. In welded structures subjected to dynamic cyclic loads, fatigue cracks may appear and propagate due to local structural damage. These cracks may initiate due to the technological parameters used in the welding process, or due to environmental operating conditions. By means of the Finite Element Method (FEM), the natural frequencies and mode shapes of several welded steel specimens are determined. The analysis is carried out in the undamaged condition as well as the damaged one, after artificially induced damage. The experimental measurement of the vibroacoustic response is carried out using a condenser microphone, which is suitable for high-fidelity acoustic measurements in the frequency range of 20-20,000 Hz. The vibration responses of the welded specimens, in free-free conditions, are processed using algorithms based on the Fast Fourier Transform (FFT) and Prony's series. The results are compared to modal parameters estimated using FE analysis.
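The FFT step of such a technique amounts to locating the dominant spectral peaks of the measured response; a shift of those peaks between the undamaged and damaged specimens signals a defect. A minimal sketch with a synthetic decaying response follows; the mode frequency, damping and sampling rate are illustrative assumptions.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant spectral frequency (Hz) of a sampled response."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic free-decay response of a single 440 Hz mode, 20 kHz sampling
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.exp(-3.0 * t) * np.sin(2 * np.pi * 440.0 * t)
f0 = dominant_frequency(x, fs)
```

Prony's series, mentioned in the abstract, goes further by fitting damped exponentials directly, which also recovers the damping ratios; the FFT peak picker above recovers only the frequencies.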
International Nuclear Information System (INIS)
Matthews, S.D.; Poloski, J.P.
1978-08-01
MOCARS is a computer program designed for use on the Idaho National Engineering Laboratory (INEL) CDC CYBER 76-173 computer system that uses Monte Carlo techniques to determine the distribution and simulation limits for a function. In use, the MOCARS program randomly samples data from any of the 12 different user-specified probability distributions and either evaluates a user-specified function or cut set system unavailability using the sample data. After data ordering, the values at various quantities and associated confidence bounds are calculated for output. If the cut set unavailability function is evaluated, MOCARS can determine the importance ranking for components in the unavailability calculation. Frequency and cumulative distribution histograms from the sample data are also available for output on microfilm. 39 figures, 4 tables
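The MOCARS workflow (sample inputs from user-specified distributions, evaluate a function, report quantiles with the ordered sample) can be sketched in a few lines. The two-component unavailability function and the lognormal parameters below are illustrative assumptions, not MOCARS's actual input.

```python
import numpy as np

def mc_quantiles(func, samplers, n=100_000, qs=(0.05, 0.5, 0.95), seed=0):
    """Sample each input from its distribution, evaluate the function, and
    report quantiles of the ordered output sample (MOCARS-style)."""
    rng = np.random.default_rng(seed)
    inputs = [sampler(rng, n) for sampler in samplers]
    out = np.sort(func(*inputs))
    return {q: out[int(q * (n - 1))] for q in qs}

# Toy unavailability of two redundant components in series (product of their
# failure probabilities), each lognormally distributed (illustrative only)
samplers = [lambda rng, n: rng.lognormal(-5.0, 0.5, n),
            lambda rng, n: rng.lognormal(-5.0, 0.5, n)]
quantiles = mc_quantiles(lambda a, b: a * b, samplers)
```

Confidence bounds on each quantile can be attached from the binomial distribution of order statistics, which is essentially what MOCARS reports alongside the quantile values.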
Determination of shielding parameters for different types of concretes by Monte Carlo methods
International Nuclear Information System (INIS)
Aminian, A.; Nematollahi, M. R.
2007-01-01
The choice of a suitable concrete composition for a biological reactor shield remains a research topic. In the present study, an attempt has been made to estimate the influence of the concrete aggregates on the shielding parameters of three types of concrete (ordinary, serpentine and steel-magnetite) with the Monte Carlo N-Particle (MCNP) transport code. MCNP calculations were performed to obtain the leakage of neutrons, photons and electrons from the dry shield. The mass attenuation coefficients and the linear attenuation coefficients for neutrons and photons were also estimated, over the energy range that actually exists outside the pressure vessel of a power reactor, in the cavity, for the investigated concretes. The concrete densities ranged from 2.3 to 5.11 g/cm³. These calculations were done for the conditions of a typical commercial Pressurized Water Reactor (PWR). The results show that steel-magnetite concrete, with its high density (5.11 g/cm³) and constituents of relatively high atomic number, is an effective shield for both photons and neutrons.
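The relation between the mass and linear attenuation coefficients mentioned above follows the mixture rule: the mass attenuation coefficient of the concrete is the weight-fraction-weighted sum of its constituents', and multiplying by the density gives the linear coefficient. The sketch below uses an assumed, highly simplified three-element composition and illustrative 1 MeV coefficients, not evaluated data.

```python
import math

# Illustrative 1 MeV photon mass attenuation coefficients (cm^2/g); both the
# values and the three-element composition are assumptions for this sketch.
MASS_ATTEN = {"O": 0.064, "Si": 0.064, "Ca": 0.061}
WEIGHT_FRACTIONS = {"O": 0.53, "Si": 0.34, "Ca": 0.13}  # simplified ordinary concrete

def linear_attenuation(mass_atten, weight_fractions, density_g_cm3):
    """Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i; mu = rho * (mu/rho)_mix."""
    mu_over_rho = sum(weight_fractions[el] * mass_atten[el]
                      for el in weight_fractions)
    return density_g_cm3 * mu_over_rho  # 1/cm

def transmission(mu_per_cm, thickness_cm):
    """Narrow-beam photon transmission I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

mu = linear_attenuation(MASS_ATTEN, WEIGHT_FRACTIONS, 2.3)  # ordinary concrete
```

Denser aggregates such as magnetite and steel punch raise the density factor directly, which is why the 5.11 g/cm³ steel-magnetite mix attenuates photons so much more effectively at comparable mass attenuation coefficients.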
Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations
Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.
2017-08-01
The s-process in massive stars produces the weak component of the s-process (nuclei up to A ≈ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.
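The Monte Carlo rate-variation idea used here and in the thermonuclear supernova study above can be sketched on a one-reaction toy network: sample the rate between its evaluated lower and upper limits, propagate each sample to the final abundance, and read off percentile bounds. The log-uniform sampling and the single destruction channel are simplifying assumptions, not the papers' full framework.

```python
import numpy as np

def mc_rate_uncertainty(rate_nominal, factor_up, factor_down, t,
                        n=50_000, seed=42):
    """Vary a single destruction rate within [rate/fd, rate*fu] (log-uniform,
    standing in for evaluated upper/lower limits) and propagate it to the
    final abundance Y = Y0 * exp(-rate * t) of a one-reaction toy network."""
    rng = np.random.default_rng(seed)
    log_r = rng.uniform(np.log(rate_nominal / factor_down),
                        np.log(rate_nominal * factor_up), n)
    y = np.exp(-np.exp(log_r) * t)  # final abundance for each sampled rate
    lo, med, hi = np.percentile(y, [5, 50, 95])
    return lo, med, hi

# Rate uncertain by a factor of 2 in both directions, unit exposure time
lo, med, hi = mc_rate_uncertainty(1.0, 2.0, 2.0, t=1.0)
```

In the actual studies thousands of temperature-dependent rates are varied simultaneously in a full network, and correlation analysis between sampled rate factors and final abundances identifies the key reactions.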
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.
2017-09-01
Detector-, field-size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the k_{Qmsr,Q0}^{fmsr,fref} and k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. k_{Qmsr,Q0}^{fmsr,fref} values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (breaking down to 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding up to the k_{Qmsr,Q0}^{fmsr,fref} corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity k_{Qclin,Qmsr}^{fclin,fmsr} correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity k_{Qclin,Qmsr}^{fclin,fmsr} results for the ionization chambers suggested field-size-dependent dose underestimations (being significant for the 4 mm field), with magnitude depending on the combination of
Los, J. H.; Pellenq, R. J. M.
2010-02-01
We have determined the bulk melting temperature Tm of nickel according to a recent interatomic interaction model via Monte Carlo simulation, by two methods: extrapolation from cluster melting temperatures based on the Pavlov model (a variant of the Gibbs-Thomson model), and calculation of the liquid and solid Gibbs free energies via thermodynamic integration. The latter, which is the more reliable method, gives Tm = 2010 ± 35 K, to be compared to the experimental value of 1726 K. The cluster extrapolation method, however, gives a 325 K higher value of Tm = 2335 K. This remarkable result is shown to be due to a barrier for melting, which is associated with a nonwetting behavior.
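The cluster-extrapolation route can be illustrated with a minimal sketch: assuming a Pavlov/Gibbs-Thomson-type relation Tm(R) ≈ Tm,bulk (1 − c/R), which is linear in 1/R, the bulk melting temperature is the intercept of a straight-line fit of cluster melting temperatures against inverse cluster radius. The data below are synthetic, not the nickel values from the paper.

```python
import numpy as np

def extrapolate_bulk_tm(radii, tm_cluster):
    """Bulk melting temperature from cluster data.

    A Pavlov / Gibbs-Thomson-type relation, Tm(R) ~ Tm_bulk * (1 - c/R),
    is linear in 1/R, so the bulk value is the intercept of a
    straight-line fit of Tm against 1/R.
    """
    inv_r = 1.0 / np.asarray(radii, dtype=float)
    slope, intercept = np.polyfit(inv_r, np.asarray(tm_cluster, dtype=float), 1)
    return intercept

# Synthetic cluster melting points (radius in nm, temperature in K)
radii = [1.0, 2.0, 4.0, 8.0]
tm_clusters = [2335.0 * (1.0 - 0.5 / r) for r in radii]
tm_bulk = extrapolate_bulk_tm(radii, tm_clusters)   # -> ~2335 K
```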
International Nuclear Information System (INIS)
Simpkin, D.J.
1989-01-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
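The fitting equation referred to is likely the three-parameter Archer-type broad-beam transmission model; assuming that form, a minimal fitting sketch looks like this (the data and parameter values below are synthetic, not the published EGS4 results):

```python
import numpy as np
from scipy.optimize import curve_fit

def archer_transmission(x, alpha, beta, gamma):
    """Three-parameter Archer-type model for broad-beam transmission:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma),
    with B(0) = 1 by construction."""
    r = beta / alpha
    return ((1.0 + r) * np.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# Synthetic transmission-vs-thickness data (thickness in arbitrary units)
thickness = np.linspace(0.0, 2.0, 20)
true_params = (2.0, 10.0, 0.5)
data = archer_transmission(thickness, *true_params)

# Recover the parameters by least squares, as one would for MC-generated data
fit_params, _ = curve_fit(archer_transmission, thickness, data, p0=(1.0, 5.0, 1.0))
```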
International Nuclear Information System (INIS)
Nikezic, D.
1994-01-01
The detection efficiency, ρ (or a calibration coefficient k), for radon measurements with the solid state nuclear track detector CR-39 has been determined by many authors, and there is considerable discrepancy among the reported values of ρ. This situation was the motivation to develop a software program to calculate ρ. The software is based on the Bethe-Bloch expression for the stopping power of heavy charged particles in a medium, as well as on the Monte Carlo method. Track parameters were calculated using the iterative procedure given in G. Somogyi et al., Nucl. Instr. and Meth. 109 (1973) 211. Results for an open detector and for the detector in a diffusion chamber are presented in this article. (orig.)
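A stripped-down version of such a Monte Carlo efficiency estimate can be sketched with a simplified registration criterion: an etchable track forms only when the dip angle of the alpha trajectory (measured from the detector surface) exceeds the critical angle. This toy model ignores the Bethe-Bloch energy-loss treatment the paper actually uses, and the critical angle below is an arbitrary illustrative value.

```python
import numpy as np

def registration_fraction(theta_c_deg, n=200_000, seed=0):
    """Monte Carlo estimate of the fraction of isotropically emitted
    alpha particles that register a track in an open detector, using a
    simplified criterion: a track is etchable only when the dip angle
    (between trajectory and detector surface) exceeds the critical
    angle. Energy loss along the path (Bethe-Bloch) is ignored here."""
    rng = np.random.default_rng(seed)
    cos_theta = rng.uniform(0.0, 1.0, n)       # isotropic: cos(theta) uniform
    dip_deg = 90.0 - np.degrees(np.arccos(cos_theta))
    return float(np.mean(dip_deg > theta_c_deg))

# For this toy model the analytic answer is 1 - sin(theta_c)
frac = registration_fraction(35.0)
```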
Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.
Reddon, John R.; And Others
1985-01-01
Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
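A sphericity test based on the determinant of the sample correlation matrix is commonly implemented via Bartlett's chi-square approximation; assuming that form of the test statistic, the type-one error evaluation by computer sampling can be sketched as follows (the sample size, dimensionality, and replication count are arbitrary, not necessarily those of the study):

```python
import numpy as np
from scipy.stats import chi2

def sphericity_type1_rate(n=50, p=4, reps=2000, alpha=0.05, seed=1):
    """Empirical type-one error rate for Bartlett's test of sphericity,
    which is based on the determinant |R| of the sample correlation
    matrix: reject when -(n - 1 - (2p + 5)/6) * ln|R| exceeds the
    chi-square critical value with p(p-1)/2 degrees of freedom."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1.0 - alpha, p * (p - 1) / 2)
    rejections = 0
    for _ in range(reps):
        x = rng.standard_normal((n, p))        # spherical normal population
        det_r = np.linalg.det(np.corrcoef(x, rowvar=False))
        stat = -(n - 1 - (2 * p + 5) / 6) * np.log(det_r)
        rejections += stat > crit
    return rejections / reps
```

Under the spherical null the empirical rejection rate should sit near the nominal alpha.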
International Nuclear Information System (INIS)
Sinitsa, V.V.
1984-11-01
The author gives a scheme for calculating the self-shielding factors in the unresolved resonance region using the GRUCON applied program package. This package was created specifically for the conversion of evaluated neutron cross-section data, as available in existing data libraries, into multigroup microscopic constants. A detailed description of the formulae and algorithms used in the programs is given. Some typical examples of calculation are considered and the results are compared with those of other authors. The calculation accuracy is better than 2%.
Nuth, Joseph A., III; Johnson, Natasha M.
2012-01-01
There are at least three separate photochemical self-shielding models with different degrees of commonality. All of these models rely on the selective absorption of (12)C(16)O dissociative photons as the radiation penetrates through the gas, allowing the production of reactive (17)O and (18)O atoms within a specific volume. Each model also assumes that the undissociated C(16)O is stable and does not participate in the chemistry of nebular dust grains. In what follows we argue that this last, very important assumption is simply not true, despite the very high energy of the CO molecular bond.
International Nuclear Information System (INIS)
Ganesan, S.
1978-01-01
A set of energy-dependent fission widths of the 1+ spin state corresponding to the recommended fission cross sections of Sowerby et al. is evaluated by adjustment in the energy region 600 eV to 25 keV. Corresponding to these mean fission widths of the 1+ spin state, the intermediate resonance parameters based on Weigmann's formulation of Strutinsky's double-humped fission barrier model are then obtained. Pseudorandom resonances are generated with and without the intermediate structure in the mean fission width, but leading to the same value of the infinite-dilution fission cross section. The effect of the intermediate structure on the self-shielding factors was then investigated. (author)
Energy Technology Data Exchange (ETDEWEB)
Min, Chul Hee; Lee, Han Rim; Yeom, Yeon Su; Cho, Sung Koo; Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)
2010-06-15
The close relationship between the proton dose distribution and the distribution of prompt gammas generated by proton-induced nuclear interactions along the path of protons in a water phantom was demonstrated by means of both Monte Carlo simulations and limited experiments. In order to test the clinical applicability of the method for determining the distal dose edge in a human body, a human voxel model, constructed based on a body-composition-approximated physical phantom, was used, after which the MCNPX code was used to analyze the energy spectra and the prompt gamma yields from the major elements composing the human voxel model; finally, the prompt gamma distribution, generated from the voxel model and measured by using an array-type prompt gamma detection system, was calculated and compared with the proton dose distribution. According to the results, effective prompt gammas were produced mainly by oxygen, and the specific energy of the prompt gammas, allowing for selective measurement, was found to be 4.44 MeV. The results also show that the distal dose edge in the human phantom, despite the heterogeneous composition and the complicated shape, can be determined by measuring the prompt gamma distribution with an array-type detection system.
International Nuclear Information System (INIS)
Yamamoto, Y.; Wakaiki, M.; Ikeda, A.; Kido, Y.
1999-01-01
The lattice location of Tm implanted into Si(100) and Ge(111) with an energy of 180 keV was determined precisely by ion channeling followed by Monte Carlo simulations of ion trajectories. The implantations were performed at 550°C with a dose of 5 × 10^14 ions/cm^2. In the case of Tm in Si, 25 at.% and 50 at.% of the Tm are located in the tetrahedral interstitial site and in random sites, respectively, and the rest takes the substitutional position. The assumption of a Gaussian distribution centered at the exact tetrahedral site with a standard deviation of 0.2 Å reproduced the azimuthal angular-scan spectrum around the [110] axis. However, the observed angular spectrum is significantly broader than the simulated one, probably because there exist Tm lattice sites slightly displaced from the exact tetrahedral position. For Ge(111) substrates, 25 at.% of the Tm occupied the tetrahedral interstitial site and the rest was located randomly.
Energy Technology Data Exchange (ETDEWEB)
Videira, Heber S.; Burkhardt, Guilherme M.; Santos, Ronielly S., E-mail: heber@cyclopet.com.br [Cyclopet Radiofarmacos Ltda., Curitiba, PR (Brazil); Passaro, Bruno M.; Gonzalez, Julia A.; Santos, Josefina; Guimaraes, Maria I.C.C. [Universidade de Sao Paulo (HCFMRP/USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Hospital das Clinicas; Lenzi, Marcelo K. [Universidade Federal do Parana (UFPR), Curitiba (Brazil). Programa de Pos-Graduacao em Engenharia Quimica
2013-04-15
The technological advances in positron emission tomography (PET) in conventional clinical imaging have led to a steady increase in the number of cyclotrons worldwide. Most of these cyclotrons are used to produce {sup 18}F-FDG, both for the host site and for distribution to other centers that have PET. For radiological safety purposes, cyclotrons intended for medical use are classified as category I or category II, i.e., self-shielded or non-shielded (bunker). The aim of this work is therefore to verify the effectiveness of the borated water shield built for a self-shielded PETtrace 860 cyclotron. The borated water mixtures were prepared in accordance with the manufacturer's specifications, and the radiometric survey in the vicinity of the cyclotron's self-shielding, under the conditions established by the manufacturer, showed that radiation levels were below the limits. (author)
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Harrisson, G.; Marleau, G.
2012-01-01
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Several options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, were tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
International Nuclear Information System (INIS)
Romdhani, Ibtissem
2014-01-01
As part of developing its nuclear infrastructure base, the National Center for Nuclear Sciences and Technologies (CNSTN) is examining the technical feasibility of setting up a new subcritical assembly installation. Our study focuses on determining the neutronic parameters of a zero-power nuclear reactor based on Monte Carlo simulation with MCNP. The objective of the simulation is to model the installation and determine the effective multiplication factor and the spatial distribution of the neutron flux.
International Nuclear Information System (INIS)
Sdouz, G.
1980-09-01
The computer program STOSS determines the path of a particle in a heterogeneous medium in three dimensions. The program can be used as a module in Monte Carlo calculations. The collision can be transferred from the centre-of-mass system into a fixed Cartesian coordinate system by means of appropriate transformations. Then the path length is determined and the location of the next collision is calculated. The computational details are discussed at some length. (auth.)
Lewis, Susan J; Kays, Michael B; Mueller, Bruce A
2016-10-01
Pharmacokinetic/pharmacodynamic analyses with Monte Carlo simulations (MCSs) can be used to integrate prior information on model parameters into a new renal replacement therapy (RRT) to develop optimal drug dosing when pharmacokinetic trials are not feasible. This study used MCSs to determine initial doripenem, imipenem, meropenem, and ertapenem dosing regimens for critically ill patients receiving prolonged intermittent RRT (PIRRT). Published body weights and pharmacokinetic parameter estimates (nonrenal clearance, free fraction, volume of distribution, extraction coefficients) with variability were used to develop a pharmacokinetic model. MCS of 5000 patients evaluated multiple regimens in 4 different PIRRT effluent/duration combinations (4 L/h × 10 hours or 5 L/h × 8 hours in hemodialysis or hemofiltration) occurring at the beginning or 14-16 hours after drug infusion. The probability of target attainment (PTA) was calculated using ≥40% free serum concentrations above 4 times the minimum inhibitory concentration (MIC) for the first 48 hours. Optimal doses were defined as the smallest daily dose achieving ≥90% PTA in all PIRRT combinations. At the MIC of 2 mg/L for Pseudomonas aeruginosa, optimal doses were doripenem 750 mg every 8 hours, imipenem 1 g every 8 hours or 750 mg every 6 hours, and meropenem 1 g every 12 hours or 1 g pre- and post-PIRRT. Ertapenem 500 mg followed by 500 mg post-PIRRT was optimal at the MIC of 1 mg/L for Streptococcus pneumoniae. Incorporating data from critically ill patients receiving RRT into MCS resulted in markedly different carbapenem dosing regimens in PIRRT from those recommended for conventional RRTs because of the unique drug clearance characteristics of PIRRT. These results warrant clinical validation. © 2016, The American College of Clinical Pharmacology.
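The probability-of-target-attainment calculation described here can be sketched with a deliberately simplified one-compartment IV bolus model. All PK parameter values below are illustrative placeholders (not the published carbapenem estimates), and the PIRRT effluent clearance the study models is omitted; the sketch only shows how ≥40% fT>MIC is scored across simulated patients.

```python
import numpy as np

def pta(dose_mg, mic, tau_h=8.0, target=0.40, n_patients=5000, seed=2):
    """Probability of target attainment for >= 40% fT>MIC over one
    dosing interval, using a one-compartment IV bolus model with
    log-normal interpatient variability. All PK parameters here are
    illustrative placeholders, not the published carbapenem values,
    and renal replacement therapy clearance is not modeled."""
    rng = np.random.default_rng(seed)
    free_fraction = 0.8                                   # assumed
    cl = rng.lognormal(np.log(8.0), 0.3, n_patients)      # clearance, L/h
    v = rng.lognormal(np.log(25.0), 0.25, n_patients)     # volume, L
    k = cl / v                                            # elimination rate, 1/h
    c0_free = free_fraction * dose_mg / v                 # free conc. at t=0, mg/L
    t_above = np.log(np.maximum(c0_free / mic, 1.0)) / k  # hours above MIC
    return float(np.mean(np.minimum(t_above, tau_h) / tau_h >= target))

pta_low_mic = pta(1000.0, 1.0)    # easier target
pta_high_mic = pta(1000.0, 8.0)   # harder target
```

With the same random seed, raising the MIC can only shrink each patient's time above MIC, so PTA is monotonically non-increasing in MIC.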
International Nuclear Information System (INIS)
Cullen, D.E.
1980-01-01
Program GROUPIE reads evaluated data in the ENDF/B format and uses these data to calculate Bondarenko self-shielded cross sections and multiband parameters. For maximum generality, the program allows the user to specify arbitrary energy groups and an arbitrary energy-dependent neutron spectrum (weighting function). To guarantee the accuracy of the results, all integrals are performed analytically; in no case is iteration or any approximate form of integration used. The output from this program includes both listings and multiband parameters suitable for use either in a normal multigroup transport calculation or in a multiband transport calculation. A listing of the source deck is available on request.
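The Bondarenko weighting that GROUPIE applies can be sketched numerically. Here simple sums on a uniform energy grid stand in for GROUPIE's analytic integrals, and the resonance is synthetic; only the structure of the formula is the point:

```python
import numpy as np

def bondarenko_xs(sigma, phi, sigma0):
    """Bondarenko flux-weighted group cross section on a uniform grid:

        sig_eff = sum(sigma*phi/(sigma + sigma0)) / sum(phi/(sigma + sigma0))

    sigma0 is the background (dilution) cross section per absorber atom;
    sigma0 -> infinity recovers the infinitely dilute average."""
    w = phi / (sigma + sigma0)
    return np.sum(sigma * w) / np.sum(w)

# Synthetic resonance on a 1/E flux (uniform grid, energies in eV)
e = np.linspace(1.0, 100.0, 20_000)
sigma = 10.0 + 5000.0 / (1.0 + ((e - 50.0) / 0.5) ** 2)   # barns
phi = 1.0 / e

dilute = bondarenko_xs(sigma, phi, 1e10)     # ~ infinite dilution
shielded = bondarenko_xs(sigma, phi, 50.0)
f_factor = shielded / dilute                 # self-shielding factor, < 1
```

The 1/(sigma + sigma0) weight depresses the flux at the resonance peak, so the shielded average falls below the dilute one and the self-shielding factor is less than unity.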
Energy Technology Data Exchange (ETDEWEB)
Moreau, J; Parisot, B [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1969-07-01
The determination of neutron multiplication coefficients by the Monte Carlo method can be carried out in different ways; these are examined and compared in turn. From this comparison, a fast code for particularly complex geometries is derived; it makes use of multigroup isotropic cross sections. The performance of this code is illustrated by some examples. (author) [French] La determination des coefficients de multiplication neutronique par methode de Monte Carlo peut se faire par differentes voies, elles sont successivement examinees et comparees. On en deduit un code rapide pour des geometries particulierement complexes, il utilise des sections efficaces multigroupes isotropes. Les performances de ce code sont demontrees par quelques exemples. (auteur)
International Nuclear Information System (INIS)
Vaque, J. Puxeu
2016-01-01
dosimetry of conventional fields; to learn about detectors suitable for small fields; to learn about the role of Monte Carlo simulations in the determination of small-field output factors; to provide an overview of the IAEA small-field dosimetry recommendations; to provide an overview of the content of the ICRU report on Prescribing, Recording and Reporting of Small Field Radiation Therapy; to learn about special technical considerations in delivering IMRT and SBRT treatments; to appreciate specific challenges of IMRT implementation. J. Seuntjens, Natural Sciences and Engineering Research Council; Canadian Institutes of Health Research
International Nuclear Information System (INIS)
Bacchetta, Alessandro; Jung, Hannes; Kutak, Krzysztof
2010-02-01
A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
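The tuning strategy described (run the generator at several parameter values, approximate the parameter dependence by an analytic form, then fit that approximation to data instead of re-running the generator) can be sketched in one dimension. The "generator" below is a toy stand-in, not Cascade, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def toy_generator(p, rng):
    """Stand-in for an expensive MC generator run at parameter p:
    a noisy observable with quadratic parameter dependence."""
    return 2.0 * (p - 1.3) ** 2 + 0.5 + rng.normal(0.0, 0.01)

rng = np.random.default_rng(3)

# 1) Generate the observable at several values of the tunable parameter
p_grid = np.linspace(0.0, 3.0, 7)
obs = np.array([toy_generator(p, rng) for p in p_grid])

# 2) Approximate the parameter dependence by an analytic (polynomial) form
surrogate = np.poly1d(np.polyfit(p_grid, obs, 2))

# 3) Fit the surrogate to the "measured" value -- no further generator calls
measured = 0.5
best = minimize_scalar(lambda p: (surrogate(p) - measured) ** 2,
                       bounds=(0.0, 3.0), method="bounded")
```

Step 3 is where the speedup comes from: the minimization iterates over the cheap surrogate rather than calling the generator at every step.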
MCB. A continuous energy Monte Carlo burnup simulation code
International Nuclear Information System (INIS)
Cetnar, J.; Wallenius, J.; Gudowski, W.
1999-01-01
A code for integrated simulation of neutronics and burnup based upon continuous energy Monte Carlo techniques and transmutation trajectory analysis has been developed. Being especially well suited for studies of nuclear waste transmutation systems, the code is an extension of the well validated MCNP transport program of Los Alamos National Laboratory. Among the advantages of the code (named MCB) is a fully integrated data treatment combined with a time-stepping routine that automatically corrects for burnup dependent changes in reaction rates, neutron multiplication, material composition and self-shielding. Fission product yields are treated as continuous functions of incident neutron energy, using a non-equilibrium thermodynamical model of the fission process. In the present paper a brief description of the code and applied methods are given. (author)
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
International Nuclear Information System (INIS)
Mason, Grant W.; Spencer, Ross L.
2002-01-01
The 'self-shielding' m=1 diocotron mode in Malmberg-Penning traps has been known for over a decade to be unstable for finite length non-neutral plasmas with hollow density profiles. Early theoretical efforts were unsuccessful in accounting for the exponential growth and/or the magnitude of the growth rate. Recent theoretical work has sought to resolve the discrepancy either as a consequence of the shape of the plasma ends or as a kinetic effect resulting from a modified distribution function as a consequence of the protocol used to form the hollow profiles in experiments. Both of these finite length mechanisms have been investigated in selected test cases using a three-dimensional particle-in-cell code that allows realistic treatment of shape and kinetic effects. A persistent discrepancy of a factor of 2-3 remains between simulation and experimental values of the growth rate. Simulations reported here are more in agreement with theoretical predictions and fail to explain the discrepancy
International Nuclear Information System (INIS)
Komiya, Isao; Umezu, Yoshiyuki; Fujibuchi, Toshiou; Nakamura, Kazumasa; Baba, Shingo; Honda, Hiroshi
2016-01-01
The non-self-shielded compact medical cyclotron and its vault room were in operation for 27 years and have now been decommissioned. We implemented an efficient technique to identify activation products in the cyclotron vault room. First, the distribution of radioactive concentrations in the concrete of the vault room was estimated by calculation from the record of cyclotron operation. Second, the calculated results were compared with actual measurements performed using a NaI scintillation survey meter and a high-purity germanium detector. The calculated values were overestimates compared to the measured ones, but they served to bound the decontamination area. By simulating the activation range, we were able to minimize the concrete core sampling. Finally, the appropriate range of the activated area in the vault room was decontaminated based on the results of the calculation. After decontamination, the radioactive concentration was below the detection limit in all areas inside the vault room. By these procedures, the decommissioning of the cyclotron vault room was performed more efficiently. (author)
1 1/2 years of experience with a 10 MeV self-shielded on-line e-beam sterilization system
International Nuclear Information System (INIS)
Lambert, Byron; Tang, Fuh-Wei; Riggs, Brian; Allen, Thomas; Williams, C.B.
2000-01-01
The Vascular Intervention Group of the Guidant Corporation (Guidant VI) has been operating a self-shielded, 10 MeV, 4 kW electron beam sterilization system since July of 1998. The system was designed, built and installed in a 70 square meter area in an existing Guidant manufacturing facility by Titan Scan Corporation, and performance of the system was validated in conformance with ISO-11137 standards. The goal of this on-site e-beam system was 'just in time' (JIT) sterilization, i.e. the ability to manufacture, sterilize and ship high-intrinsic-value medical devices in less than 24 hours. The benefits of moving from a long gas sterilization cycle of greater than one week to a JIT process were envisioned to be (a) speed to market with innovative new products, (b) rapid response to customer requirements, (c) reduced inventory carrying costs and, finally, manufacturing and quality system efficiency. The ability of Guidant to realize these benefits depended upon the ability of the Guidant VI business units to adapt to the new sterilization modality and functionality, and on the overall system reliability. This paper reviews the operating experience to date and the overall system reliability. (author)
International Nuclear Information System (INIS)
Gonzalez, Dania Soguero; Ardanza, Armando Chavez
2013-01-01
This paper describes the process of installing a self-shielded category I irradiator, model ISOGAMMA LL.Co, loaded with 60Co of 25 kCi nominal activity, an absorbed dose rate of 8 kGy/h and a 5 L workload. The stages are described step by step: import; the customs procedure, which included the interview with the master of the transporting vessel; the monitoring of the entire process by the head of radiological protection of the importing Center; control of the levels of surface contamination of the sources' shipping container before its removal from the ship; the supervision of the national regulatory authority; and the transportation to the final destination. Details of the assembly of the installation and the opening of the transport container are outlined. The action plan previously developed for the case of occurrence of radiological events is presented, detailing the phase of loading of the radioactive sources by specialists of the company selling the facility (IZOTOP). Finally, the commissioning of the installation and the procedure of licensing for exploitation are described.
International Nuclear Information System (INIS)
Calderon, E; Siergiej, D
2014-01-01
Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased, and no standard exists to resolve this difference in measurement. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived, and the measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we decreased the deviation between the two detectors from 14.8% to 3.4%.
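The daisy-chaining and correction steps can be summarized in a small helper. The readings and the correction value below are hypothetical, and the chaining through an intermediate field is one common form of the technique, assumed here:

```python
def daisy_chained_of(m_small_diode, m_int_diode, m_int_chamber,
                     m_ref_chamber, k_small):
    """Daisy-chained small-field output factor: the diode ratio between
    the small field and an intermediate field is multiplied by a
    published Monte Carlo correction factor k_small, then carried up to
    the reference field through an ion-chamber ratio. All readings and
    the correction value used below are hypothetical."""
    return (m_small_diode / m_int_diode) * k_small * (m_int_chamber / m_ref_chamber)

of = daisy_chained_of(0.500, 0.900, 0.950, 1.000, 0.960)
```

The intermediate field is chosen large enough for the chamber to be reliable yet small enough for the diode's field-size dependence to be modest, so each detector is only used where it is trustworthy.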
International Nuclear Information System (INIS)
Cechak, T.
1982-01-01
Applying Gardner's method of double evaluation, one detector should be positioned such that its response is independent of the material density, while the second detector should be positioned so as to maximize changes in response due to density changes. Experimental scanning for the optimal energy is extremely time-demanding. A program based on the Monte Carlo method was therefore written, which addresses the magnitude of the error incurred when the computation of gamma radiation backscattering neglects multiply scattered photons, how this error depends on the atomic number of the scattering material, and whether the representation of individual scatterings in the spectrum of backscattered photons depends on the positioning of the detector. 42 detectors, 8 types of material and 10 different density values were considered. The computed dependences are given graphically. (M.D.)
Energy Technology Data Exchange (ETDEWEB)
Coste, M.
1994-01-01
This note describes in detail the self-shielding formalism used in the multigroup transport code APOLLO2. The self-shielded cross-sections are computed with the same scheme as in APOLLO1: we use two equivalences, first a heterogeneous/homogeneous equivalence which gives the reaction rates, and then a multigroup equivalence to obtain the cross-sections that preserve these reaction rates. However, numerous improvements have been implemented, especially in the homogenization step. Homogenization can be performed group by group with different models of the heavy slowing-down operator (statistical, intermediate and "wide resonance" models), which allows us to fit the resonance shapes correctly. Moreover, we can take the spatial interference between resonant isotopes into account exactly with the background matrix model. Consequently, we are now able to compute, for instance, the radial distribution of the resonant absorption inside a fuel pin. (author). 7 refs., 3 figs.
Yan, Yangqian; Blume, D
2016-06-10
The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b_{4} of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b_{4}, our b_{4} agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.
International Nuclear Information System (INIS)
Baly, L.; Martín, G.; Quesada, I.; Padilla, F.; Arteche, R.
2015-01-01
A new approach based on the Monte Carlo simulation is used to calculate the infinite matrix dose rate correction factors of gamma, beta and internal conversion radiations for 250 μm diameter grains of quartz and TLD500 chips. Here, the dependence of the correction factor on the radiation energy is initially calculated for each type of emitted particle, and with this result the correction factors for the 232Th and 238U series and 40K are determined. This analysis is made for dry soil and also for different levels of water content in it. The obtained beta correction factors for quartz are in good agreement with those previously reported. For the TLD500 chip certain differences with previously reported data are found. The analysis of the gamma water correction factor for quartz based on the Zimmerman equation shows the correspondence with the similar correction factor for electrons. In the case of the TLD500 chip a gamma water correction factor value of 1.0 was found. - Highlights: • A new approach based on Monte Carlo simulation is used to compute infinite matrix dose rate correction factors. • Infinite matrix models with real dimensions were analyzed within 3% uncertainties. • The dependence of grain size attenuation on particle energy is determined. • The same dependence for water correction factors is also analyzed
Energy Technology Data Exchange (ETDEWEB)
A. T. Till; M. Hanuš; J. Lou; J. E. Morel; M. L. Adams
2016-05-01
The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for cross-section averaging. As a result, MG often inaccurately treats important phenomena such as self-shielding variations across a material. From a finite-element viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we introduce the Finite-Element-with-Discontiguous-Support (FEDS) method, whose only approximation with respect to energy is that the angular flux is a linear combination of unknowns multiplied by basis functions. A basis function is non-zero only in the discontiguous set of energy intervals associated with its energy element. Discontiguous energy elements are generalizations of bands and are determined by minimizing a norm of the difference between snapshot spectra and their averages over the energy elements. We begin by presenting the theory of the FEDS method. We then compare to continuous-energy Monte Carlo for one-dimensional slab and two-dimensional pin-cell problems. We find FEDS to be accurate and efficient at producing quantities of interest such as reaction rates and eigenvalues. Results show that FEDS converges at a rate that is approximately first-order in the number of energy elements and that FEDS is less sensitive to the weighting spectrum than standard MG.
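The energy-element construction described above — grouping energy points so that snapshot spectra are well represented by their element-wise averages in the L2 sense — is equivalent to a clustering problem over energy grid points. The following is an illustrative sketch, not the authors' algorithm: the function name, the deterministic initialization, and the plain k-means iteration are our assumptions.

```python
import numpy as np

def energy_elements(snapshots, n_elements, n_iter=50):
    """Cluster energy grid points into discontiguous 'energy elements'.

    snapshots: (n_snapshots, n_energy) array of spectrum snapshots.
    Energy points whose snapshot values are similar land in the same
    element, which minimizes the L2 difference between each snapshot
    and its element-wise averages (the k-means objective).  Elements
    need not be contiguous in energy.
    """
    features = snapshots.T  # one feature vector per energy point
    # deterministic initialization: evenly spaced energy points
    idx = np.linspace(0, len(features) - 1, n_elements).astype(int)
    centers = features[idx].copy()
    for _ in range(n_iter):
        # assign each energy point to its nearest element center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update each center to the mean of its assigned points
        for k in range(n_elements):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels
```

With two snapshot spectra that are flat at ~1 in one energy range and ~10 in another, the routine assigns the two ranges to different elements regardless of whether the ranges are contiguous.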
International Nuclear Information System (INIS)
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
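The abstract does not spell out its new discrete-sampling method, but Walker's alias method is a classic example of how discrete sampling can be restructured for vector hardware: after a one-time table build, every sample is two table lookups with no data-dependent search loop. The sketch below is purely illustrative of that idea, not Brown's algorithm.

```python
import random

def build_alias(probs):
    """Build Walker alias tables for a discrete distribution.

    Sampling then needs only a uniform column pick plus one comparison,
    so it vectorizes trivially (no per-sample branching search)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias, cutoff = [0] * n, [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        cutoff[s], alias[s] = scaled[s], l       # column s: keep s below cutoff, else alias l
        scaled[l] -= 1.0 - scaled[s]             # donate mass from the large entry
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                      # leftovers are numerically ~1
        cutoff[i] = 1.0
    return cutoff, alias

def sample(cutoff, alias, rng):
    """Draw one index: pick a column, then a biased coin flip."""
    n = len(cutoff)
    i = int(rng.random() * n)
    return i if rng.random() < cutoff[i] else alias[i]
```

A frequency check against the input probabilities confirms the tables reproduce the distribution.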
International Nuclear Information System (INIS)
Miss, J.
1998-06-01
The goal of this thesis was a comprehensive study of the energy and angular distributions of backscattered photon radiation. This relation is generally described by the concept of the backscattering factor, or doubly differential albedo, which is useful for studying particle propagation through air via single or multiple reflections on materials. There are two principal numerical treatments of this problem: deterministic and probabilistic methods. We showed that deterministic methods give unsatisfactory results, which is why we chose to develop a new gamma-ray albedo estimator in the code TRIPOLI-4 (a three-dimensional Monte Carlo code). With it we were able to compute a large database of doubly differential albedos. A physical analysis of these data showed that albedos can be described simply by parametric functions, whose parameters were obtained by fitting the albedos of the database over the complete range of incident and reflected energies and directions. We thus produced a much smaller database of function coefficients instead of storing all the values of the doubly differential spectrum; any albedo can then be reconstructed by linear interpolation on the coefficients of the new library. (author)
Application of Monte Carlo method in determination of air absorbed rate from a full γ-ray spectrum
International Nuclear Information System (INIS)
Zhang Jiangyun; Huang Ning; Tang Lili; Liu Yanfang; Zhang Guanghua
2011-01-01
The dosimetric properties of a gamma-radiation field can be measured by a dose-rate meter or by a gamma-spectrometer with spectrum-dose conversion. One spectrum-dose conversion method evaluates the radiation dose directly by integrating the observed pulse-height spectrum weighted by a G(E) function, instead of performing spectrum analysis. In this paper, energy spectra of 11 single-energy point sources in 0.1-2.5 MeV are simulated using the Monte Carlo software MCNP5 for NaI(Tl) detectors of Φ 75 mm x 75 mm, Φ 50 mm x 50 mm and Φ 25 mm x 25 mm. These energy spectra are used as standard spectra to calculate the G(E) values and hence the corresponding doses. Comparing the results with theoretical values, the error is less than 1.5%. Finally, the G(E)-based γ dose rate of a radiation field agreed well (within 5%) with the dose rate measured by a dosimeter. (authors)
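The G(E)-function idea can be sketched in a few lines: fit a per-channel weighting function from simulated standard spectra with known doses, then obtain the dose of any measured spectrum as a weighted sum of its channels, with no unfolding. This is an illustrative reconstruction, not the authors' code — the tiny 3-channel response matrix, dose values, and function names are invented; a real application uses many channels and careful regularization.

```python
import numpy as np

def fit_g(standard_spectra, known_doses):
    """Fit a G(E) weighting vector from simulated mono-energetic
    'standard' spectra: find G minimizing ||S @ G - d||, so that
    integrating any spectrum against G yields its dose."""
    g, *_ = np.linalg.lstsq(standard_spectra, known_doses, rcond=None)
    return g

def dose_from_spectrum(spectrum, g):
    """Spectrum-dose conversion: channel counts weighted by G(E)."""
    return float(np.dot(spectrum, g))
```

By linearity, a spectrum that is a superposition of the standards yields the sum of their doses, which is what makes the method work for arbitrary fields.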
Viel, Alexandra; Coutinho-Neto, Maurício D; Manthe, Uwe
2007-01-14
Quantum dynamics calculations of the ground state tunneling splitting and of the zero point energy of malonaldehyde on the full dimensional potential energy surface proposed by Yagi et al. [J. Chem. Phys. 115, 10647 (2001)] are reported. The exact diffusion Monte Carlo and the projection operator imaginary time spectral evolution methods are used to compute accurate benchmark results for this 21-dimensional ab initio potential energy surface. A tunneling splitting of 25.7+/-0.3 cm-1 is obtained, and the vibrational ground state energy is found to be 15 122+/-4 cm-1. Isotopic substitution of the tunneling hydrogen reduces the tunneling splitting to 3.21+/-0.09 cm-1 and the vibrational ground state energy to 14 385+/-2 cm-1. The computed tunneling splittings are slightly higher than the experimental values, as expected from the potential energy surface, which slightly underestimates the barrier height, and they are slightly lower than the results from instanton theory obtained using the same potential energy surface.
Del Lama, L. S.; Godeli, J.; Poletti, M. E.
2017-08-01
The majority of breast carcinomas can be associated with the presence of calcifications before the development of a mass. However, overlapping tissues can obscure the visualization of microcalcification clusters due to the reduced contrast-to-noise ratio (CNR). One potential solution to this complication is the dual-energy (DE) technique, in which two different images are acquired at low (LE) and high (HE) energies or kVp to highlight specific lesions or cancel out the tissue background. In this work, the DE features were studied computationally through simulated acquisitions from a modified PENELOPE Monte Carlo code. The irradiation geometry considered typical distances used in digital mammography, a CsI detection system and an updated breast model composed of skin, microcalcifications and glandular and adipose tissues. The breast thickness ranged from 2 to 6 cm with glandularities of 25%, 50% and 75%, in which microcalcifications with dimensions from 100 up to 600 μm were positioned. In general, the results pointed to an efficiency index better than 87% for the microcalcification thicknesses and better than 95% for the glandular ratio. The simulations evaluated in this work can be used to optimize the elements of the DE imaging chain, making it a complementary tool to conventional single-exposure images, especially for the visualization and estimation of calcification thicknesses and glandular ratios.
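The tissue-cancellation step of the DE technique is commonly done by weighted log subtraction of the two exposures. The sketch below illustrates that standard idea only — the attenuation coefficients and thicknesses are hypothetical round numbers, not values from this study.

```python
import math

def de_signal(i_low, i_high, w):
    """Dual-energy weighted log subtraction: S = ln(I_H) - w * ln(I_L)."""
    return math.log(i_high) - w * math.log(i_low)

def cancel_weight(mu_low, mu_high):
    """Weight that cancels a background material whose linear attenuation
    coefficients at the two beam energies are mu_low and mu_high:
    the background term -mu_high*t + w*mu_low*t vanishes for any t."""
    return mu_high / mu_low
```

With hypothetical background coefficients (0.8 and 0.5 cm^-1), the DE signal is zero for any background thickness, while an added high-attenuation inclusion (a stand-in for a calcification) leaves a nonzero residual — exactly the contrast the technique seeks.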
Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Leppaenen, J.
2007-01-01
High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
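The delta-tracking (Woodcock) method at the core of the PSG routine can be illustrated with a minimal one-dimensional sketch. This is not PSG's implementation — the geometry, majorant, and cross-section below are arbitrary illustrative values.

```python
import math
import random

def transmit(sigma_of_x, sigma_maj, length, rng):
    """Woodcock delta-tracking through a 1-D slab.

    Path lengths are sampled from the constant majorant cross-section
    sigma_maj >= sigma(x); a sampled collision is accepted as real with
    probability sigma(x)/sigma_maj, otherwise it is a virtual collision
    and the particle keeps going.  Returns True if the particle escapes.
    """
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_maj   # flight to next tentative collision
        if x >= length:
            return True                            # escaped the slab
        if rng.random() < sigma_of_x(x) / sigma_maj:
            return False                           # real collision

rng = random.Random(42)
sigma = lambda x: 1.0        # uniform medium, unit total cross-section
n = 20000
escaped = sum(transmit(sigma, 2.0, 1.0, rng) for _ in range(n))
```

For a uniform unit-cross-section slab of one mean free path, the escape fraction should approach exp(-1) ≈ 0.37; the point of the method is that the same loop works unchanged for an arbitrarily heterogeneous sigma(x), such as dispersed fuel particles.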
Use of the GEANT4 Monte Carlo to determine three-dimensional dose factors for radionuclide dosimetry
Energy Technology Data Exchange (ETDEWEB)
Amato, Ernesto, E-mail: eamato@unime.it [University of Messina, Department of Biomedical Sciences and of Morphologic and Functional Imaging, Section of Radiological Sciences (Italy); Italiano, Antonio [INFN – Istituto Nazionale di Fisica Nucleare, Gruppo Collegato di Messina (Italy); Minutoli, Fabio; Baldari, Sergio [University of Messina, Department of Biomedical Sciences and of Morphologic and Functional Imaging, Section of Radiological Sciences (Italy)
2013-04-21
Voxel-level dosimetry is the simplest and most common approach to internal dosimetry of nonuniform distributions of activity within the human body. The aim of this work was to obtain the dose “S” factors (mGy/MBq·s) at the voxel level for eight beta and beta–gamma emitting radionuclides commonly used in nuclear medicine diagnostic and therapeutic procedures. We developed a Monte Carlo simulation in GEANT4 of a region of soft tissue as defined by the ICRP, divided into 11×11×11 cubic voxels, 3 mm on a side. The simulation used the parameterizations of the electromagnetic interaction optimized for low energy (EEDL, EPDL). The decay of each radionuclide (32P, 90Y, 99mTc, 177Lu, 131I, 153Sm, 186Re, 188Re) was simulated as homogeneously distributed within the central voxel (0,0,0), and the energy deposited in the surrounding voxels was averaged over the 8 octants of three-dimensional space, for reasons of symmetry. The results obtained were compared with those available in the literature. While the iodine deviations remain within 16%, for phosphorus, a pure beta emitter, the agreement is very good for the self-dose (0,0,0) and good for the dose to first neighbors, while differences ranging from −60% to +100% are observed for voxels far from the source. The existence of significant percentage differences in the calculated voxel S factors, especially for pure beta emitters such as 32P or 90Y, has already been highlighted by other authors. These data can usefully extend the voxel-based dosimetric approach to other radionuclides not covered in the available literature.
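The octant-averaging step mentioned above (averaging the energy deposits over the 8 octants around the source voxel, justified by the symmetry of an isotropic source) can be sketched as follows. The NumPy-flip formulation and function name are our own illustration, not the authors' code.

```python
import numpy as np

def octant_average(dose):
    """Average a (2n+1)^3 voxel dose grid over the 8 octants around the
    central source voxel.  For an isotropic source the true dose is
    symmetric under each axis flip, so averaging the Monte Carlo tallies
    over all 8 sign combinations reduces statistical noise."""
    acc = np.zeros_like(dose, dtype=float)
    for fx in (1, -1):
        for fy in (1, -1):
            for fz in (1, -1):
                acc += dose[::fx, ::fy, ::fz]   # reflected copy of the grid
    return acc / 8.0
```

The output is exactly symmetric under every axis flip, and the central (self-dose) voxel is left unchanged.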
Directory of Open Access Journals (Sweden)
2008-05-01
Full Text Available Interview (in Spanish). Presentation: Carlos Romero, a political scientist, is a professor-researcher at the Institute of Political Studies of the Faculty of Legal and Political Sciences of the Universidad Central de Venezuela, where he has served as doctoral program coordinator, deputy director and director of the Center for Postgraduate Studies. He has published eight books on political analysis and international relations, one of the most recent being Jugando con el globo. La política exter...
Energy Technology Data Exchange (ETDEWEB)
Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.
2013-07-01
Within working group WG7 (Internal Dosimetry) of the EURADOS organization (European Radiation Dosimetry Group e.V.), which is coordinated by CIEMAT, an international action for the in vivo measurement of americium in three skull-type phantoms has been conducted with germanium detectors, by gamma spectrometry and by simulation with Monte Carlo methods. The action was organized as two separate exercises, with the participation of institutions from Europe, America and Asia. Other similar actions preceded this intercomparison of in vivo measurement and Monte Carlo modelling. The preliminary results and associated findings are presented in this work. The body radioactivity counter laboratory (CRC) of the Internal Personal Dosimetry service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, while the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)
Rico-Contreras, José Octavio; Aguilar-Lasserre, Alberto Alfonso; Méndez-Contreras, Juan Manuel; López-Andrés, Jhony Josué; Cid-Chama, Gabriela
2017-11-01
The objective of this study is to determine the economic return of poultry litter combustion in boilers to produce bioenergy (thermal and electrical), as this biomass has a high-energy potential due to its component elements, using fuzzy logic to predict moisture and identify the high-impact variables. This is carried out using a proposed 7-stage methodology, which includes a statistical analysis of agricultural systems and practices to identify activities contributing to moisture in poultry litter (for example, broiler chicken management, number of air extractors, and avian population density), and thereby reduce moisture to increase the yield of the combustion process. Estimates of poultry litter production and heating value are made based on 4 different moisture content percentages (scenarios of 25%, 30%, 35%, and 40%), and then a risk analysis is proposed using the Monte Carlo simulation to select the best investment alternative and to estimate the environmental impact for greenhouse gas mitigation. The results show that dry poultry litter (25%) is slightly better for combustion, generating 3.20% more energy. Reducing moisture from 40% to 25% involves considerable economic investment due to the purchase of equipment to reduce moisture; thus, when calculating financial indicators, the 40% scenario is the most attractive, as it is the current scenario. Thus, this methodology proposes a technology approach based on the use of advanced tools to predict moisture and representation of the system (Monte Carlo simulation), where the variability and uncertainty of the system are accurately represented. Therefore, this methodology is considered generic for any bioenergy generation system and not just for the poultry sector, whether it uses combustion or another type of technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Noh, Yeelyong; Chang, Kwangpil; Seo, Yutaek; Chang, Daejun
2014-01-01
This study proposes a new methodology that combines dynamic process simulation (DPS) and Monte Carlo simulation (MCS) to determine the design pressure of fuel storage tanks on LNG-fueled ships. Because the pressure of such tanks varies with time, DPS is employed to predict the pressure profile. Though equipment failure and subsequent repair affect transient pressure development, it is difficult to implement these features directly in the process simulation due to the randomness of the failure. To predict the pressure behavior realistically, MCS is combined with DPS. In MCS, discrete events are generated to create a lifetime scenario for a system. The combination of MCS with long-term DPS reveals the frequency of the exceedance pressure. The exceedance curve of the pressure provides risk-based information for determining the design pressure based on risk acceptance criteria, which may vary with different points of view. - Highlights: • The realistic operation scenario of the LNG FGS system is estimated by MCS. • In repeated MCS trials, the availability of the FGS system is evaluated. • The realistic pressure profile is obtained by the proposed methodology. • The exceedance curve provides risk-based information for determining design pressure
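The last step described above — reading a design pressure off the empirical exceedance curve against a risk acceptance criterion — can be sketched in a few lines. The sample values and acceptance criterion below are invented for illustration; a real analysis would use peak pressures from many combined DPS/MCS lifetime scenarios.

```python
def exceedance(samples, threshold):
    """Empirical exceedance frequency: fraction of simulated lifetime
    scenarios whose peak tank pressure exceeds the threshold."""
    return sum(1 for p in samples if p > threshold) / len(samples)

def design_pressure(samples, acceptance):
    """Smallest sampled pressure whose exceedance frequency meets the
    risk acceptance criterion (the design point on the curve)."""
    for p in sorted(samples):
        if exceedance(samples, p) <= acceptance:
            return p
```

For example, with ten hypothetical scenario maxima of 1..10 bar and an acceptance criterion of 0.2, the design pressure is the value exceeded in only 2 of 10 scenarios.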
Energy Technology Data Exchange (ETDEWEB)
Rios, D.A.S.; Rios, P.B., E-mail: denise@inovafi.com.br [Inovafi Física aplicada à Inovação Ltda, Sorocaba, SP (Brazil); Sordi, G.M.A.A.; Carneiro, J.C.G.G. [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)
2017-07-01
The study discusses situations in the industrial environment that may lead to potential exposure of occupationally exposed individuals and members of the public at self-shielded electron accelerators. Although these exposure situations are unlikely, simulation exercises can lead to improvements in the operating procedure as well as suggest changes in production-line design in order to increase radiation protection at work. These studies can also be used in training and demonstrate a solid application of the ALARA principle in the daily activities of radioactive installations.
International Nuclear Information System (INIS)
Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B.
2010-01-01
Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate 192 Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside in ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate (∼0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a 192 Ir wire source.
International Nuclear Information System (INIS)
Randriantsizafy, R.D.
2014-01-01
Brachytherapy is a precise and effective means of cancer treatment, owing to the close proximity of the ionizing radiation sources to the tumor. This precision and efficiency require accurate dosimetry and good knowledge of the dose distribution in the patient. The aim is to deliver the right dose of ionizing radiation to destroy the tumor while reducing the dose to sensitive organs such as the bladder and liver. The Monte Carlo method is a recognized approach for modelling the distribution of radiation in matter. It is used in this work to determine the doses to organs during treatment planning for cesium-137 brachytherapy. The programming language used is Python. The library resulting from this work is used in BrachyPy, a web application we designed to replace manual processing in Cs-137 brachytherapy planning. Model validation is done by comparing the isodose curves of the model with the NUCLETRON isodose charts and with the latest report of the American Association of Physicists in Medicine (AAPM) on the amendment to the TG-43 algorithm. [fr]
International Nuclear Information System (INIS)
Berne, A.
2001-01-01
Quantitative determinations of many radioactive analytes in environmental samples are based on a process in which several independent measurements of different properties are taken. The final results that are calculated using the data have to be evaluated for accuracy and precision. The estimate of the standard deviation, s, also called the combined standard uncertainty (CSU) associated with the result of this combined measurement can be used to evaluate the precision of the result. The CSU can be calculated by applying the law of propagation of uncertainty, which is based on the Taylor series expansion of the equation used to calculate the analytical result. The estimate of s can also be obtained from a Monte Carlo simulation. The data used in this simulation includes the values resulting from the individual measurements, the estimate of the variance of each value, including the type of distribution, and the equation used to calculate the analytical result. A comparison is made between these two methods of estimating the uncertainty of the calculated result. (author)
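The comparison described above can be reproduced for a toy combined measurement — a simple ratio R = A/B with independent normal uncertainties (all values invented for illustration): the first-order Taylor (law of propagation of uncertainty) estimate of s should closely match the standard deviation of a Monte Carlo resampling of the inputs.

```python
import math
import random

# Toy analytical result R = A / B with independent normal uncertainties.
a, sa = 100.0, 1.0
b, sb = 50.0, 0.5

# First-order (Taylor series) propagation of uncertainty for a ratio:
# (s_R / R)^2 = (s_A / A)^2 + (s_B / B)^2
taylor = (a / b) * math.sqrt((sa / a) ** 2 + (sb / b) ** 2)

# Monte Carlo estimate: resample the inputs, recompute R, take the std.
rng = random.Random(0)
vals = [rng.gauss(a, sa) / rng.gauss(b, sb) for _ in range(100_000)]
mean = sum(vals) / len(vals)
mc = math.sqrt(sum((v - mean) ** 2 for v in vals) / (len(vals) - 1))
```

For a mildly nonlinear function like a ratio with small relative uncertainties, the two estimates agree to within a few percent; the Monte Carlo route also exposes any skew in the output distribution that the first-order formula cannot capture.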
Monte Carlo-based development of a shield and total background estimation for the COBRA experiment
International Nuclear Information System (INIS)
Heidrich, Nadine
2014-11-01
The COBRA experiment aims at measuring neutrinoless double beta decay and thus at determining the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be on the order of 10⁻³ counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13824 CdZnTe detectors of 6 cm³ each. For the shield development, single-layer and multi-layer shields were investigated and the design was optimized with respect to high-energy muon-induced neutrons. The best design was found to be the combination of 10 cm of boron-doped polyethylene as the outermost layer, 20 cm of lead, and 10 cm of copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2×10⁻⁶ counts/kg/keV/yr. Additionally, the shield, with a thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources, including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contribution to the total background rate of the individual detector parts and background sources was investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles, amounting to 99% in total. Regarding surface events a contribution of 26
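As a back-of-the-envelope companion to the full Monte Carlo treatment the thesis uses, the attenuation of a multilayer stack can be roughed out with exponential removal cross-sections. The cross-section values below are hypothetical placeholders chosen only to make the arithmetic concrete, not data from the thesis.

```python
import math

def attenuation(layers):
    """Crude exponential estimate of fast-neutron attenuation through a
    stack of shield layers: the product of exp(-Sigma_removal * t) over
    the layers.  (A real shield design, as in the thesis, requires full
    Monte Carlo transport to capture scattering and (n, gamma) effects.)"""
    return math.exp(-sum(sigma * t for sigma, t in layers))

# Hypothetical removal cross-sections (1/cm) for the 40 cm stack:
shield = [(0.09, 10.0),   # borated polyethylene, outermost
          (0.12, 20.0),   # lead
          (0.13, 10.0)]   # copper, innermost
```

With these placeholder numbers the stack transmits exp(-4.6), i.e. about 1% of incident fast neutrons; the point of the sketch is only how layer contributions combine multiplicatively.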
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
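The ABC idea underlying the ABC-SMC method above can be illustrated with its simplest relative, rejection ABC, on a toy one-parameter model. Everything below — the normal-mean model, the flat prior range, and the tolerance — is invented for illustration and has nothing to do with the paper's metabolic network.

```python
import random

def abc_rejection(observed_mean, n_data, n_trials, tol, rng):
    """Minimal ABC rejection sampler.

    Draw a parameter from the prior, simulate a data set under it, and
    keep the draw if the simulated summary statistic (here, the sample
    mean) lies within `tol` of the observed one.  The accepted draws
    approximate the posterior without ever evaluating a likelihood."""
    accepted = []
    for _ in range(n_trials):
        theta = rng.uniform(0.0, 10.0)                              # flat prior
        sim = sum(rng.gauss(theta, 1.0) for _ in range(n_data)) / n_data
        if abs(sim - observed_mean) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(7)
post = abc_rejection(observed_mean=5.0, n_data=20, n_trials=20_000,
                     tol=0.2, rng=rng)
```

ABC-SMC improves on this by propagating a weighted particle population through a decreasing sequence of tolerances, which is far more efficient when, as in the paper, acceptance at the final tolerance would otherwise be rare.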
Energy Technology Data Exchange (ETDEWEB)
Reynoso, F [UT MD Anderson Cancer Center, Houston, TX (United States); Washington University School of Medicine, St. Louis, MO (United States); Munro, J [Source Production & Equipment Co., Inc., St. Rose, LA (United States); Cho, S [UT MD Anderson Cancer Center, Houston, TX (United States)
2016-06-15
Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, described in the current investigators’ published MC optimization study, was modified based on the source manufacturer’s detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source as actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model, in order to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per hr U, indicating about a 10% decrease from the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The results suggest that the use of titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby leading to an increased dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.
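For readers unfamiliar with the TG-43 formalism named above: in its one-dimensional point-source approximation it reduces to a short formula combining the air-kerma strength S_K, dose rate constant Λ, inverse-square geometry factor, radial dose function g(r), and anisotropy factor. The sketch below shows that structure only; the Λ = 1.05 and g(r) = 1 used in the usage check are placeholder values, not the manufactured source's data.

```python
def tg43_point_dose_rate(s_k, dose_rate_const, r, g, phi_an=1.0, r0=1.0):
    """AAPM TG-43 point-source (1-D) approximation:

        D(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an(r)

    s_k: air-kerma strength (U); dose_rate_const: Lambda (cGy/h per U);
    r: distance in cm; g: radial dose function; phi_an: anisotropy
    factor; r0: reference distance, conventionally 1 cm."""
    return s_k * dose_rate_const * (r0 / r) ** 2 * g(r) * phi_an
```

With g(r) set to 1, the dose rate simply falls off as the inverse square: 1.05 at 1 cm becomes 0.2625 at 2 cm per unit S_K.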
International Nuclear Information System (INIS)
Smith, Rachel L.; Young, Edward D.; Pontoppidan, Klaus M.; Morris, Mark R.; Van Dishoeck, Ewine F.
2009-01-01
Using very high resolution (λ/Δλ ∼ 95 000) 4.7 μm fundamental and 2.3 μm overtone rovibrational CO absorption spectra obtained with the Cryogenic Infrared Echelle Spectrograph infrared spectrometer on the Very Large Telescope (VLT), we report detections of four CO isotopologues-C 16 O, 13 CO, C 18 O, and the rare species, C 17 O-in the circumstellar environment of two young protostars: VV CrA, a binary T Tauri star in the Corona Australis molecular cloud, and Reipurth 50, an intermediate-mass FU Ori star in the Orion Molecular Cloud. We argue that the observed CO absorption lines probe a protoplanetary disk in VV CrA, and a protostellar envelope in Reipurth 50. All CO line profiles are spectrally resolved, with intrinsic line widths of ∼3-4 km s -1 (FWHM), permitting direct calculation of CO oxygen isotopologue ratios with 5%-10% accuracy. The rovibrational level populations for all species can be reproduced by assuming that CO absorption arises in two temperature regimes. In the higher temperature regime, in which the column densities are best determined, the derived oxygen isotope ratios in VV CrA are: [C 16 O]/[C 18 O] =690 ± 30; [C 16 O]/[C 17 O] =2800 ± 300, and [C 18 O]/[C 17 O]=4.1 ± 0.4. For Reipurth 50, we find [C 16 O]/[C 18 O] =490 ± 30; [C 16 O]/[C 17 O] =2200 ± 150, [C 18 O]/[C 17 O] = 4.4 ± 0.2. For both objects, 12 C/ 13 C are on the order of 100, nearly twice the expected interstellar medium (ISM) ratio. The derived oxygen abundance ratios for the VV CrA disk show a significant mass-independent deficit of C 17 O and C 18 O relative to C 16 O compared to ISM baseline abundances. The Reipurth 50 envelope shows no clear differences in oxygen CO isotopologue ratios compared with the local ISM. A mass-independent fractionation can be interpreted as being due to selective photodissociation of CO in the disk surface due to self-shielding. The deficits in C 17 O and C 18 O in the VV CrA protoplanetary disk are consistent with an analogous
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study complements the part of the ENEA report concerning calculations with the MCNP-4C code of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. At higher energies, however, large differences appear, mainly due to the lack of electronic equilibrium, especially at small angles of incidence. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is deposited by secondary electrons. PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
International Nuclear Information System (INIS)
Krongkietlearts, K; Tangboonduangjit, P; Paisangittisakul, N
2016-01-01
Radiation therapy techniques are constantly evolving to improve the quality of life of cancer patients. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They combine many small beams (beamlets) with various intensities to deliver the intended radiation dose to the tumor while minimizing the dose to nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The Monte Carlo simulation was calibrated using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm 2 field size at 100 cm SSD. The calculated and measured values are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector were compared with those from the SFD and the Monte Carlo simulation, and the results agree to within 2%. (paper)
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
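The review above names rejection sampling, importance sampling and MCMC. As a minimal illustration of the last of these (a generic sketch, not taken from the paper), a random-walk Metropolis sampler for a one-dimensional target might look like:

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D unnormalized log-density."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lpp = log_target(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)                      # rejected moves repeat x
    return samples

# Sample a standard normal (log-density -x^2/2 up to a constant) and
# estimate its mean after discarding a burn-in period.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
burned = chain[5_000:]
mean = sum(burned) / len(burned)
```

The chain mean should be close to 0, the mean of the target; the burn-in discard is the usual practical precaution the review alludes to.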
Murthy, K. P. N.
2001-01-01
An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and Importance sampling (exponential b...
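Of the techniques listed above, the contrast between analogue Monte Carlo and importance sampling is easy to demonstrate on a Gaussian tail probability (an illustrative example of my own, not from the lecture notes):

```python
import math
import random

rng = random.Random(42)
N = 100_000

# Target: p = P(X > 3) for X ~ N(0, 1); exact value is about 1.35e-3.
# Analogue (naive) Monte Carlo: almost every sample misses the tail.
naive = sum(rng.gauss(0, 1) > 3 for _ in range(N)) / N

# Importance sampling: draw from the shifted density N(3, 1) and reweight
# by the density ratio w(x) = phi(x)/phi(x-3) = exp(-3x + 4.5).
est = 0.0
for _ in range(N):
    x = rng.gauss(3, 1)
    if x > 3:
        est += math.exp(-3.0 * x + 4.5)
est /= N
```

With the same number of samples, the importance-sampling estimate has a far smaller variance than the analogue one, which is the point of the technique.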
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β eff , l eff , τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: Convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
International Nuclear Information System (INIS)
Mercier, B.
1985-04-01
We have shown that the transport equation can be solved with particles, as in the Monte-Carlo method, but without random numbers. In the Monte-Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source, with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source, and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte-Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kinds of applications: problems where streaming is dominant rather than collision-dominated problems
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article Volume 19 Issue 8 August 2014 pp 713-739 ...
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. Next to the radiography tally two alternative stochastic detector models were developed: A perfect energy integrating detector and a detector based on the energy absorbed in the detector material. Validation of three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with the published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy integrating detectors and the blur-free absorbed energy detector model were, on the average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy integrating detectors were, on the average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed energy detector model, the calculated SPRs were, on the average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy integrating detector model and the blur-free energy absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy absorbing detector model is the most appropriate image detector model. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
Energy Technology Data Exchange (ETDEWEB)
Mermigkis, Panagiotis G.; Tsalikis, Dimitrios G. [Department of Chemical Engineering, University of Patras, GR 26500 Patras (Greece); Institute of Chemical Engineering and High Temperature Chemical Processes, GR 26500 Patras (Greece); Mavrantzas, Vlasis G., E-mail: vlasis@chemeng.upatras.gr [Department of Chemical Engineering, University of Patras, GR 26500 Patras (Greece); Institute of Chemical Engineering and High Temperature Chemical Processes, GR 26500 Patras (Greece); Particle Technology Laboratory, Department of Mechanical and Process Engineering, ETH-Z, CH-8092 Zurich (Switzerland)
2015-10-28
A kinetic Monte Carlo (kMC) simulation algorithm is developed for computing the effective diffusivity of water molecules in a poly(methyl methacrylate) (PMMA) matrix containing carbon nanotubes (CNTs) at several loadings. The simulations are conducted on a cubic lattice to the bonds of which rate constants are assigned governing the elementary jump events of water molecules from one lattice site to another. Lattice sites belonging to PMMA domains of the membrane are assigned different rates than lattice sites belonging to CNT domains. Values of these two rate constants are extracted from available numerical data for water diffusivity within a PMMA matrix and a CNT pre-computed on the basis of independent atomistic molecular dynamics simulations, which show that water diffusivity in CNTs is 3 orders of magnitude faster than in PMMA. Our discrete-space, continuum-time kMC simulation results for several PMMA-CNT nanocomposite membranes (characterized by different values of CNT length L and diameter D and by different loadings of the matrix in CNTs) demonstrate that the overall or effective diffusivity, D{sub eff}, of water in the entire polymeric membrane is of the same order of magnitude as its diffusivity in PMMA domains and increases only linearly with the concentration C (vol. %) in nanotubes. For a constant value of the concentration C, D{sub eff} is found to vary practically linearly also with the CNT aspect ratio L/D. The kMC data allow us to propose a simple bilinear expression for D{sub eff} as a function of C and L/D that can describe the numerical data for water mobility in the membrane extremely accurately. Additional simulations with two different CNT configurations (completely random versus aligned) show that CNT orientation in the polymeric matrix has only a minor effect on D{sub eff} (as long as CNTs do not fully penetrate the membrane). We have also extensively analyzed and quantified sublinear (anomalous) diffusive phenomena over small to moderate
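The continuous-time lattice kMC scheme described above can be reduced to a minimal sketch. The following toy version (a hypothetical homogeneous 1-D lattice with unit spacing, not the PMMA/CNT model itself) draws exponential waiting times between jumps and recovers the diffusivity from the mean-square displacement, MSD = 2Dt in one dimension:

```python
import random

def kmc_diffusivity(rate_left, rate_right, t_max, n_walkers=2000, seed=1):
    """Continuous-time kMC for a 1-D lattice walker (unit lattice spacing).

    Each walker jumps left/right with the given rates; waiting times are
    exponential with the total rate. Returns D estimated from MSD = 2*D*t.
    """
    rng = random.Random(seed)
    total = rate_left + rate_right
    msd = 0.0
    for _ in range(n_walkers):
        x, t = 0, 0.0
        while True:
            t += rng.expovariate(total)   # waiting time to the next jump event
            if t > t_max:
                break                     # jump would occur after the horizon
            x += 1 if rng.random() < rate_right / total else -1
        msd += x * x
    msd /= n_walkers
    return msd / (2.0 * t_max)

# Homogeneous lattice, unit jump rate each way: D should be close to 1.
D = kmc_diffusivity(1.0, 1.0, t_max=50.0)
```

The heterogeneous PMMA/CNT case of the abstract amounts to making the bond rates site-dependent (fast inside CNT domains, slow in PMMA), with the same event loop.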
International Nuclear Information System (INIS)
Valdez, F. Roberto Fragoso; Alvarez Romero, J. Trinidad
2001-01-01
The functions Λ(r, z), G(r, θ), g(r) and F(r, θ) were calculated for the Amersham model CDCS-M-type 137 Cs source by means of Monte Carlo simulation using the PENELOPE code. These functions are required to verify and/or feed planning systems, or serve directly as input data for manual planning of the absorbed dose distribution according to the recommendations of TG 43, [1]. The values of the constant Λ(r, z) were determined as the quotient of the absorbed dose rate distribution in water and the air kerma strength in 'free air', S k . The values obtained for Λ(r, z) differ by up to 3% from those reported in the literature, being very sensitive to the cutoff energy for the electrons at the interface between the source encapsulation and the water
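For context, the quantities Λ, G, g and F named in this record combine in the standard AAPM TG-43 dose-rate equation (standard formalism, quoted here for reference rather than taken from the record):

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
\frac{G(r,\theta)}{G(r_0,\theta_0)}\, g(r)\, F(r,\theta),
\qquad r_0 = 1~\text{cm},\quad \theta_0 = \pi/2
```

so the dose rate constant Λ is the dose rate at the reference point per unit air kerma strength S_K, consistent with the quotient described in the abstract.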
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. Through this work the problems in vector processing of Monte Carlo codes on vector processors have become clear, and it is recognized that there are difficulties in obtaining good performance. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Energy Technology Data Exchange (ETDEWEB)
Palau, J M [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte-Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U{sup 235}, U{sup 238}, Hf) for reactivity prediction of slab-core critical experiments has been demonstrated. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)
Monte Carlo criticality analysis for dissolvers with neutron poison
International Nuclear Information System (INIS)
Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.
1987-01-01
A criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined in terms of the maximum cross-section approach. A set of related effective multiplication factors (K eff ) is calculated by the Monte Carlo method for the three cases. The numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)
Determination of boron over a large dynamic range by prompt-gamma activation analysis
International Nuclear Information System (INIS)
Harrison, R.K.; Landsberger, S.
2009-01-01
An evaluation of the PGAA method for the determination of boron across a wide dynamic range of concentrations was performed for trace levels up to 5 wt.% boron. This range encompasses a transition from neutron transparency to significant self-shielding conditions. To account for self-shielding, several PGAA techniques were employed. First, a calibration curve was developed in which a set of boron standards was tested and the count rate to boron mass curve was determined. This set of boron measurements was compared with an internal standard self-shielding correction method and with a method for determining composition using PGAA peak ratios. The advantages and disadvantages of each method are analyzed. The boron concentrations of several laboratory-grade chemicals and standard reference materials were measured with each method and compared. The evaluation of the boron content of nanocrystalline transition metals prepared with a boron-containing reducing agent was also performed with each of the methods tested. Finally, the k 0 method was used for non-destructive measurement of boron in catalyst materials for the characterization of new non-platinum fuel cell catalysts.
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.
Monte Carlo and Quasi-Monte Carlo Sampling
Lemieux, Christiane
2009-01-01
Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k eff ) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
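The acceleration described above can be illustrated deterministically: Wielandt's method iterates with a shifted, inverted operator, which compresses the spectrum and shrinks the dominance ratio. The sketch below (a hypothetical 2×2 matrix standing in for the fission transport operator, not an MCNP5 calculation) compares plain power iteration with a Wielandt-shifted iteration:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve2(A, b):
    """Direct 2x2 linear solve via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def power_iteration(A, tol=1e-10, max_iter=10_000):
    """Plain power iteration; error shrinks by the dominance ratio |l2/l1|."""
    x, k = [1.0, 1.0], 0.0
    for n in range(1, max_iter + 1):
        y = matvec(A, x)
        k_new = sum(y) / sum(x)            # eigenvalue estimate
        x = [yi / k_new for yi in y]       # renormalize the iterate
        if abs(k_new - k) < tol:
            return k_new, n
        k = k_new
    return k, max_iter

def wielandt_iteration(A, shift, tol=1e-10, max_iter=10_000):
    """Iterate on (A - shift*I)^-1; fast when shift is close to (above) l1."""
    B = [[A[0][0] - shift, A[0][1]], [A[1][0], A[1][1] - shift]]
    x, k = [1.0, 1.0], 0.0
    for n in range(1, max_iter + 1):
        y = solve2(B, x)                   # apply the shifted inverse
        mu = sum(y) / sum(x)               # eigenvalue of (A - shift*I)^-1
        x = [yi / mu for yi in y]
        k_new = shift + 1.0 / mu           # recover the eigenvalue of A
        if abs(k_new - k) < tol:
            return k_new, n
        k = k_new
    return k, max_iter

A = [[0.9, 0.4], [0.2, 0.8]]   # dominant eigenvalue ~ 1.1372, ratio ~ 0.49
k_pi, n_pi = power_iteration(A)
k_w, n_w = wielandt_iteration(A, shift=1.25)
```

Both iterations converge to the same eigenvalue, but the shifted iteration needs far fewer steps, which is the convergence improvement the abstract reports for the Monte Carlo analogue.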
International Nuclear Information System (INIS)
Khelifi, R.; Bode, P.
2016-01-01
MCNP5 has been used to simulate neutron transport from an 241 AmBe source and to optimize the design of a prompt gamma-ray neutron activation analysis (PGNAA) facility, which was subsequently constructed for quantification of total chlorine in water. Modeling calculations were performed to optimize the experimental set-up for Cl measurements in water. The optimization with MCNP5 focused on maximizing the thermal neutron flux, which improves the prompt gamma production after neutron capture in a water sample. The influence of the dimensions and materials of the neutron collimator, as well as the dimensions of the sample, was studied. A PGNAA facility with an 241 AmBe neutron source was built based on the optimized configuration and used to determine chlorine concentration. Measured values of the chlorine count rate were plotted versus the NaCl content in water; the count rate versus amount of chlorine shows a good correlation coefficient for the linear fit. The result shows PGNAA to be a valuable diagnostic tool for obtaining an indication of the salinity contamination of water. (author)
Energy Technology Data Exchange (ETDEWEB)
Miss, J
1998-06-01
The goal of this thesis was to study comprehensively the energy and angular distributions of backscattered photons. In general, this relation is described by the concept of the backscatter factor, or doubly differential albedo. This concept is useful for studying particle propagation in air by simple or multiple reflections on materials. There are two principal treatments to solve this problem numerically: deterministic and probabilistic methods. We showed that deterministic methods furnish unsatisfactory results, which is why we chose to develop a new gamma-ray albedo estimator in the code TRIPOLI4 (a three-dimensional Monte Carlo code). We have thus been able to compute a large data base of doubly differential albedos. A physical analysis of these data showed that albedos can be described simply by parametric functions. These parameters were obtained by fitting the albedos of the data base over a complete range of incident and reflected energies and directions. We thus produced a much smaller data base of function coefficients, instead of storing all the values of the doubly differential spectrum; any albedo can then be obtained by linear interpolation on the coefficients of the new library. (author) 63 refs.
Monte Carlo simulation applied to alpha spectrometry
International Nuclear Information System (INIS)
Baccouche, S.; Gharbi, F.; Trabelsi, A.
2007-01-01
Alpha-particle spectrometry is a widely used analytical method, in particular when dealing with pure alpha-emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of these phenomena using the GEANT simulation code from CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of the solid angle of detection is a very useful tool to determine the detection efficiency with very good accuracy.
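The geometrical (solid-angle) part of such a simulation is easy to reproduce. Below is a minimal sketch, not the authors' GEANT code, that estimates the detection efficiency of an isotropic point source on the axis of a circular detector and compares it with the analytic solid-angle formula; all parameter values are illustrative.

```python
import math
import random

def geometric_efficiency_mc(d, R, n=200_000, seed=1):
    """Monte Carlo estimate of the geometrical detection efficiency for an
    isotropic point source on the axis of a circular detector of radius R
    placed at distance d from the source."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)  # cos(theta), isotropic emission
        if mu <= 0.0:
            continue  # emitted away from the detector plane
        # Radial distance at which the ray crosses the detector plane.
        r = d * math.sqrt(1.0 - mu * mu) / mu
        if r <= R:
            hits += 1
    return hits / n

def geometric_efficiency_exact(d, R):
    """Analytic fractional solid angle for the same geometry."""
    return 0.5 * (1.0 - d / math.sqrt(d * d + R * R))

est = geometric_efficiency_mc(1.0, 1.0)
exact = geometric_efficiency_exact(1.0, 1.0)
```

For d = R the exact fractional solid angle is (1 - 1/√2)/2 ≈ 0.146, and the Monte Carlo estimate converges to it at the usual 1/√n rate.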
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to consider more priors; in other words, different priors can be integrated into one model using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by treating the logical dependence of neighboring points as prior information, which the BMC method uses to produce a predictive tool during the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as to the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
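The core idea of stopping once a desired accuracy is reached can be illustrated with plain Monte Carlo. This sketch is not the paper's Bayesian method; the stopping rule on the standard error and all parameters are illustrative assumptions.

```python
import math
import random

def mc_until_accuracy(sample, tol, min_n=1_000, max_n=1_000_000, seed=0):
    """Draw realizations until the standard error of the running mean
    drops below `tol`; returns (estimate, number of realizations used)."""
    rng = random.Random(seed)
    n = 0
    total = 0.0
    total_sq = 0.0
    while n < max_n:
        x = sample(rng)
        n += 1
        total += x
        total_sq += x * x
        if n >= min_n:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if math.sqrt(var / n) < tol:
                break
    return total / n, n

# Example: estimate a failure probability P(U < 0.3) to about +/- 0.005.
p_hat, n_used = mc_until_accuracy(
    lambda rng: 1.0 if rng.random() < 0.3 else 0.0, tol=0.005)
```

The stopping rule spends only as many realizations as the requested accuracy demands, which is the cost reduction the abstract refers to, here without any prior information.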
Monte Carlo principles and applications
Energy Technology Data Exchange (ETDEWEB)
Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center
1976-03-01
The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
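Two well-established sampling techniques of the kind outlined in such introductions, hit-or-miss and sample-mean (crude) Monte Carlo, can be sketched for the integral of x² over [0, 1], whose exact value is 1/3 (an illustrative example, not taken from the review):

```python
import random

def hit_or_miss(f, n, rng, f_max=1.0):
    """Hit-or-miss estimate of the integral of f on [0, 1], given a bound f_max."""
    hits = sum(1 for _ in range(n) if rng.random() * f_max <= f(rng.random()))
    return f_max * hits / n

def sample_mean(f, n, rng):
    """Crude (sample-mean) Monte Carlo estimate of the same integral."""
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(42)
f = lambda x: x * x
est_hm = hit_or_miss(f, 100_000, rng)
est_sm = sample_mean(f, 100_000, rng)  # both converge to 1/3
```

For this integrand the sample-mean estimator has the lower variance, one instance of the efficiency improvements such reviews discuss.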
International Nuclear Information System (INIS)
Dubi, A.; Gerstl, S.A.W.
1979-05-01
The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an untainted Santander native who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases
International Nuclear Information System (INIS)
Wollaber, Allan Benton
2016-01-01
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
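The "simple example" in the outline, estimating π by sampling points in the unit square, can be sketched as follows (an illustrative snippet, not the lecture's own code):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

pi_hat = estimate_pi(1_000_000)  # converges to pi at the 1/sqrt(n) rate
```

The 1/√n convergence rate is exactly what the Law of Large Numbers and the Central Limit Theorem in the outline justify.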
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to deterministic dynamics for Ising spins; this model may be useful for high-speed simulation of non-equilibrium phenomena.
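The algorithm described, Creutz's microcanonical "demon" method, can be sketched for the 2D Ising model. Randomness enters only through site selection, and the combined system-plus-demon energy is exactly conserved; the lattice size and demon energy below are illustrative assumptions.

```python
import random

def ising_energy(s, L):
    """Total Ising energy -sum(s_i * s_j) over nearest-neighbour bonds
    with periodic boundaries (each bond counted once)."""
    return -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                for i in range(L) for j in range(L))

def demon_ising(L=16, steps=50_000, demon=16, seed=3):
    """Creutz demon dynamics: a spin flip is accepted whenever the demon
    can pay its energy cost, so system + demon energy never changes and
    the demon energy never goes negative."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]  # start from the ground state
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        dE = 2 * s[i][j] * (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                            + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        if dE <= demon:  # demon absorbs (dE < 0) or supplies (dE > 0) energy
            s[i][j] = -s[i][j]
            demon -= dE
    return s, demon

spins, demon = demon_ising()
```

Because acceptance depends only on the integer demon energy, the method is far less sensitive to random number quality than Metropolis sampling, as the abstract notes.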
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
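The "inverse transform sampling" item in the outline can be sketched for the exponential distribution, whose CDF inverts in closed form (an illustrative snippet, not the lecture's code):

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U)/lam has CDF 1 - exp(-lam * x), i.e. Exponential(lam)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

xs = sample_exponential(2.0, 100_000)
mean = sum(xs) / len(xs)  # converges to 1/lam = 0.5
```

Rejection sampling, the other technique listed, is used instead when the inverse CDF has no closed form.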
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
Energy Technology Data Exchange (ETDEWEB)
Cooper, R.; Silva, J. H. F.; Trevisan, R. E.
2004-07-01
The present work concerns the characterization of API 5L-X80 pipeline joints welded with self-shielded flux-cored wire. The process was evaluated under preheating conditions with a uniform and steady heat input. All joints were welded in the flat position (1G), with the pipe turning and the torch stationary. Tube dimensions were 762 mm in external diameter and 16 mm in thickness. Welds were deposited in a single-V groove in six weld beads, at three levels of preheating temperature (room temperature, 100 °C, 160 °C); these temperatures were also maintained as interpass temperatures. The filler metal E71T8-K6, with mechanical properties different from the parent metal, was used in undermatched conditions. The weld characterization is presented through the results of tensile, hardness and impact tests, conducted according to API 1104, AWS and ASTM standards, with API 1104 and API 5L used as acceptance criteria. From the results obtained, it is appropriate to weld API 5L-X80 steel pipes with self-shielded flux-cored wires in conformance with the API standards, and no preheat temperature is necessary. (Author) 22 refs.
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k/sub eff/ and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
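The collision-by-collision logic described above can be sketched for the simplest possible case, an infinite homogeneous medium, where the analog estimate of k should converge to ν·Σf/Σa. The cross sections, ν, and history count below are illustrative assumptions, not values from the report.

```python
import random

def analog_k_inf(sig_s=0.6, sig_a=0.4, sig_f=0.1, nu=2.5,
                 histories=100_000, seed=5):
    """Analog Monte Carlo estimate of k-infinity: follow each source
    neutron collision by collision until absorption, sampling the fate
    at each collision from the cross-section ratios, and score the
    fission neutrons produced."""
    rng = random.Random(seed)
    sig_t = sig_s + sig_a
    produced = 0.0
    for _ in range(histories):
        while True:
            if rng.random() < sig_s / sig_t:
                continue  # scattered: keep following the neutron
            # Absorbed: fission with probability sig_f / sig_a.
            if rng.random() < sig_f / sig_a:
                produced += nu
            break
    return produced / histories

k = analog_k_inf()  # analytic answer: nu * sig_f / sig_a = 0.625
```

A production criticality code adds geometry tracking, batching, and source iteration on top of exactly this collision kernel.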
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. However, basic concepts of MC are both simple and straightforward and can be learned by using a personal computer. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam, coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track; 1) free path between successive interaction events, 2) type of interaction taking place and 3) energy loss and angular deflection in a particular event (and initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
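The first two pdfs listed, the free path between successive interactions and the type of interaction, sample especially simply; a minimal sketch with illustrative cross sections (not taken from the lecture):

```python
import math
import random

def sample_free_paths(sigma_t, n, seed=0):
    """Sample free flights s = -ln(1 - xi)/sigma_t from the exponential
    pdf; their mean converges to the mean free path 1/sigma_t."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / sigma_t for _ in range(n)]

def sample_interaction(rng, sigma_scatter, sigma_absorb):
    """Choose the interaction type in proportion to the partial cross
    sections, as in analog transport."""
    if rng.random() < sigma_scatter / (sigma_scatter + sigma_absorb):
        return "scatter"
    return "absorb"

paths = sample_free_paths(2.0, 100_000)
mfp = sum(paths) / len(paths)  # converges to 1/sigma_t = 0.5
```

Sampling the energy loss and angular deflection (the third pdf) works the same way, but from the differential cross sections of the chosen interaction.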
Monte Carlo Solutions for Blind Phase Noise Estimation
Directory of Open Access Journals (Sweden)
Çırpan Hakan
2009-01-01
Full Text Available This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN channels. The main contributions of the paper are (i the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii the interpretation of existing Monte Carlo solutions within this generic framework, and (iii the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.
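A sequential importance sampling (particle filter) phase tracker of the general kind considered can be sketched as follows. The random-walk phase model, known unit pilot symbols, and all parameter values are illustrative assumptions, not the paper's estimator.

```python
import cmath
import math
import random

def track_phase(ys, sigma_n, sigma_w, n_particles=500, seed=2):
    """Sequential importance resampling for a random-walk phase theta_k
    observed through y_k = exp(j*theta_k) + complex Gaussian noise;
    returns the circular mean of the final particle cloud."""
    rng = random.Random(seed)
    thetas = [rng.uniform(-math.pi, math.pi) for _ in range(n_particles)]
    for y in ys:
        # Propagate particles through the random-walk prior.
        thetas = [t + rng.gauss(0.0, sigma_w) for t in thetas]
        # Weight by the Gaussian likelihood of the observation.
        ws = [math.exp(-abs(y - cmath.exp(1j * t)) ** 2 / (2 * sigma_n ** 2))
              for t in thetas]
        total = sum(ws)
        # Multinomial resampling proportional to the weights.
        thetas = rng.choices(thetas, weights=[w / total for w in ws],
                             k=n_particles)
    return cmath.phase(sum(cmath.exp(1j * t) for t in thetas))

# Synthetic data with a known phase trajectory.
data_rng = random.Random(1)
theta, sigma_n, sigma_w = 0.3, 0.1, 0.02
truth, ys = [], []
for _ in range(100):
    theta += data_rng.gauss(0.0, sigma_w)
    truth.append(theta)
    ys.append(cmath.exp(1j * theta)
              + complex(data_rng.gauss(0.0, sigma_n),
                        data_rng.gauss(0.0, sigma_n)))
est = track_phase(ys, sigma_n, sigma_w)
```

Rao-Blackwellization, mentioned in the abstract, would integrate out part of the state analytically instead of sampling it, lowering the variance of this basic scheme.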
Sampling from a polytope and hard-disk Monte Carlo
International Nuclear Information System (INIS)
Kapfer, Sebastian C; Krauth, Werner
2013-01-01
The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
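The local Markov-chain Monte Carlo move for hard disks, as proposed by Metropolis et al., is a small random displacement rejected whenever it would create an overlap; a minimal sketch with illustrative parameters (periodic unit box, low density):

```python
import random

def min_image_dist2(a, b, box):
    """Squared minimum-image distance between two points in a periodic box."""
    dx = (a[0] - b[0] + box / 2) % box - box / 2
    dy = (a[1] - b[1] + box / 2) % box - box / 2
    return dx * dx + dy * dy

def hard_disk_mc(n_side=4, box=1.0, sigma=0.1, delta=0.05,
                 sweeps=200, seed=4):
    """Local Metropolis moves for hard disks of diameter sigma: propose a
    displacement, reject it if any pair comes closer than sigma."""
    rng = random.Random(seed)
    a = box / n_side
    disks = [((i + 0.5) * a, (j + 0.5) * a)
             for i in range(n_side) for j in range(n_side)]
    n = len(disks)
    for _ in range(sweeps * n):
        k = rng.randrange(n)
        x, y = disks[k]
        trial = ((x + rng.uniform(-delta, delta)) % box,
                 (y + rng.uniform(-delta, delta)) % box)
        if all(min_image_dist2(trial, disks[m], box) >= sigma * sigma
               for m in range(n) if m != k):
            disks[k] = trial
    return disks

disks = hard_disk_mc()
```

In the polytope picture of the paper, each accepted move is one step of a random walk inside the high-dimensional region of non-overlapping configurations.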
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the ``fixed-source`` case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated (``replicated``) over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-01-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
Biases in Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Gelbard, E.M.
1992-01-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the ''fixed-source'' case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated (''replicated'') over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here
Directory of Open Access Journals (Sweden)
Hammam Oktajianto
2014-12-01
Full Text Available Gas-cooled nuclear reactors are Generation IV designs that have been receiving significant attention due to desirable characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The pebble-bed high temperature reactor (HTR), one type of gas-cooled reactor concept, is attracting particular attention. In an HTR pebble-bed design, the radius and enrichment of the fuel kernel are key parameters that can be chosen freely to reach the desired criticality. This paper models a 10 MW pebble-bed HTR and determines the combinations of fuel (kernel) enrichment and radius that make the reactor critical. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles on a simple-cubic (SC) lattice, while the fuel and moderator pebbles were distributed in the core zone on a body-centred cubic lattice. Fresh fuel was assumed, with the enrichment varied from 7 to 17% in 1% steps and the kernel radius from 175 to 300 µm in 25 µm steps. The geometrical model of the full reactor was obtained using the lattice and universe facilities provided by MCNP4C; the details of the model are discussed together with the necessary simplifications. Criticality calculations were conducted with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the results it can be concluded that criticality is achieved with enrichments of 15-17% at a kernel radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm, and 10-13% at 300 µm, so these combinations of enrichment and kernel radius can be considered for the HTR 10 MW. Keywords—MCNP4C, HTR, enrichment, radius, criticality
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to examine the randomness of the numbers produced by various methods. To account for the weight function involved in the Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular patterns in the generated numbers, indicating that the generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen so that the models have known exact or approximate solutions, against which the calculations using the Monte Carlo method can be compared. The comparisons show good agreement for the models considered.
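The Metropolis step for sampling with a weight function can be sketched for the standard Gaussian weight exp(-x²/2); an illustrative random-walk sampler, not the paper's experiment:

```python
import math
import random

def metropolis_gaussian(n, step=1.0, burn=1_000, seed=0):
    """Random-walk Metropolis sampling from the unnormalized weight
    w(x) = exp(-x**2 / 2); acceptance uses the ratio w(x')/w(x), so the
    normalization constant is never needed."""
    rng = random.Random(seed)
    x = 0.0
    out = []
    for i in range(n + burn):
        trial = x + rng.uniform(-step, step)
        if rng.random() < math.exp((x * x - trial * trial) / 2.0):
            x = trial
        if i >= burn:
            out.append(x)
    return out

xs = metropolis_gaussian(200_000)
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean * mean
```

The sample mean and variance converge to 0 and 1, the moments of the target distribution, which is the kind of statistical-law check the abstract describes.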
Cost of splitting in Monte Carlo transport
International Nuclear Information System (INIS)
Everett, C.J.; Cashwell, E.D.
1978-03-01
In a simple transport problem designed to estimate transmission through a plane slab of x free paths by Monte Carlo methods, it is shown that m-splitting (m ≥ 2) does not pay unless exp(x) > m(m + 3)/(m - 1). In such a case, the minimum total cost in terms of machine time is obtained as a function of m, and the optimal value of m is determined.
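The payoff condition can be checked numerically; a minimal sketch of the slab thickness beyond which m-splitting pays, directly from the stated criterion exp(x) > m(m + 3)/(m - 1):

```python
import math

def splitting_threshold(m):
    """Smallest slab thickness x (in free paths) for which m-splitting
    pays, from the condition exp(x) > m * (m + 3) / (m - 1)."""
    if m < 2:
        raise ValueError("splitting requires m >= 2")
    return math.log(m * (m + 3) / (m - 1))

x2 = splitting_threshold(2)  # ln(10), about 2.30 free paths
x3 = splitting_threshold(3)  # ln(9),  about 2.20 free paths
```

Evaluating the criterion over integer m shows the threshold is lowest at m = 3 (ln 9 ≈ 2.20 free paths), so splitting only pays in fairly thick slabs.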
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-01
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
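The telescoping construction can be sketched for the simplest example, estimating E[S_T] of a geometric Brownian motion with coupled Euler discretizations. The SDE, parameters, and level schedule are illustrative assumptions, not from the paper, which treats the harder SMC coupling.

```python
import math
import random

def coupled_euler_pair(rng, r, sig, s0, T, nf):
    """One fine path (nf steps) and one coarse path (nf/2 steps) of
    dS = r*S dt + sig*S dW, driven by the same Brownian increments."""
    h = T / nf
    sf = sc = s0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(h))
        dw2 = rng.gauss(0.0, math.sqrt(h))
        sf += r * sf * h + sig * sf * dw1
        sf += r * sf * h + sig * sf * dw2
        sc += r * sc * (2 * h) + sig * sc * (dw1 + dw2)
    return sf, sc

def mlmc_mean(r=0.05, sig=0.2, s0=1.0, T=1.0, levels=4, n=20_000, seed=7):
    """Multilevel estimator: mean of the coarsest approximation plus the
    mean of the coupled fine-coarse differences at each finer level."""
    rng = random.Random(seed)
    # Level 0: a single Euler step.
    est = sum(s0 + r * s0 * T + sig * s0 * rng.gauss(0.0, math.sqrt(T))
              for _ in range(n)) / n
    for l in range(1, levels + 1):
        total = 0.0
        for _ in range(n):  # E[P_l - P_{l-1}] from coupled samples
            sf, sc = coupled_euler_pair(rng, r, sig, s0, T, 2 ** l)
            total += sf - sc
        est += total / n
    return est

estimate = mlmc_mean()  # analytic answer: s0 * exp(r * T)
```

Because the coupled differences have small variance, the fine levels need far fewer samples than a single-level estimator of the same accuracy; a production MLMC code would choose per-level sample sizes accordingly rather than use a fixed n.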
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
International Nuclear Information System (INIS)
Borg, M.; Badr, I.; Royle, G. J.
2013-01-01
Modern full-field digital mammography (FFDM) units display the mean glandular dose (MGD) and the entrance or incident air kerma (K) to the breast following each exposure. Information on how these values are calculated is limited and knowing how displayed MGD values compare and correlate to conventional Monte-Carlo-based methods is useful. From measurements done on polymethyl methacrylate (PMMA) phantoms, it has been shown that displayed and calculated MGD values are similar for thin to medium thicknesses and appear to differ with larger PMMA thicknesses. As a result, a multiple linear regression analysis on the data was performed to generate models by which displayed MGD values on the two FFDM units included in the study may be converted to the Monte-Carlo values calculated by conventional methods. These models should be a useful tool for medical physicists requiring MGD data from FFDM units included in this paper and should reduce the survey time spent on dose calculations. (authors)
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay; Law, Kody; Suciu, Carina
2017-01-01
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay
2017-04-24
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Monte Carlo theory and practice
International Nuclear Information System (INIS)
James, F.
1987-01-01
Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
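The equivalence described above, between a desired deterministic result and the expected behaviour of a stochastic system, can be sketched with a minimal example: a definite integral recast as the expectation of a random variable. All names and sample counts here are illustrative.

```python
import math
import random

def mc_integral(f, n_samples, seed=0):
    """Estimate the integral of f over [0, 1] as the sample mean of f(U),
    U ~ Uniform(0, 1): a deterministic problem solved via a stochastic one."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random())
    return total / n_samples

# Deterministic target: integral of exp(x) on [0, 1], exactly e - 1
estimate = mc_integral(math.exp, 100_000)
exact = math.e - 1.0
```

The error of such a direct-simulation estimate shrinks as the inverse square root of the sample count, independent of dimension, which is the property that makes the method competitive with deterministic quadrature in high dimensions.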
Combinatorial nuclear level density by a Monte Carlo method
International Nuclear Information System (INIS)
Cerf, N.
1994-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
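The Metropolis sampling scheme mentioned above can be sketched generically: a random walk whose stationary distribution is proportional to an unnormalized weight, so states are visited without enumerating the space. The Gaussian target and proposal below are illustrative stand-ins, not the shell-model weights of the paper.

```python
import math
import random

def metropolis(log_weight, proposal, x0, n_steps, seed=0):
    """Generic Metropolis sampler: accept a proposed move y from x with
    probability min(1, w(y)/w(x)); only weight *ratios* are ever needed,
    so the normalization (here, the level density analogue) can be unknown."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = proposal(x, rng)
        if math.log(rng.random() + 1e-300) < log_weight(y) - log_weight(x):
            x = y                      # accept; otherwise keep the old state
        samples.append(x)
    return samples

# Toy target: standard normal, sampled without knowing its normalization.
chain = metropolis(lambda x: -0.5 * x * x,
                   lambda x, rng: x + rng.uniform(-1.0, 1.0),
                   0.0, 50_000)
mean = sum(chain) / len(chain)
```

Successive states are correlated, so error estimates must account for the autocorrelation of the chain rather than treating the samples as independent.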
Quantum Monte Carlo for vibrating molecules
International Nuclear Information System (INIS)
Brown, W.R.; Lawrence Berkeley National Lab., CA
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis-function parameter optimization, on both serial and parallel computers. Different trial wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C3 PESs suggests that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies
International Nuclear Information System (INIS)
Mainegra, E.; Capote, R.
1997-01-01
A methodology was developed for the characterization of scintillation detectors with NaI(Tl) crystals, based on Monte Carlo simulation with the Electron-Gamma-Shower system, version 4 (EGS4). The simulation took into account the aluminum cover of the crystal and the protective housing of the detection system. The experimental spectrum was reproduced accurately, except at energies below the backscatter peak. This divergence is explained by the omission of the real dimensions of the source, and hence of the scattering of the gamma radiation within it. (author)
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
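The telescoping identity underlying MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], can be sketched with a toy stand-in for the PDE solve. The grid-rounding "discretization" below is purely illustrative; the point is that each correction term is estimated on coupled samples driven by the same randomness, so its variance shrinks with level.

```python
import math
import random

def mlmc(sample_level, L, n_per_level, seed=0):
    """Multilevel Monte Carlo via the telescoping identity: estimate the
    coarsest level plus a sequence of fine-minus-coarse corrections, each
    on coupled (same driving randomness) samples with its own sample count."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(n_per_level[l]):
            u = rng.random()                       # shared driver couples levels
            fine = sample_level(u, l)
            coarse = sample_level(u, l - 1) if l > 0 else 0.0
            acc += fine - coarse
        total += acc / n_per_level[l]
    return total

# Toy stand-in for a step-size-h_l solve: the level-l approximation of
# E[exp(U)] rounds U to a grid of spacing h_l = 2**-l.
def sample_level(u, l):
    h = 2.0 ** (-l)
    return math.exp(h * round(u / h))

est = mlmc(sample_level, L=6,
           n_per_level=[40000, 20000, 10000, 5000, 2500, 1200, 600])
exact = math.e - 1.0
```

Note how the sample counts decrease with level: most of the work is done on the cheap coarse level, while the expensive fine levels only need a few samples because the corrections are small.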
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
International Nuclear Information System (INIS)
Perlado, J. Manuel; Marian, Jaime; Sanz, Jesus Garcia
2000-01-01
Validating state-of-the-art methods used to predict fluence exposure to reactor pressure vessels (RPVs) has become an important issue in identifying the sources of uncertainty in the estimated RPV fluence for pressurized water reactors. This is a very important aspect in evaluating irradiation damage leading to the hardening and embrittlement of such structural components. One of the major benchmark experiments carried out to test three-dimensional methodologies is the VENUS-3 Benchmark Experiment, in which three-dimensional Monte Carlo and Sn codes have proved more efficient than synthesis methods. At the Instituto de Fusion Nuclear (DENIM) at the Universidad Politecnica de Madrid, a detailed full three-dimensional model of the Venus Critical Facility has been developed making use of the Monte Carlo transport code MCNP4B. The problem geometry and source modeling are described, and results, including calculated versus experimental (C/E) ratios as well as additional studies, are presented. Evidence was found that the great majority of C/E values fell within the 10% tolerance and most within 5%. Tolerance limits are discussed on the basis of evaluated data library and fission spectra sensitivity, where a value ranging between 10 and 15% should be accepted. Also, a calculation of the atomic displacement rate has been carried out in various locations throughout the reactor, finding that values of 0.0001 displacements per atom in external components such as the core barrel are representative of this type of reactor during a 30-yr time span.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-window biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning, and functional-expansion tallies were used to investigate higher-order spatial representations.
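The weight-window game referred to above can be sketched in miniature: particles above the window are split into lighter copies, particles below it play Russian roulette, and the expected total weight is preserved so the game is unbiased. The window bounds, survival weight (taken at the window midpoint) and particle weights below are illustrative choices, not the settings of any particular code.

```python
import random

def apply_weight_window(weights, w_low, w_high, rng):
    """Weight-window biasing: split heavy particles, roulette light ones,
    keeping the expected total weight unchanged (an unbiased game)."""
    w_mid = 0.5 * (w_low + w_high)
    out = []
    for w in weights:
        if w > w_high:
            n = max(2, round(w / w_mid))       # split into n lighter copies
            out.extend([w / n] * n)            # total weight exactly preserved
        elif w < w_low:
            if rng.random() < w / w_mid:       # survive with p = w / w_mid,
                out.append(w_mid)              # at weight w_mid: E[w] preserved
        else:
            out.append(w)                      # inside the window: untouched
    return out

rng = random.Random(7)
before = [0.01, 0.5, 10.0] * 2000              # under, inside, over the window
after = apply_weight_window(before, 0.2, 1.0, rng)
```

Splitting concentrates work on important (heavy) histories while roulette stops wasting time on unimportant ones, which is exactly the population control a weight window provides.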
Development and application of the automated Monte Carlo biasing procedure in SAS4
International Nuclear Information System (INIS)
Tang, J.S.; Broadhead, B.L.
1995-01-01
An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. (author)
Monte Carlo tree search strategies
VODOPIVEC, TOM
2018-01-01
After the breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods sparked rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...
Control Variates for Monte Carlo Valuation of American Options
DEFF Research Database (Denmark)
Rasmussen, Nicki S.
2005-01-01
This paper considers two applications of control variates to the Monte Carlo valuation of American options. The main contribution of the paper lies in the particular choice of a control variate for American or Bermudan options. It is shown that for any martingale process used as a control variate ... technique is used for improving the least-squares Monte Carlo (LSM) approach for determining exercise strategies. The suggestions made allow for more efficient estimation of the continuation value, used in determining the strategy. An additional suggestion is made in order to improve the stability ...
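The control-variate idea above can be sketched on a toy integrand rather than an option payoff: subtract a correlated quantity with known mean to cancel noise. Here f = exp on a uniform variable, the control is the variable itself (known mean 0.5), and the coefficient beta is a fixed illustrative choice, not the paper's martingale construction.

```python
import math
import random

def cv_estimate(n, seed=0):
    """Estimate E[exp(U)], U ~ Uniform(0,1), plainly and with the control
    variate g(U) = U - 0.5 (mean zero), using a fixed coefficient beta."""
    rng = random.Random(seed)
    beta = math.e - 1.0                      # near-optimal slope for exp on [0,1]
    plain, adjusted = [], []
    for _ in range(n):
        u = rng.random()
        f = math.exp(u)
        plain.append(f)
        adjusted.append(f - beta * (u - 0.5))  # same mean, much less noise
    def stats(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    return stats(plain), stats(adjusted)

(plain_mean, plain_var), (cv_mean, cv_var) = cv_estimate(50_000)
```

Both estimators are unbiased for e − 1; the adjusted one has far smaller variance because exp(U) and U are strongly correlated, which is the whole value of a good control variate.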
International Nuclear Information System (INIS)
Erben, O.
1980-01-01
The coefficients of thermal and epithermal neutron flux density depression and self-shielding for SPN detectors with vanadium, rhodium, silver and cobalt emitters are presented (for cobalt SPN detectors, the functions describing the absorption of neutrons along the emitter cross-section are also shown). Using these coefficients and previously published beta-particle escape efficiencies, sensitivities are determined for the principal types of detectors produced by the Les Cables de Lyon and SODERN companies. The experiments and the results verifying the validity of the theoretical work are described. (author)
Energy Technology Data Exchange (ETDEWEB)
Riley, Jr, J E; Lindstrom, R M
1987-01-01
Major levels of boron in borosilicate glasses were determined nondestructively by neutron activation analysis. The effects of neutron self-shielding by boron (1 to 8% by weight) are examined. Results of the analysis of a series of glasses with increasing boron content are 1.150 ± 0.005% and 7.766 ± 0.035% for the low and high members of the series. Once analyzed, the glasses are useful as secondary standards for alpha-track counting, and also for ion and electron microprobe analyses of glasses. 12 refs.; 3 tables.
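As a first-order illustration of the self-shielding corrections discussed in these records, the textbook slab approximation gives the ratio of the average in-sample flux to the unperturbed flux under one-directional exponential attenuation, f = (1 − e^(−x))/x with x = Σt·t. This is a generic sketch, not the specific correction used in any of the papers above, and the numbers are hypothetical.

```python
import math

def slab_self_shielding(sigma_t, thickness):
    """Flux self-shielding factor for a slab sample in a one-directional
    beam: average in-sample flux divided by the unperturbed surface flux,
    f = (1 - exp(-x)) / x, where x = sigma_t * thickness (optical depth)."""
    x = sigma_t * thickness
    if x < 1e-12:
        return 1.0              # optically thin sample: no flux depression
    return (1.0 - math.exp(-x)) / x

# Hypothetical sample: macroscopic cross section 5 /cm, 1 mm thick slab
f = slab_self_shielding(5.0, 0.1)
```

For strongly absorbing samples such as the boron-rich glasses above, f drops well below 1, which is why the measured activity must be divided by such a factor before comparison with a thin standard.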
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
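The per-cycle rendezvous described above can be made concrete with a toy serial criticality iteration: every cycle ends with a global reduction (collect the fission bank, estimate k) before the next cycle can start, which is exactly the synchronization point that caps parallel speedup. The infinite-medium physics and all parameter values are illustrative, so the expected k is simply p_fission × nu.

```python
import random

def toy_criticality(n_per_cycle, n_cycles, p_fission=0.4, nu=2.5, seed=0):
    """Toy cycle structure of a criticality calculation: track a generation,
    bank its fission neutrons, then hit the cycle-end 'rendezvous' where the
    bank is collected, k is estimated, and the population is renormalized."""
    rng = random.Random(seed)
    bank = n_per_cycle
    k_estimates = []
    for _ in range(n_cycles):
        fission_neutrons = 0
        for _ in range(bank):
            if rng.random() < p_fission:   # history ends in fission
                # integer part of nu plus a Bernoulli for the fraction
                fission_neutrons += int(nu) + (rng.random() < nu - int(nu))
        # --- rendezvous: global reduction over all workers would go here ---
        k_estimates.append(fission_neutrons / bank)
        bank = n_per_cycle                 # population control: renormalize
    return sum(k_estimates) / len(k_estimates)

k = toy_criticality(5000, 40)              # expected k = 0.4 * 2.5 = 1.0
```

In a real MPI code each worker would run a slice of the inner loop independently, but the cycle boundary still forces all processors to wait for the slowest one and exchange the fission source, which is where the efficiency loss analyzed in the paper originates.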
Exact Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Dania Soguero; Ardanza, Armando Chavez, E-mail: sdania@ceaden.edu.cu [Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear (CEADEN), La Habana (Cuba)
2013-07-01
This paper describes the process of installation of a category I self-shielded irradiator, model ISOGAMMA LL.Co, loaded with {sup 60}Co of 25 kCi nominal activity, an absorbed dose rate of 8 kGy/h and a 5 L working volume. The stages are described step by step: import; the customs procedure, which included an interview with the master of the transport vessel; monitoring of the entire process by the head of radiological protection of the importing centre; control of the surface contamination levels of the shipping container of the sources before removal from the ship; supervision by the national regulatory authority; and transportation to the final destination. Details of the assembly of the installation and the opening of the transport container are outlined. The action plan previously developed for the occurrence of radiological events is presented, detailing the loading of the radioactive sources by specialists from the company selling the facility (IZOTOP). Finally, the commissioning of the installation and the licensing procedure for operation are described.
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
Monte Carlo simulation of virtual compton scattering at MAMI
International Nuclear Information System (INIS)
D'Hose, N.; Ducret, J.E.; Gousset, TH.; Guichon, P.A.M.; Kerhoas, S.; Lhuillier, D.; Marchand, C.; Marchand, D.; Martino, J.; Mougey, J.; Roche, J.; Vanderhaeghen, M.; Vernin, P.; Bohm, H.; Distler, M.; Edelhoff, R.; Friedrich, J.M.; Geiges, R.; Jennewein, P.; Kahrau, M.; Korn, M.; Kramer, H.; Krygier, K.W.; Kunde, V.; Liesenfeld, A.; Merkel, H.; Merle, K.; Neuhausen, R.; Pospischil, TH.; Rosner, G.; Sauer, P.; Schmieden, H.; Schardt, S.; Tamas, G.; Wagner, A.; Walcher, TH.; Wolf, S.; Hyde-Wright, CH.; Boeglin, W.U.; Van de Wiele, J.
1996-01-01
The Monte Carlo simulation developed specially for the VCS experiments taking place at MAMI is fully described. This simulation can generate events according to the Bethe-Heitler + Born cross-section behaviour and takes into account resolution-deteriorating effects. It is used to determine solid angles for the various experimental settings. (authors)
A Monte Carlo Sampling Technique for Multi-phonon Processes
Energy Technology Data Exchange (ETDEWEB)
Hoegberg, Thure
1961-12-15
A sampling technique for selecting scattering angle and energy gain in Monte Carlo calculations of neutron thermalization is described. It is supposed that the scattering is separated into processes involving different numbers of phonons. The number of phonons involved is first determined. Scattering angle and energy gain are then chosen by using special properties of the multi-phonon term.
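The two-stage structure described above, first determine the number of phonons, then choose angle and energy gain conditionally, can be sketched generically. The per-process probabilities and the conditional energy model below are purely illustrative placeholders, not the thermalization kernels of the report.

```python
import random

def sample_scattering(phonon_probs, energy_given_n, rng):
    """Two-stage sampling: pick the number of phonons n by inverse-CDF over
    the per-process probabilities, then draw the energy gain from a
    (hypothetical) distribution conditional on n."""
    u = rng.random()
    cum = 0.0
    for i, p in enumerate(phonon_probs):
        cum += p
        if u < cum:
            n = i + 1                    # index i -> the (i+1)-phonon process
            break
    else:
        n = len(phonon_probs)            # guard against rounding at u ~ 1
    return n, energy_given_n(n, rng)

rng = random.Random(3)
probs = [0.6, 0.3, 0.1]                  # illustrative 1/2/3-phonon weights
# Toy conditional model: energy gain = sum of n unit-mean 'phonon energies'
draws = [sample_scattering(probs,
                           lambda n, r: sum(r.expovariate(1.0) for _ in range(n)),
                           rng)
         for _ in range(20000)]
mean_n = sum(n for n, _ in draws) / len(draws)
mean_e = sum(e for _, e in draws) / len(draws)
```

Decomposing the kernel this way lets each term be sampled by whatever special property it has, which is the essence of the scheme described in the abstract.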
Monte Carlo simulation of the seed germination process
International Nuclear Information System (INIS)
Gladyszewska, B.; Koper, R.
2000-01-01
The paper presents a mathematical model of the seed germination process based on the Monte Carlo method and on theoretical premises resulting from the physiology of seed germination, which suggests three consecutive stages: physical, biochemical and physiological. The model was verified experimentally by determining the germination characteristics of seeds of ground tomatoes, Promyk cultivar, within a broad range of temperatures (from 15 to 30 deg C).
Osmotic pressure of ring polymer solutions : A Monte Carlo study
Flikkema, Edwin; Brinke, Gerrit ten
2000-01-01
Using the wall theorem, the osmotic pressure of ring polymers in solution has been determined using an off-lattice topology conserving Monte Carlo algorithm. The ring polymers are modeled as freely-jointed chains with point-like beads, i.e., under conditions corresponding to θ-conditions for the
International Nuclear Information System (INIS)
Abánades, A.; Álvarez-Velarde, F.; González-Romero, E.M.; Ismailov, K.; Lafuente, A.; Nishihara, K.; Saito, M.; Stanculescu, A.; Sugawara, T.
2013-01-01
Highlights: ► TARC experiment benchmark capture rates results. ► Utilization of updated databases, including ADSLib. ► Self-shielding effect in reactor design for transmutation. ► Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron capture in 99Tc, 127I and 129I. The analysis of the results shows how self-shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rates. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.
Energy Technology Data Exchange (ETDEWEB)
Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)
2013-01-15
Highlights: ► TARC experiment benchmark capture rates results. ► Utilization of updated databases, including ADSLib. ► Self-shielding effect in reactor design for transmutation. ► Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron capture in {sup 99}Tc, {sup 127}I and {sup 129}I. The analysis of the results shows how self-shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rates. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.
Scouting the feasibility of Monte Carlo reactor dynamics simulations
International Nuclear Information System (INIS)
Legrady, David; Hoogenboom, J. Eduard
2008-01-01
In this paper we present an overview of the methodological questions related to Monte Carlo simulation of time dependent power transients in nuclear reactors. Investigations using a small fictional 3D reactor with isotropic scattering and a single energy group we have performed direct Monte Carlo transient calculations with simulation of delayed neutrons and with and without thermal feedback. Using biased delayed neutron sampling and population control at time step boundaries calculation times were kept reasonably low. We have identified the initial source determination and the prompt chain simulations as key issues that require most attention. (authors)
Scouting the feasibility of Monte Carlo reactor dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Legrady, David [Forschungszentrum Dresden-Rossendorf, Dresden (Germany); Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)
2008-07-01
In this paper we present an overview of the methodological questions related to Monte Carlo simulation of time dependent power transients in nuclear reactors. Investigations using a small fictional 3D reactor with isotropic scattering and a single energy group we have performed direct Monte Carlo transient calculations with simulation of delayed neutrons and with and without thermal feedback. Using biased delayed neutron sampling and population control at time step boundaries calculation times were kept reasonably low. We have identified the initial source determination and the prompt chain simulations as key issues that require most attention. (authors)
Results of the Monte Carlo 'simple case' benchmark exercise
International Nuclear Information System (INIS)
2003-11-01
A new 'simple case' benchmark intercomparison exercise was launched, intended to study the importance of the fundamental nuclear data constants, physics treatments and geometry model approximations employed by Monte Carlo codes in common use. The exercise was also directed at determining the level of agreement which can be expected between measured and calculated quantities, using current state-of-the-art modelling codes and techniques. To this end, measurements and Monte Carlo calculations of the total (or gross) neutron count rates have been performed using a simple moderated 3He cylindrical proportional counter array, or 'slab monitor', counting geometry; a very simple geometry was deliberately selected for this exercise.
Proton therapy analysis using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Noshad, Houshyar [Center for Theoretical Physics and Mathematics, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of)]. E-mail: hnoshad@aeoi.org.ir; Givechi, Nasim [Islamic Azad University, Science and Research Branch, Tehran (Iran, Islamic Republic of)
2005-10-01
The range and straggling data obtained from the Transport of Ions in Matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue by means of the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy, as well as the dose deposited in the tissue, were computed. The good agreement between our results and the corresponding experimental values is presented to show the reliability of our Monte Carlo method.
Monte Carlo simulation of the ARGO
International Nuclear Information System (INIS)
Depaola, G.O.
1997-01-01
We use the GEANT Monte Carlo code to design an outline of the geometry and simulate the performance of the Argentine gamma-ray observer (ARGO), a telescope based on silicon strip detector technology. The γ-ray direction is determined by geometrical means and the angular resolution is calculated for small variations of the basic design. The results show that the angular resolution varies from a few degrees at low energies (~50 MeV) to approximately 0.2° at high energies (>500 MeV). We also made simulations using as incoming γ-rays the energy spectra of the PKS 0208-512 and PKS 0528+134 quasars. Moreover, a method based on multiple scattering theory is used to determine the incoming energy. We show that this method is applicable to an energy spectrum. (orig.)
Directory of Open Access Journals (Sweden)
Bahram Andarzian
2015-06-01
Wheat production in the south of Khuzestan, Iran is constrained by heat stress for late sowing dates. For optimization of yield, sowing at the appropriate time to fit the cultivar maturity length and growing season is critical. Crop models can be used to determine the optimum sowing window for a locality. The objectives of this study were to evaluate the Cropping System Model (CSM-CERES-Wheat) for its ability to simulate growth, development and grain yield of wheat in the tropical regions of Iran, and to study the impact of different sowing dates on wheat performance. The genetic coefficients of cultivar Chamran were calibrated for the CSM-CERES-Wheat model and crop model performance was evaluated with experimental data. Wheat cultivar Chamran was sown on different dates, ranging from 5 November to 9 January, during 5 years of field experiments conducted in the Khuzestan province, Iran, under full and deficit irrigation conditions. The model was run for 8 sowing dates starting on 25 October and repeated every 10 days until 5 January, using long-term historical weather data from the Ahvaz, Behbehan, Dezful and Izeh locations. The seasonal analysis program of DSSAT was used to determine the optimum sowing window for the different locations as well. Evaluation with the experimental data showed that the performance of the model was reasonable, as indicated by fairly accurate simulation of crop phenology, biomass accumulation and grain yield against measured data. The normalized RMSE were 3%, 2%, 11.8%, and 3.4% for anthesis date, maturity date, grain yield and biomass, respectively. The optimum sowing window differed among locations: it opened and closed on 5 November and 5 December for Ahvaz; 5 November and 15 December for Behbehan and Dezful; and 1 November and 15 December for Izeh, respectively. The CERES-Wheat model can be used as a tool to evaluate the effect of sowing date on wheat performance in Khuzestan conditions. Further model evaluations
Energy Technology Data Exchange (ETDEWEB)
Dickens, J.K.
1988-04-01
This document provides a discussion of the development of the FORTRAN Monte Carlo program SCINFUL (for scintillator full response), a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The program may also be used to compute angle-integrated spectra of charged particles (p, d, t, ³He, and α) following neutron interactions with ¹²C. Extensive comparisons with a variety of experimental data are given. There is generally good overall agreement (<10% differences) between results from SCINFUL calculations and measured detector responses, i.e., N(E_r) vs E_r, where E_r is the response pulse height; the calculations reproduce measured detector responses with an accuracy which, at least partly, depends upon how well the experimental configuration is known. For E_n < 16 MeV and for E_r > 15% of the maximum pulse height response, calculated spectra are within ±5% of experiment on the average. For E_n up to 50 MeV similar good agreement is obtained with experiment for E_r > 30% of maximum response. For E_n up to 75 MeV the calculated shape of the response agrees with measurements, but the calculations underpredict the measured response by up to 30%. 65 refs., 64 figs., 3 tabs.
Isotopic depletion with Monte Carlo
International Nuclear Information System (INIS)
Martin, W.R.; Rathkopf, J.A.
1996-06-01
This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation.
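The idea of folding a Monte Carlo flux estimate into the analytic depletion solution can be sketched for a single absorber obeying dN/dt = -σφN. This is a minimal illustration with made-up numbers, not the authors' implementation: the flux "scores" are simulated noise rather than real tallies.

```python
import math
import random

def depletion_step(n0, sigma, phi_samples, dt):
    """Advance one absorber's number density over a burnup step.

    The Monte Carlo flux estimate (the mean of per-history scores) is
    inserted into the analytic solution N(t + dt) = N(t) exp(-sigma*phi*dt)
    of dN/dt = -sigma*phi*N.  Unlike a forward-Euler update
    N(t + dt) = N(t) * (1 - sigma*phi*dt), this can never go negative.
    """
    phi = sum(phi_samples) / len(phi_samples)   # conventional flux estimator
    return n0 * math.exp(-sigma * phi * dt)

random.seed(1)
n, sigma, dt = 1.0, 5.0, 0.1
for _ in range(20):
    # hypothetical per-history flux scores, carrying statistical noise
    scores = [random.gauss(2.0, 0.3) for _ in range(100)]
    n = depletion_step(n, sigma, scores, dt)
```

Even with large exposure per step (where forward Euler would go negative), the exponential form keeps the density strictly positive.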
Zimmerman, George B.
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
A contribution Monte Carlo method
International Nuclear Information System (INIS)
Aboughantous, C.H.
1994-01-01
A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in the direction cosine and azimuthal angle variables as well as in the position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results as the deterministic method, with a very small standard deviation, with as few as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes of CPU time
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Monte Carlo simulated dynamical magnetization of single-chain magnets
Energy Technology Data Exchange (ETDEWEB)
Li, Jun; Liu, Bang-Gui, E-mail: bgliu@iphy.ac.cn
2015-03-15
Here, a dynamical Monte Carlo (DMC) method is used to study the temperature-dependent dynamical magnetization of the famous Mn₂Ni system as a typical example of single-chain magnets with strong magnetic anisotropy. Simulated magnetization curves are in good agreement with experimental results under typical temperatures and sweeping rates, and simulated coercive fields as functions of temperature are also consistent with experimental curves. Further analysis indicates that the magnetization reversal is determined by both thermally activated effects and quantum spin tunneling. These results can help explore basic properties and applications of such important magnetic systems. - Highlights: • Monte Carlo simulated magnetization curves are in good agreement with experimental results. • Simulated coercive fields as functions of temperature are consistent with experimental results. • The magnetization reversal is understood in terms of the Monte Carlo simulations.
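The flavor of such a field-sweep simulation can be sketched with single-spin Metropolis dynamics on a 1D Ising chain. All parameters here are illustrative, and the paper's actual model includes strong uniaxial anisotropy and quantum tunneling terms not represented in this sketch; it only shows how sweeping the field and recording the magnetization produces a hysteresis-like response.

```python
import math
import random

def hysteresis_branch(n_spins=50, temp=1.0, j_coup=2.0, h_max=3.0,
                      rate=0.02, seed=5):
    """Sweep the field h from +h_max to -h_max over a 1D Ising chain with
    periodic boundaries, doing one Metropolis sweep per field step and
    recording (h, magnetization).  Slower sweeps give thermal activation
    more chances to nucleate the reversal, mimicking the rate dependence
    of single-chain-magnet coercivity."""
    rng = random.Random(seed)
    s = [1] * n_spins                      # start fully magnetized up
    branch = []
    h = h_max
    while h >= -h_max:
        for i in range(n_spins):
            nb = s[(i - 1) % n_spins] + s[(i + 1) % n_spins]
            dE = 2 * s[i] * (j_coup * nb + h)   # energy cost of flipping spin i
            if dE <= 0 or rng.random() < math.exp(-dE / temp):
                s[i] = -s[i]
        branch.append((h, sum(s) / n_spins))
        h -= rate
    return branch

loop = hysteresis_branch()
```

The chain stays near m = +1 while the field is positive and reverses only after h turns negative, since reversal requires a thermally activated nucleation followed by downhill domain-wall motion.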
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
Elements of Monte Carlo techniques
International Nuclear Information System (INIS)
Nagarajan, P.S.
2000-01-01
The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is widely used for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc.
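The core element listed above, pseudo-random sampling used to estimate a multidimensional integral, can be shown in a few lines. This is a generic sketch, not code from the reference:

```python
import random

def mc_integrate(f, dim, n):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube [0,1]^dim, using Python's pseudo-random sequence.  The
    statistical error shrinks like 1/sqrt(n) regardless of dimension,
    which is the method's advantage for multidimensional integration."""
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]   # one prs sample point
        total += f(x)
    return total / n

random.seed(0)
# exact value of the integral of x*y*z over [0,1]^3 is (1/2)^3 = 0.125
est = mc_integrate(lambda v: v[0] * v[1] * v[2], dim=3, n=200_000)
```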
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
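The uniform-discretization multilevel construction that this work generalizes can be sketched for a geometric Brownian motion. The parameters are illustrative, and the paper's adaptive, non-uniform time stepping is not reproduced; the sketch only shows the telescoping sum of coupled coarse/fine corrections that defines multilevel Monte Carlo.

```python
import math
import random

def mlmc_gbm(levels, n_per_level, mu=0.05, sigma=0.2, x0=1.0, T=1.0):
    """Multilevel forward-Euler Monte Carlo estimate of E[X_T] for the
    geometric Brownian motion dX = mu*X dt + sigma*X dW, with refinement
    factor 2.  Coarse and fine paths on each level share the same Brownian
    increments, which is the control-variate coupling behind MLMC."""
    est = 0.0
    for ell in range(levels + 1):
        nf = 2 ** ell                        # fine time steps on this level
        dtf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            xf = xc = x0
            dw_sum = 0.0
            for step in range(nf):
                dw = random.gauss(0.0, math.sqrt(dtf))
                xf += mu * xf * dtf + sigma * xf * dw
                dw_sum += dw
                if step % 2 == 1:            # coarse step uses summed increment
                    xc += mu * xc * (2.0 * dtf) + sigma * xc * dw_sum
                    dw_sum = 0.0
            acc += xf if ell == 0 else xf - xc   # level-0 value or correction
        est += acc / n_per_level
    return est

random.seed(2)
est = mlmc_gbm(levels=4, n_per_level=20_000)   # exact answer: x0 * exp(mu*T)
```

Most of the variance sits on the cheap coarse level, while the corrections between consecutive levels have small variance, which is what yields the improved cost-to-accuracy scaling quoted above.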
Geometrical splitting in Monte Carlo
International Nuclear Information System (INIS)
Dubi, A.; Elperin, T.; Dudziak, D.J.
1982-01-01
A statistical model is presented by which a direct statistical approach yields an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n-surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters, the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs
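Surface splitting can be illustrated on a purely absorbing slab, where the analog game and the split game estimate the same transmission probability. This is a schematic one-surface example with hypothetical parameters, not the n-surface model analyzed in the paper:

```python
import math
import random

def transmit(sigma_t, thickness, n, split_at=None, k=4):
    """Estimate the transmission probability exp(-sigma_t * thickness)
    through a purely absorbing slab.  With geometric splitting, a flight
    that crosses the plane `split_at` is stopped there and continued as k
    copies of weight w/k; exponential flight lengths are memoryless, so
    restarting at the surface keeps the estimator unbiased while putting
    more samples deep in the slab."""
    score = 0.0
    for _ in range(n):
        stack = [(0.0, 1.0, False)]            # (position, weight, split yet?)
        while stack:
            x, w, done = stack.pop()
            x_new = x - math.log(1.0 - random.random()) / sigma_t
            if split_at is not None and not done and x < split_at <= x_new:
                for _ in range(k):             # split at the surface
                    stack.append((split_at, w / k, True))
            elif x_new >= thickness:
                score += w                     # escaped through the far face
            # otherwise the particle is absorbed and the history ends
    return score / n

random.seed(3)
p_analog = transmit(1.0, 5.0, 100_000)
p_split = transmit(1.0, 5.0, 100_000, split_at=2.5)
```

Both estimators agree with exp(-5); the split version spends its extra histories beyond the surface, which is where the variance reduction comes from in deep-penetration problems.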
Extending canonical Monte Carlo methods
International Nuclear Information System (INIS)
Velazquez, L; Curilef, S
2010-01-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the consideration of the Gibbs canonical ensemble, to account for the existence of an anomalous regime with negative heat capacities C_α with α ≈ 0.2, for the particular case of the 2D ten-state Potts model
International Nuclear Information System (INIS)
Kennedy, D.C. II.
1987-01-01
This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures
Statistical implications in Monte Carlo depletions - 051
International Nuclear Information System (INIS)
Zhiwen, Xu; Rhodes, J.; Smith, K.
2010-01-01
As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper works toward an understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increase the number of individual Monte Carlo histories; 2) increase the number of time steps; 3) run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
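The batch depletion idea, running independent depletion cases and reading the overall statistical error off the spread of the batch, can be sketched with a toy one-nuclide model. All physics and parameters here are hypothetical; the point is only that independent replicas capture both the local and the propagated statistical error in the final density.

```python
import math
import random
import statistics

def one_depletion_case(seed, steps=10, sigma=1.0, dt=0.2, histories=500):
    """One hypothetical Monte Carlo depletion run: the flux estimate at
    each burnup step carries statistical noise (std ~ 1/sqrt(histories)),
    and that error propagates step by step into the number density."""
    rng = random.Random(seed)
    n = 1.0
    for _ in range(steps):
        phi = 1.0 + rng.gauss(0.0, 1.0 / math.sqrt(histories))  # noisy flux
        n *= math.exp(-sigma * phi * dt)
    return n

# Batch method: a set of independent depletions gives both the mean and
# the overall statistical error of the depleted number density.
batch = [one_depletion_case(seed) for seed in range(50)]
mean_n = statistics.mean(batch)
err_n = statistics.stdev(batch) / math.sqrt(len(batch))
```

With noiseless flux the final density would be exp(-2); the batch mean recovers this, and the batch spread quantifies the accumulated statistical error without any error-propagation formula.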
ZZ BOREHOLE-EB6.8-MG, multi group cross-section library for deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Kodeli, Ivo; Aldama, Daniel L.; Leege, Piet F.A. de; Legrady, David; Hoogenboom, J. Eduard
2007-01-01
1 - Description: Format: MATXS and ACE; Number of groups: 175 neutron, 45 gamma-ray; Nuclides: H-1, C-12, O-16, Na-23, Mg-nat, Al-27, Si-28, -29, -30, S-nat, Cl-35, -37, K-nat, Ca-nat, Mn-55, Fe-54, -56, -57, -58, I-127, W-nat. Origin: ENDF/B-VI.8; Weighting spectrum: fission and fusion peaks at high energies and a 1/E + thermal Maxwellian extension at low energies. ZZ-BOREHOLE-EB6.8-MG is a multigroup cross-section library for deterministic (DOORS, DANTSYS) and Monte Carlo (MCNP) transport codes developed for oil well logging applications. The library is based on the ENDF/B-VI.8 evaluation and was processed by the NJOY-99 code. The cross sections are given in the 175 neutron and 45 gamma-ray group structure. The MATXS-format library can be used directly in the TRANSX code to prepare multigroup self-shielded cross sections for deterministic discrete ordinates codes such as DOORS and DANTSYS. The data provided in the GROUPR and GAMINR formats were converted to the MCNP ACE format by the NSLINK, SCALE and CRSRD codes. IAEA1398/03: Multigroup cross section data for Mn-55 were added in TRANSX format
Monte Carlo calculation with unquenched Wilson-Fermions
International Nuclear Information System (INIS)
Montvay, I.
1984-01-01
A Monte Carlo updating procedure taking into account the virtual quark loops is described. It is based on a high-order hopping parameter expansion of the quark determinant for Wilson fermions. In a first test run, Wilson-loop expectation values are measured on a 6⁴ lattice at β=5.70 using a 16th-order hopping parameter expansion for the quark determinant. (orig.)
Monte Carlo simulations of low background detectors
International Nuclear Information System (INIS)
Miley, H.S.; Brodzinski, R.L.; Hensley, W.K.; Reeves, J.H.
1995-01-01
An implementation of the Electron Gamma Shower 4 code (EGS4) has been developed to allow convenient simulation of typical gamma ray measurement systems. Coincidence gamma rays, beta spectra, and angular correlations have been added to adequately simulate a complete nuclear decay and provide corrections to experimentally determined detector efficiencies. This code has been used to strip certain low-background spectra for the purpose of extremely low-level assay. Monte Carlo calculations of this sort can be extremely successful since low background detectors are usually free of significant contributions from poorly localized radiation sources, such as cosmic muons, secondary cosmic neutrons, and radioactive construction or shielding materials. Previously, validation of this code has been obtained from a series of comparisons between measurements and blind calculations. An example of the application of this code to an exceedingly low background spectrum stripping will be presented. (author) 5 refs.; 3 figs.; 1 tab
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Geometric Monte Carlo and black Janus geometries
Energy Technology Data Exchange (ETDEWEB)
Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)
2017-04-10
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Radiation Modeling with Direct Simulation Monte Carlo
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique, and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Cuartel San Carlos. A veteran site
Directory of Open Access Journals (Sweden)
Mariana Flores
2007-01-01
Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the late 18th century (1785-1790), marked by numerous adversities during its construction and by having withstood the earthquakes of 1812 and 1900. In 2006, the institution responsible for its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration, covering the back courtyard (Traspatio), the central courtyard (Patio Central), and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third field campaign carried out there. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic, and in the political events of the 20th century. A broad sample of archaeological materials was also found at the site, documenting everyday military life as well as the internal social dynamics that took place in the San Carlos as a strategic stronghold for the defense of the different regimes the country has passed through, from the era of Spanish imperialism to the present day.
Carlos Battilana: Professor, Administrator, Friend
Directory of Open Access Journals (Sweden)
José Pacheco
2009-12-01
Full Text Available The Editorial Committee of Anales has lost one of its most distinguished members. A brilliant teacher of our Faculty, Carlos Alberto Battilana Guanilo (1945-2009) knew how to convey knowledge and capture the attention of his audiences, whether young students or no-longer-so-young contemporaries. He drew his students toward lifelong learning and research, and he engaged distinguished physicians to form and lead groups devoted to science and friendship. His teaching vocation linked him to medical schools, academies, and scientific societies, where he coordinated fondly remembered courses and congresses. His scientific output was devoted to nephrology, immunology, cancer, and the costs of medical treatment. His managerial and leadership abilities, evident since his student days, allowed him to become regional director of a highly prestigious pharmaceutical laboratory, to organize a medical school, and later to serve as dean of the faculty of health sciences of that private university. Carlos was instrumental in Anales attaining a privileged position among Peruvian biomedical journals. In the profile we publish, we attempt to summarize briefly the career of Carlos Battilana, weeks after his departure without return.
Monte Carlo Particle Lists: MCPL
DEFF Research Database (Denmark)
Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik
2017-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...
Directory of Open Access Journals (Sweden)
Rafael Maya
1979-04-01
Full Text Available Among the poets of the Centenario, Luis Carlos López enjoyed great popularity abroad from the publication of his first book onward. I believe his work drew the attention of philosophers such as Unamuno and, if I am not mistaken, Darío referred to it in laudatory terms. In Colombia it has been praised hyperbolically by some, while others grant it no great merit.
International Nuclear Information System (INIS)
Valentine, T.E.; Mihalczo, J.T.
1996-01-01
One primary concern for design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between the core power production and the external detector response using Monte Carlo calculations and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between the core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Monte Carlo techniques in diagnostic and therapeutic nuclear medicine
International Nuclear Information System (INIS)
Zaidi, H.
2002-01-01
Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the medical physics
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov chain Monte Carlo methods.
Monte Carlo surface flux tallies
International Nuclear Information System (INIS)
Favorite, Jeffrey A.
2010-01-01
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
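The cutoff-and-substitute scoring described above can be sketched in a few lines. This is a toy illustration only (not the implementation in any production code); the cutoff value and the cosine-weighted test distribution are arbitrary choices:

```python
import random
import math

def surface_flux_tally(mus, cutoff=0.1, substitute_fraction=0.5):
    """Estimate a surface flux by scoring 1/|mu| per surface crossing.

    For grazing crossings (|mu| < cutoff) the score is 1/(substitute_fraction
    * cutoff) instead: substitute_fraction = 0.5 is the standard 'half the
    cutoff' practice; 2/3 is the value suggested for one-sided tallies.
    """
    score = 0.0
    for mu in mus:
        amu = abs(mu)
        if amu >= cutoff:
            score += 1.0 / amu
        else:
            score += 1.0 / (substitute_fraction * cutoff)
    return score / len(mus)

# Isotropic flux crossing a surface: mu = sqrt(u) gives cosine-weighted
# crossing cosines, for which the exact per-crossing mean of 1/mu is 2.
random.seed(1)
crossings = [math.sqrt(random.random()) for _ in range(200000)]
est_half = surface_flux_tally(crossings, substitute_fraction=0.5)
est_two_thirds = surface_flux_tally(crossings, substitute_fraction=2 / 3)
```

For a flux that is linear in μ near grazing (as in this cosine-weighted test case), the half-cutoff substitute preserves the mean exactly, which is why the symmetric-band assumption matters.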
International Nuclear Information System (INIS)
Macdonald, J.L.
1975-08-01
Statistical and deterministic pattern recognition systems are designed to classify the state space of a Monte Carlo transport problem into importance regions. The surfaces separating the regions can be used for particle splitting and Russian roulette in state space in order to reduce the variance of the Monte Carlo tally. Computer experiments are performed to evaluate the performance of the technique using one and two dimensional Monte Carlo problems. Additional experiments are performed to determine the sensitivity of the technique to various pattern recognition and Monte Carlo problem dependent parameters. A system for applying the technique to a general purpose Monte Carlo code is described. An estimate of the computer time required by the technique is made in order to determine its effectiveness as a variance reduction device. It is recommended that the technique be further investigated in a general purpose Monte Carlo code. (auth)
Monte Carlo simulations of neutron scattering instruments
International Nuclear Information System (INIS)
Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.
2001-01-01
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of the basic principles of the McStas software are discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-01-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed in the literature as a dynamic optimization algorithm. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration.
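As a reminder of the underlying idea, samples from any chain targeting a density π can be averaged to estimate ∫f(x)π(x)dx. The sketch below uses a plain random-walk Metropolis chain rather than SAMC itself, with a toy target and integrand:

```python
import random
import math

def metropolis_expectation(f, logpi, x0, steps, step_size, seed=0):
    """Estimate E_pi[f(X)] from a random-walk Metropolis chain targeting pi.

    A generic illustration of Monte Carlo integration with correlated
    samples; SAMC adds dynamic importance weights on top of a scheme
    like this one.
    """
    rng = random.Random(seed)
    x, total = x0, 0.0
    lp = logpi(x)
    for _ in range(steps):
        y = x + rng.uniform(-step_size, step_size)
        lq = logpi(y)
        if math.log(rng.random()) < lq - lp:   # Metropolis accept/reject
            x, lp = y, lq
        total += f(x)
    return total / steps

# Target: standard normal; integrand: x^2, so the true value is Var(X) = 1.
est = metropolis_expectation(lambda x: x * x,
                             lambda x: -0.5 * x * x,
                             x0=0.0, steps=200000, step_size=2.0)
```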
Uncertainty Propagation in Monte Carlo Depletion Analysis
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo
2008-01-01
A new formulation is presented that quantifies the uncertainties of Monte Carlo (MC) tallies, such as k_eff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and that examines their propagation behaviour as a function of depletion time step (DTS). It is shown that the variance of a given MC tally, used as the measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of the DTS, and these contributions are determined by computing correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances as well as k_∞ and its variance as a function of DTS, and demonstrated the applicability of the new formulation for the uncertainty propagation analysis that needs to be performed in MC depletion computations. (authors)
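The role of correlation coefficients in such a variance decomposition can be illustrated with a toy two-variable example: to first order, Var(N·σ) ≈ σ₀²Var(N) + N₀²Var(σ) + 2N₀σ₀ρ·sd(N)·sd(σ). The "replicas" below are synthetic draws with a shared error term (an assumption for illustration, not actual MC depletion tallies):

```python
import random
import math

def corr(xs, ys):
    """Sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    vx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    vy = sum((y - my) ** 2 for y in ys) / (n - 1)
    return sxy / math.sqrt(vx * vy)

def var(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

# Synthetic replicas of a number density N and cross section s that share a
# common error source, inducing a correlation of 0.5 by construction.
rng = random.Random(42)
N0, s0 = 1.0, 1.0
Ns, ss = [], []
for _ in range(50000):
    shared = rng.gauss(0, 0.01)
    Ns.append(N0 + shared + rng.gauss(0, 0.01))
    ss.append(s0 + shared + rng.gauss(0, 0.01))

rho = corr(Ns, ss)
var_direct = var([n * s for n, s in zip(Ns, ss)])   # reaction rate R = N*s
var_prop = (s0 ** 2 * var(Ns) + N0 ** 2 * var(ss)
            + 2 * N0 * s0 * rho * math.sqrt(var(Ns) * var(ss)))
```

The directly sampled variance of the product agrees with the first-order propagation formula only when the correlation term is included, which is the point the abstract makes.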
Performance of quantum Monte Carlo for calculating molecular bond lengths
Energy Technology Data Exchange (ETDEWEB)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au [CSIRO Virtual Nanoscience Laboratory, 343 Royal Parade, Parkville, Victoria 3052 (Australia)
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
Directory of Open Access Journals (Sweden)
A.C.P. de A. Primavesi
1994-04-01
Full Text Available In a trial conducted on a dystrophic Red-Yellow Latosol at EMBRAPA-CPPSE in São Carlos (22°01'S, 47°53'W, altitude 856 m, mean annual rainfall 1502 mm), the bromatological composition of leaves, stems smaller than 6 mm in diameter, and pods of leucaena genotypes was determined. The genotypes evaluated were: L. leucocephala cv. Texas 1074 (T1), L. leucocephala 29 A9 (T2), L. leucocephala 11 x L. diversifolia 25 (T3), L. leucocephala 11 x L. diversifolia 26 (T4), L. leucocephala 24-19/2-39 x L. diversifolia 26 (T5), and L. leucocephala cv. Cunningham (control). The genotypes evaluated showed no differences in the bromatological determinations made on leaves and thin stems. Genotype T3 registered the highest crude protein (28.06%) and phosphorus (0.29%) contents, the highest CP/NDF ratio, and the lowest NDF content for pods. The genotypes showed the following mean contents, in percent, for the bromatological composition of leaves, pods, and thin stems, respectively: crude protein (18.57; 21.68; 6.41), neutral detergent fiber (29.09; 41.58; 71.01), phosphorus (0.12; 0.22; 0.06), calcium (1.39; 0.36; 0.49), magnesium (0.51; 0.28; 0.24), tannin (1.32; 1.15; 0.28), and in vitro digestibility (58.39; 61.22; 33.61). Protein and phosphorus contents decreased across plant parts in the order pods > leaves > thin stems; calcium: leaves > thin stems > pods; and magnesium: leaves > pods > thin stems.
International Nuclear Information System (INIS)
Cai, Li
2014-01-01
calculation solver SNATCH in the PARIS code platform. The latter uses transport theory, which is indispensable for the analysis of new-generation fast reactors. The principal conclusions are as follows: a Monte Carlo assembly calculation code is an attractive way to validate deterministic codes such as ECCO or APOLLO3 and to produce multi-group constants for deterministic or Monte Carlo multi-group calculation codes, since it avoids the difficulties of the self-shielding calculation and the limited-order expansion of anisotropy parameters, and it treats exact 3D geometries. The results obtained so far with the multi-group constants calculated by the TRIPOLI-4 code are comparable with those produced from ECCO, but did not show remarkable advantages. (author) [fr
International Nuclear Information System (INIS)
Moore, J.G.
1974-01-01
The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
Monte Carlo lattice program KIM
International Nuclear Information System (INIS)
Cupini, E.; De Matteis, A.; Simonini, R.
1980-01-01
The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well-known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach that greatly improves calculation speed.
Monte Carlo simulation of experiments
International Nuclear Information System (INIS)
Opat, G.I.
1977-07-01
An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ⁺ → e⁺ ν_e ν̄ γ and π⁺ → e⁺ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol...
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Monte Carlo simulation in nuclear medicine
International Nuclear Information System (INIS)
Morel, Ch.
2007-01-01
The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously, and Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process in some cases. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
Evaluation of equivalent doses in ¹⁸F PET/CT using the Monte Carlo method with the MCNPX code
International Nuclear Information System (INIS)
Belinato, Walmir; Santos, William Souza; Perini, Ana Paula; Neves, Lucio Pereira; Souza, Divanizia N.
2017-01-01
The present work used the Monte Carlo method, specifically the Monte Carlo N-Particle code MCNPX, to simulate the interaction of radiation (photons and particles such as positrons and electrons) with virtual adult anthropomorphic phantoms in PET/CT scans, and to determine absorbed and equivalent doses in adult male and female patients.
Evaluation of a special pencil ionization chamber by the Monte Carlo method
International Nuclear Information System (INIS)
Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P.
2015-01-01
A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)
UDOANYA RAYMOND MANUEL; ANIEKAN OFFIONG
2014-01-01
This paper presents the importance of applying queuing theory to Automated Teller Machine (ATM) service using Monte Carlo simulation, in order to determine, control and manage the level of queuing congestion found at ATM centres in Nigeria. It also contains an empirical analysis of the queuing-system data collected at ATMs located on bank premises over a period of three (3) months. Monte Carlo simulation is applied to th...
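A minimal Monte Carlo queueing sketch in the spirit of the study: a single-server M/M/1 queue (one ATM) with exponential inter-arrival and service times. The arrival and service rates below are arbitrary illustrative values, not the paper's measured data:

```python
import random

def simulate_mm1(lam, mu, n_customers, seed=0):
    """Monte Carlo simulation of an M/M/1 queue (e.g., one ATM).

    lam: arrival rate, mu: service rate. Returns the average time a
    customer waits in the queue before service begins.
    """
    rng = random.Random(seed)
    t_arrival = 0.0       # arrival time of the current customer
    server_free_at = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)
        start = max(t_arrival, server_free_at)   # wait if server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(mu)
    return total_wait / n_customers

# Utilization rho = lam/mu = 0.5; queueing theory gives Wq = rho/(mu - lam) = 1.0,
# so the simulation can be checked against the analytic value.
wq = simulate_mm1(lam=0.5, mu=1.0, n_customers=200000)
```

Comparing the simulated mean wait against the closed-form M/M/1 result is the standard sanity check before feeding in empirical arrival and service data.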
Monte Carlo Codes Invited Session
International Nuclear Information System (INIS)
Trama, J.C.; Malvagi, F.; Brown, F.
2013-01-01
This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay
Advanced computers and Monte Carlo
International Nuclear Information System (INIS)
Jordan, T.L.
1979-01-01
High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters are not as well estimated as the layers' electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
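A generic adaptive MCMC sketch (not the EMI inversion itself): random-walk Metropolis whose proposal scale is tuned on the fly by a Robbins-Monro rule toward a target acceptance rate, with a standard normal standing in for the posterior:

```python
import random
import math

def adaptive_metropolis(logpost, x0, steps, target_accept=0.44, seed=0):
    """Random-walk Metropolis with Robbins-Monro adaptation of the
    proposal scale toward a target acceptance rate."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    log_scale = 0.0
    samples, accepts = [], 0
    for i in range(1, steps + 1):
        y = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        lq = logpost(y)
        accepted = math.log(rng.random()) < lq - lp
        if accepted:
            x, lp, accepts = y, lq, accepts + 1
        # Diminishing adaptation (gain -> 0) keeps the chain asymptotically valid.
        log_scale += i ** -0.6 * ((1.0 if accepted else 0.0) - target_accept)
        samples.append(x)
    return samples, accepts / steps

# Stand-in posterior: standard normal, so mean ~ 0 and E[x^2] ~ 1.
samples, rate = adaptive_metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
mean = sum(samples) / len(samples)
msq = sum(s * s for s in samples) / len(samples)
```

The 0.44 target is the usual rule of thumb for one-dimensional random-walk proposals; real inversions adapt a full proposal covariance the same way.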
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
Monte Carlo approaches to light nuclei
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs
Monte Carlo approaches to light nuclei
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
On-the-fly doppler broadening for Monte Carlo codes
International Nuclear Information System (INIS)
Yesilyurt, G.; Martin, W. R.; Brown, F. B.
2009-01-01
A methodology to allow on-the-fly Doppler broadening of neutron cross sections for use in Monte Carlo codes has been developed. The Monte Carlo code only needs to store 0 K cross sections for each isotope, and the method will broaden the 0 K cross sections for any isotope in the library to any temperature in the range 77 K-3200 K. The methodology is based on a combination of Taylor series expansions and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U₃O₈ resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by a regression over the energy and temperature range of interest. Since the resonance parameters are a function of the neutron energy and target nuclide, the ψ and χ functions in the Adler-Adler multi-level resonance model can be represented by series expansions in temperature only, allowing the least number of terms to approximate the temperature-dependent cross sections within a given accuracy. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent over the entire temperature range (77 K-3200 K) and energy range. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost, which was found to be less than 1%. (authors)
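The regression idea can be illustrated with a toy fit: tabulate a synthetic "cross section" over 77-3200 K, fit coefficients of a small temperature basis by least squares, and then evaluate at any temperature on the fly. The basis and the 1/√T shape below are illustrative stand-ins, not the paper's ψ/χ expansions:

```python
import math

def fit_basis(Ts, sigmas, basis):
    """Least-squares coefficients for the given basis functions of T,
    via normal equations solved with tiny Gaussian elimination."""
    k = len(basis)
    A = [[sum(b1(T) * b2(T) for T in Ts) for b2 in basis] for b1 in basis]
    rhs = [sum(b(T) * s for T, s in zip(Ts, sigmas)) for b in basis]
    for c in range(k):                       # forward elimination w/ pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for cc in range(c, k):
                A[r][cc] -= f * A[c][cc]
            rhs[r] -= f * rhs[c]
    coef = [0.0] * k                         # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, k))) / A[r][r]
    return coef

# Toy "resonance peak" cross section whose height falls off like 1/sqrt(T).
sigma = lambda T: 50.0 + 800.0 / math.sqrt(T)
basis = [lambda T: 1.0, lambda T: T ** -0.5, lambda T: 1.0 / T]
Ts = [77.0 + 25.0 * i for i in range(126)]       # 77 K .. ~3200 K
coef = fit_basis(Ts, [sigma(T) for T in Ts], basis)
sigma_fit = lambda T: sum(c * b(T) for c, b in zip(coef, basis))
err = max(abs(sigma_fit(T) - sigma(T)) / sigma(T)
          for T in [100.0, 600.0, 2500.0])
```

Once the coefficients are stored, evaluating the broadened cross section at an arbitrary temperature costs only a few multiplications, which is the source of the small runtime overhead reported.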
Monte Carlo simulations of plutonium gamma-ray spectra
International Nuclear Information System (INIS)
Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.
1993-01-01
Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function for a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and Gaussian distributions for the gamma rays. The MGA code determined the Pu isotopes and specific power of this calculated spectrum and compared it to a similar analysis on a measured spectrum.
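The spectrum-construction step (analytic peak shapes on a continuum, standing in for transport results folded with a detector response) can be sketched as follows. Peak positions, widths, and areas are toy values, not a real Pu spectrum, and only Gaussian shapes are used for brevity:

```python
import math

def gaussian_peak(channels, center, fwhm, area):
    """Gaussian peak sampled on integer channels (counts per channel)."""
    sigma = fwhm / 2.3548  # FWHM = 2*sqrt(2*ln 2) * sigma
    return [area / (sigma * math.sqrt(2 * math.pi))
            * math.exp(-0.5 * ((c - center) / sigma) ** 2) for c in channels]

def synth_spectrum(peaks, n_channels, background=5.0):
    """Sum of Gaussian peaks on a flat continuum: a simplified stand-in for
    folding transport output with an HPGe response function."""
    spec = [background] * n_channels
    for center, fwhm, area in peaks:
        for c, y in enumerate(gaussian_peak(range(n_channels),
                                            center, fwhm, area)):
            spec[c] += y
    return spec

# Two toy peaks: (channel, FWHM in channels, total peak area in counts).
spec = synth_spectrum([(100, 3.0, 1000.0), (240, 3.5, 400.0)], 512)
```

A real simulation would add Lorentzian (or Voigt) shapes for the x-ray lines, a non-flat continuum, and counting statistics before handing the spectrum to an analysis code such as MGA.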
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations in which the polarization's fixed point is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It offers a clear overview of variational wave functions and a detailed presentation of stochastic samplings, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and scatter them in pitch angle. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. It has further been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which lies beyond conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors using the Monte Carlo method. (author)
Frontiers of quantum Monte Carlo workshop: preface
International Nuclear Information System (INIS)
Gubernatis, J.E.
1985-01-01
The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics
Monte Carlo code development in Los Alamos
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.
1974-01-01
The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)
"Shaakal" Carlos kaebas arreteerija kohtusse / Margo Pajuste
Pajuste, Margo
2006-01-01
Also published in: Postimees : na russkom jazõke, 3 July, p. 11. The imprisoned notorious terrorist Carlos "the Jackal" has taken his one-time arrester to court. He accuses the former head of the French intelligence service of kidnapping.
Experience with the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)
2007-06-15
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.
Experience with the Monte Carlo Method
International Nuclear Information System (INIS)
Hussein, E.M.A.
2007-01-01
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed
Monte Carlo Transport for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul
2014-01-01
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error
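The multilevel telescoping sum on which CMLMC builds can be illustrated with a plain (non-continuation) MLMC estimator for E[X_T] under a geometric Brownian motion, where the exact answer is exp(mu*T). The model, parameter values, and Euler-Maruyama discretisation below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(T, n_steps, n_paths, dW):
    # Euler-Maruyama for dX = mu*X dt + sigma*X dW, X(0) = 1 (toy model)
    mu, sigma = 0.05, 0.2
    dt = T / n_steps
    X = np.ones(n_paths)
    for k in range(n_steps):
        X += mu * X * dt + sigma * X * dW[:, k]
    return X

def mlmc_estimate(L, N, T=1.0, M=2):
    # MLMC telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with fine and coarse paths coupled through shared Brownian increments
    est = 0.0
    for l in range(L + 1):
        nf = M ** l                      # fine time steps at level l
        dt = T / nf
        dW = rng.normal(0.0, np.sqrt(dt), (N, nf))
        Pf = euler_gbm(T, nf, N, dW)
        if l == 0:
            est += Pf.mean()
        else:
            # coarse path re-uses the same increments, summed pairwise
            dWc = dW.reshape(N, nf // M, M).sum(axis=2)
            Pc = euler_gbm(T, nf // M, N, dWc)
            est += (Pf - Pc).mean()
    return est

print(mlmc_estimate(L=4, N=20000))  # close to exp(0.05) ≈ 1.051
```

Only the coarsest level needs many samples of the full payoff; correction levels have small variance, which is the source of the MLMC cost savings that CMLMC then tunes over a sequence of tolerances.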
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Univ. of Arizona, Tucson, AZ (United States)
1990-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_{N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_{N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_{N} is well suited for by itself. The fully coupled Monte Carlo/S_{N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_{N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_{N} region. The Monte Carlo and S_{N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_{N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_{N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_{N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss plans to develop the Data Base further within the CERN LCG framework.
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows results from two dose calculation studies and from two other applications: optimisation of the neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube serving several purposes. The computing time required by Monte Carlo calculations, the main obstacle to their intensive use, is being overcome by faster and cheaper computers. (author)
Moignier, C; Huet, C; Makovicka, L
2014-07-01
In a previous work, output ratio (ORdet) measurements were performed for the 800 MU/min CyberKnife(®) at the Oscar Lambret Center (COL, France) using several commercially available detectors as well as two passive dosimeters (EBT2 radiochromic film and micro-LiF TLD-700). The primary aim of the present work was to determine by Monte Carlo calculations the output factor in water (OFMC,w) and the [Formula: see text] correction factors. The secondary aim was to study the detector response in small beams using Monte Carlo simulation. The LINAC head of the CyberKnife(®) was modeled using the PENELOPE Monte Carlo code system. The primary electron beam was modeled as a monoenergetic source with a radial Gaussian distribution. The model was adjusted by comparing calculated and measured lateral profiles and tissue-phantom ratios obtained with the largest field. In addition, the PTW 60016 and 60017 diodes, the PTW 60003 diamond, and the micro-LiF were modeled. Output ratios with modeled detectors (ORMC,det) and OFMC,w were calculated and compared to measurements, in order to validate the model for the smallest fields and to calculate the [Formula: see text] correction factors, respectively. To study the influence of detector characteristics on their response in small beams, first the impact of the atomic composition and mass density of the silicon, LiF, and diamond materials was investigated; second, the material, volume-averaging, and coating effects of the detecting material on the detector responses were estimated. Finally, the influence of the size of the silicon chip on the diode response was investigated. Comparing measurement ratios (uncorrected output factors) to OFMC,w, the PTW 60016, 60017 and Sun Nuclear EDGE diodes systematically over-responded (about +6% for the 5 mm field), whereas the PTW 31014 PinPoint chamber systematically under-responded (about -12% for the 5 mm field). ORdet measured with the SFD diode and PTW 60003 diamond
Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization
Shao, Jing
2015-10-27
Based on Monte Carlo simulation technology, we propose a hybrid routine which combines the reaction mechanism with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparison with previous experimental and simulation studies, we show the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinetic quantities, such as instantaneous monomer conversion, molecular weight, and polydispersity, are readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without invoking the "steady-state" hypothesis. We explore the mechanisms behind the variations in polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is a versatile tool for studying polymerization kinetics in batch and semi-batch processes.
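A minimal kinetic Monte Carlo sketch in the same spirit (a Gillespie-type stochastic simulation with initiation, propagation, and termination, using invented rate constants and species counts) shows how monomer conversion emerges directly from sampled reaction events, with no steady-state hypothesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_frp(I0=50, M0=5000, kd=1e-2, kp=1e-2, kt=1e-1, t_end=50.0):
    # Stochastic simulation (Gillespie SSA) of batch free-radical
    # polymerization: I initiator, R radicals, M monomer (toy rate constants)
    I, R, M, t = I0, 0, M0, 0.0
    while t < t_end:
        a = np.array([kd * I,                   # initiation:  I -> 2R
                      kp * R * M,               # propagation: R + M -> R
                      0.5 * kt * R * (R - 1)])  # termination: R + R -> dead
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)          # time to next reaction
        r = rng.choice(3, p=a / a0)             # which reaction fires
        if r == 0:
            I -= 1; R += 2
        elif r == 1:
            M -= 1
        else:
            R -= 2
    return 1.0 - M / M0                         # monomer conversion

print(f"conversion = {gillespie_frp():.2f}")
```

Tracking per-chain lengths instead of lumped counts would additionally give the molecular weight distribution and polydispersity mentioned in the abstract.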
Development and application of the automated Monte Carlo biasing procedure in SAS4
International Nuclear Information System (INIS)
Tang, J.S.; Broadhead, B.L.
1993-01-01
An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. The automated procedure has been used extensively in the investigation of both computational and experimental benchmarks for the NEACRP working group on shielding assessment of transportation packages. The results of these studies indicate that with the automated biasing procedure, Monte Carlo shielding calculations of spent fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost. The systematic biasing approach described in this paper can also be applied to other similar shielding problems
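The payoff of biasing a Monte Carlo calculation with importance information can be seen in a one-dimensional caricature. The sketch below is not SAS4 or MORSE-SGC/S; the purely absorbing slab, cross sections, and stretched-exponential path-length kernel are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(10)

# Transmission through a purely absorbing slab of optical thickness 10;
# the exact answer is exp(-10) ≈ 4.54e-5, far too rare for analog sampling.
sigma, d, n = 1.0, 10.0, 100_000

# analog sampling: flight length ~ Exp(sigma); score 1 on crossing the slab
analog = (rng.exponential(1.0 / sigma, n) > d).mean()

# biased sampling: stretch the flight-length kernel (sigma_b < sigma) and
# carry the likelihood-ratio weight f(x)/g(x), as a biasing scheme would
sigma_b = 0.2
x = rng.exponential(1.0 / sigma_b, n)
w = (sigma * np.exp(-sigma * x)) / (sigma_b * np.exp(-sigma_b * x))
biased = np.where(x > d, w, 0.0).mean()

print(f"analog {analog:.2e}  biased {biased:.2e}  exact {np.exp(-d):.2e}")
```

The analog run scores a handful of histories at best, while the biased run scores on a sizeable fraction of histories with small weights; adjoint fluxes play the role of telling an automated procedure how strongly to stretch, and in which regions.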
Monte carlo sampling of fission multiplicity.
Energy Technology Data Exchange (ETDEWEB)
Hendricks, J. S. (John S.)
2004-01-01
Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the error function, which has no elementary closed form. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
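The offset-style correction can be sketched as follows: solve for a shifted Gaussian centre whose zero-truncated mean recovers the desired average. The average of 2.7 and width of 1.1 are illustrative values, continuous multiplicities are sampled rather than integers, and the bisection stands in for whatever fast evaluation a production code would use:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_mean(m, s):
    # mean of N(m, s) conditioned on being >= 0 (the rejection-sampled mean)
    return m + s * phi(m / s) / Phi(m / s)

def corrected_center(nubar, s, tol=1e-10):
    # bisect for the centre m whose zero-truncated mean equals nubar;
    # truncated_mean is increasing in m, so bisection is safe
    lo, hi = nubar - 5 * s, nubar
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if truncated_mean(mid, s) > nubar:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

nubar, s = 2.7, 1.1
naive = rng.normal(nubar, s, 1_000_000)
naive = naive[naive >= 0]            # plain rejection biases the mean high
m = corrected_center(nubar, s)
fixed = rng.normal(m, s, 1_000_000)
fixed = fixed[fixed >= 0]            # rejection now restores the target mean
print(f"naive mean = {naive.mean():.4f}, corrected mean = {fixed.mean():.4f}")
```

The naive rejection overshoots 2.7 by roughly one percent at these parameters, which is of the same order as the k_eff bias quoted in the abstract.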
Stock Price Simulation Using Bootstrap and Monte Carlo
Directory of Open Access Journals (Sweden)
Pažický Martin
2017-06-01
In this paper, bootstrap and Monte Carlo experiments for stock price simulation are assessed and compared. Since the future evolution of a stock price is extremely important to investors, we attempt to find the best method for determining the future price of BNP Paribas' stock. The aim of the paper is to define the value of European and Asian options on BNP Paribas' stock at the maturity date. Four different simulation methods are employed. The first is a bootstrap experiment with a homoscedastic error term, the second is a block bootstrap experiment with a heteroscedastic error term, the third is a Monte Carlo simulation with a heteroscedastic error term, and the last is a Monte Carlo simulation with a homoscedastic error term. In the last method it is necessary to model the volatility using an econometric GARCH model. The main purpose of the paper is to compare these methods and select the most reliable. Examining the difference between the classical European option and the exotic Asian option on the basis of the experimental results is a further aim of this paper.
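The last of the four methods (Monte Carlo with a homoscedastic error term) can be sketched with a risk-neutral geometric Brownian motion, pricing both payoffs from the same simulated paths. The strike, rate, and volatility below are invented placeholders, not the BNP Paribas calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_option_prices(S0=50.0, K=50.0, r=0.01, sigma=0.25, T=1.0,
                     n_steps=50, n_paths=100_000):
    # Risk-neutral GBM with constant (homoscedastic) volatility; the Asian
    # payoff averages the price over the monitoring dates of each path.
    dt = T / n_steps
    z = rng.normal(size=(n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    disc = np.exp(-r * T)
    european = disc * np.maximum(S[:, -1] - K, 0.0).mean()
    asian = disc * np.maximum(S.mean(axis=1) - K, 0.0).mean()  # arithmetic
    return european, asian

eu, asi = mc_option_prices()
print(f"European call ≈ {eu:.2f}, Asian call ≈ {asi:.2f}")
```

The averaging in the Asian payoff lowers the effective volatility, so the Asian price comes out well below the European one, which is the qualitative difference the paper's experiments examine.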
A New Approach to Monte Carlo Simulations in Statistical Physics
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd-order transitions and metastability near 1st-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
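A random walk in energy space that estimates the density of states directly, in the spirit of the Wang-Landau method cited above, can be sketched on a toy model with a known answer: N independent two-state sites, where E counts the excited sites and g(E) = C(N, E). The sweep length, flatness threshold, and final modification factor are arbitrary choices:

```python
import math
import random

random.seed(4)

N = 10                       # independent two-state sites; E = excited count
lng = [0.0] * (N + 1)        # running estimate of ln g(E)
hist = [0] * (N + 1)
state = [0] * N
E = 0
f = 1.0                      # ln of the modification factor

while f > 1e-6:
    for _ in range(20000):
        i = random.randrange(N)
        Enew = E + (1 - 2 * state[i])          # a flip changes E by ±1
        # accept with min(1, g(E)/g(Enew)) -> flat histogram in energy
        if math.log(random.random()) < lng[E] - lng[Enew]:
            state[i] ^= 1
            E = Enew
        lng[E] += f                            # update current energy bin
        hist[E] += 1
    # simplified flatness check: halve ln f once the histogram is flat
    if min(hist) > 0.8 * (sum(hist) / len(hist)):
        f *= 0.5
        hist = [0] * (N + 1)

# compare with the exact ln C(N, E), anchored at E = 0
exact = [math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
         for k in range(N + 1)]
est = [lng[k] - lng[0] for k in range(N + 1)]
print(max(abs(a - b) for a, b in zip(est, exact)))
```

Once ln g(E) is known, canonical averages at any temperature follow by reweighting with exp(-E/kT), which is the "all thermodynamic properties from one run" advantage described in the abstract.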
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one wastes computing time, because inactive cycle MC runs are used only to elicit the fundamental-mode FSD. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles to conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
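The paper's own criteria are not reproduced in the abstract; a widely used diagnostic in the same spirit tracks the Shannon entropy of the mesh-binned FSD across cycles and ends the inactive cycles once it stagnates. The windowed stationarity test and the toy drifting source below are hypothetical illustrations, not the authors' method:

```python
import numpy as np

def shannon_entropy(source_counts):
    # H = -sum p ln p over spatial mesh bins of the fission source
    p = np.asarray(source_counts, float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def entropy_converged(entropies, window=20, tol=1.0):
    # declare the inactive cycles finished once the mean entropy of the
    # last `window` cycles matches that of the previous window to within
    # `tol` standard deviations (a simple stationarity heuristic)
    if len(entropies) < 2 * window:
        return False
    recent = np.array(entropies[-window:])
    earlier = np.array(entropies[-2 * window:-window])
    return abs(recent.mean() - earlier.mean()) < tol * recent.std(ddof=1)

# toy FSD that drifts toward a stationary shape over the first 50 cycles
rng = np.random.default_rng(5)
H = []
for cycle in range(200):
    shape = np.linspace(2.0, 1.0, 50) if cycle < 50 else np.ones(50)
    H.append(shannon_entropy(rng.poisson(1000 * shape)))
print(entropy_converged(H))
```

An automatic criterion of this kind replaces the empiricism the abstract warns about: stopping too early biases the active cycles, stopping too late wastes them.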
Monte Carlo simulation experiments on box-type radon dosimeter
International Nuclear Information System (INIS)
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-01-01
Epidemiological studies show that inhalation of radon gas (²²²Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserving environments, and underground dwellers. It is, therefore, of paramount importance to measure ²²²Rn concentrations (Bq/m³) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of radon alphas emitted in the volume of the box-type dosimeter that result in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two self-developed Monte Carlo simulation techniques were employed, namely (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiency: the intrinsic efficiency (η_int) and the alpha hit efficiency (η_hit). η_int depends only on the dimensions of the dosimeter, while η_hit depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation results have been found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter. This paper explains how radon
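The ray-hitting (RAHI) idea can be sketched for a box dosimeter with the CR-39 covering one face: sample decay points uniformly in the volume, emit isotropically, and count alphas whose straight path reaches the detector face within the alpha range. The dimensions and range below are arbitrary and the geometry is simplified (whole bottom face sensitive), so this is an illustration rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(6)

def hit_efficiency(L=5.0, W=5.0, H=5.0, alpha_range=4.0, n=200_000):
    # decay points uniform in the L x W x H box; detector on the z = 0 face
    x = rng.uniform(0, L, n)
    y = rng.uniform(0, W, n)
    z = rng.uniform(0, H, n)
    # isotropic emission directions
    cos_t = rng.uniform(-1, 1, n)
    phi = rng.uniform(0, 2 * np.pi, n)
    sin_t = np.sqrt(1 - cos_t**2)
    ux, uy, uz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t
    down = uz < -1e-12                  # heading toward the detector plane
    t = z[down] / -uz[down]             # path length to the z = 0 plane
    xi = x[down] + ux[down] * t         # landing point on that plane; the
    yi = y[down] + uy[down] * t         # box is convex, so the ray stays inside
    hits = (t <= alpha_range) & (xi >= 0) & (xi <= L) & (yi >= 0) & (yi <= W)
    return hits.sum() / n

print(f"hit efficiency ≈ {hit_efficiency():.3f}")
```

Increasing the alpha range toward the box diagonal raises the hit efficiency toward its geometric ceiling, which mirrors the abstract's conclusion that a diagonal shorter than the alpha range gives a 100% hit efficiency.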
International Nuclear Information System (INIS)
Ljubenov, V.; Milosevic, M.
2003-01-01
A procedure for neutron flux determination in a neutron field with an arbitrary energy spectrum, based on the use of standard methods for measuring the activity of irradiated foils and on the application of the SCALE-4.4a code system for averaged cross-section calculation, is described in this paper. The proposed procedure makes it possible to include the energy spectrum of the neutron flux re-established at the location of the irradiated foils, as well as the resonance self-shielding effects in the foils. An example application of this procedure is given for the neutron flux determination inside a boron-loaded neutron filter placed in the centre of the heavy water critical assembly RB at the Vinca Institute (author)
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed, and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
Yours in Revolution: Retrofitting Carlos the Jackal
Directory of Open Access Journals (Sweden)
Samuel Thomas
2013-09-01
This paper explores the representation of 'Carlos the Jackal', the one-time 'World's Most Wanted Man' and 'International Face of Terror', primarily in cinema but also encompassing other forms of popular culture and aspects of Cold War policy-making. At the centre of the analysis is Olivier Assayas's Carlos (2010), a transnational, five and a half hour film (first screened as a TV mini-series) about the life and times of the infamous militant. Concentrating on the various ways in which Assayas expresses a critical preoccupation with names and faces through complex formal composition, the project examines the play of abstraction and embodiment that emerges from the narrativisation of terrorist violence. Lastly, it seeks to engage with the hidden implications of Carlos in terms of the intertwined trajectories of formal experimentation and revolutionary politics.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
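Sampling the important configurations preferentially can be made concrete on a one-dimensional integral, estimated both with uniform sampling and with an importance density; the tilted density p(x) proportional to e^(-x/2) is an arbitrary choice made for this example:

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimate I = ∫_0^1 e^{-x} dx = 1 - 1/e two ways.
n = 200_000
exact = 1.0 - np.exp(-1.0)

# plain Monte Carlo: uniform points on [0, 1]
u = rng.uniform(0.0, 1.0, n)
plain = np.exp(-u)

# importance sampling: p(x) ∝ e^{-x/2} on [0, 1], drawn by CDF inversion,
# concentrates points where the integrand is large
lam = 0.5
norm = (1.0 - np.exp(-lam)) / lam                       # normalizing constant
x = -np.log(1.0 - rng.uniform(0.0, 1.0, n) * (1.0 - np.exp(-lam))) / lam
weighted = np.exp(-x) * norm / np.exp(-lam * x)         # f(x) / p(x)

print(f"exact {exact:.5f}  plain {plain.mean():.5f} (sd {plain.std():.3f})  "
      f"IS {weighted.mean():.5f} (sd {weighted.std():.3f})")
```

Both estimators are unbiased, but the importance-sampled weights have roughly half the standard deviation of the plain samples here, so the same accuracy is reached with about a quarter of the points.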
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Reflections on early Monte Carlo calculations
International Nuclear Information System (INIS)
Spanier, J.
1992-01-01
Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables with those having identical means but lower variances
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependencies between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
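The gain from failure biasing can be sketched on a two-component repairable parallel system with invented rates: an analog walk rarely reaches the failed state, while a biased walk selects the rare failure branch with probability q and carries a likelihood-ratio weight (forced transitions, the paper's other technique, are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(8)

def analog_unreliability(lam=1e-3, mu=1e-1, T=100.0, n=100_000):
    # Two identical components in parallel; system fails when both are down.
    fails = 0
    for _ in range(n):
        t, up = 0.0, 2                       # number of working components
        while t < T:
            rate = up * lam + (2 - up) * mu
            t += rng.exponential(1.0 / rate)
            if t >= T:
                break
            if rng.random() < up * lam / rate:   # next event is a failure
                up -= 1
                if up == 0:
                    fails += 1
                    break
            else:                                # next event is a repair
                up += 1
    return fails / n

def biased_unreliability(lam=1e-3, mu=1e-1, T=100.0, n=50_000, q=0.5):
    # From the one-up state, pick the rare failure branch with probability q
    # instead of lam/(lam+mu), multiplying the history weight accordingly.
    total = 0.0
    for _ in range(n):
        t, up, w = 0.0, 2, 1.0
        while t < T:
            rate = up * lam + (2 - up) * mu
            t += rng.exponential(1.0 / rate)
            if t >= T:
                break
            if up == 2:                      # only a failure can occur here
                up = 1
                continue
            p_fail = lam / (lam + mu)
            if rng.random() < q:             # biased branch selection
                w *= p_fail / q
                total += w                   # both components down: score
                break
            else:
                w *= (1.0 - p_fail) / (1.0 - q)
                up = 2
    return total / n

a = analog_unreliability()
b = biased_unreliability()
print(f"analog ≈ {a:.4f}, biased ≈ {b:.4f}")
```

Both estimates agree, but almost every biased history contributes a (small) score, whereas only about one analog history in five hundred scores at all, which is the variance reduction the abstract quantifies.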
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
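The linearization step referred to above can be written down explicitly. For a single one-body operator with coupling strength lambda, the Hubbard-Stratonovich identity takes the following form (conventions for signs and normalizations vary between presentations; this is one common choice):

```latex
e^{\frac{\Delta\beta}{2}\,\lambda\,\hat{O}^{2}}
  = \sqrt{\frac{\Delta\beta\,|\lambda|}{2\pi}}
    \int_{-\infty}^{\infty} d\sigma\,
    e^{-\frac{\Delta\beta}{2}\,|\lambda|\,\sigma^{2}}\,
    e^{\Delta\beta\, s\,\lambda\,\sigma\,\hat{O}},
  \qquad
  s = \begin{cases} 1, & \lambda < 0,\\ i, & \lambda > 0,\end{cases}
```

so the two-body propagator over an imaginary-time slice becomes a Gaussian-weighted integral over one-body propagators in the auxiliary field sigma. The complex branch of s is one origin of the sign (phase) of the Monte Carlo weight function mentioned in item (3) of the abstract.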
SPQR: a Monte Carlo reactor kinetics code
International Nuclear Information System (INIS)
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations
A Monte Carlo study of radiation trapping effects
International Nuclear Information System (INIS)
Wang, J.B.; Williams, J.F.; Carter, C.J.
1997-01-01
A Monte Carlo simulation of radiative transfer in an atomic beam is carried out to investigate the effects of radiation trapping on electron-atom collision experiments. The collisionally excited atom is represented by a simple electric dipole, for which the emission intensity distribution is well known. The spatial distribution, frequency and free path of this and the sequential dipoles were determined by a computer random-number generator according to the probabilities given by quantum theory. By altering the atomic number density at the target site, the pressure dependence of the observed atomic lifetime, the angular intensity distribution and the polarisation of the radiation field are studied. 7 refs., 5 figs
Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
Procassini, R J; O'Brien, M J; Taylor, J M
2005-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations
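The two decisions described in this abstract, whether to rebalance and how to reassign processors to domains, can be sketched as follows. The imbalance threshold and the greedy proportional rule are illustrative choices for the sketch, not the algorithm of the paper:

```python
def should_balance(work, alloc, threshold=1.25):
    """Trigger rebalancing when the most loaded domain's particles-per-processor
    exceeds the ideal (perfectly balanced) load by the given factor."""
    ideal = sum(work) / sum(alloc)
    return max(w / a for w, a in zip(work, alloc)) > threshold * ideal

def balance(work, nprocs):
    """Greedy proportional assignment: every domain keeps at least one
    processor; each spare processor goes to whichever domain currently has
    the highest particle load per processor."""
    alloc = [1] * len(work)
    for _ in range(nprocs - len(work)):
        i = max(range(len(work)), key=lambda j: work[j] / alloc[j])
        alloc[i] += 1
    return alloc
```

For example, domains with particle counts [100, 300, 600] and 10 processors end up with the allocation [1, 3, 6], i.e. 100 particles per processor everywhere, and `should_balance` then reports that no further rebalancing is needed.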
Monte Carlo neutral density calculations for ELMO Bumpy Torus
International Nuclear Information System (INIS)
Davis, W.A.; Colchin, R.J.
1986-11-01
The steady-state nature of the ELMO Bumpy Torus (EBT) plasma implies that the neutral density at any point inside the plasma volume will determine the local particle confinement time. This paper describes a Monte Carlo calculation of three-dimensional atomic and molecular neutral density profiles in EBT. The calculation has been done using various models for neutral source points, for launching schemes, for plasma profiles, and for plasma densities and temperatures. Calculated results are compared with experimental observations - principally spectroscopic measurements - both for guidance in normalization and for overall consistency checks. Implications of the predicted neutral profiles for the fast-ion-decay measurement of neutral densities are also addressed
Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
O'Brien, M; Taylor, J; Procassini, R
2004-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations
Novel extrapolation method in the Monte Carlo shell model
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56 Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g 9/2 -shell calculation of 64 Ge.
Monte Carlo simulation of fully Markovian stochastic geometries
International Nuclear Information System (INIS)
Lepage, Thibaut; Delaby, Lucie; Malvagi, Fausto; Mazzolo, Alain
2010-01-01
The interest in resolving the equation of transport in stochastic media has continued to increase in recent years. For binary stochastic media it is often assumed that the geometry is Markovian, which is never the case in usual environments. In the present paper, based on rigorous mathematical theorems, we construct fully two-dimensional Markovian stochastic geometries and we study their main properties. In particular, we determine a percolation threshold p_c = 0.586 ± 0.0015 for such geometries. Finally, Monte Carlo simulations are performed through these geometries and the results compared to homogeneous geometries. (author)
International Nuclear Information System (INIS)
Morohashi, Yuko; Ishibashi, Junichi; Nishi, Hiroshi
2002-03-01
The criticality analysis of the MONJU initial critical core was conducted based on conventional methods developed by the JUPITER program. Effective cross sections were created, considering self-shielding effects, from the JAERI Fast Set (JFS-3-J3.2); group constants in 70 energy groups, which were processed from the Japanese Evaluated Nuclear Data Library (JENDL-3.2). These were used in the standard calculation method: a 3-Dimensional Hexagonal-Z whole core calculation by diffusion theory. This standard calculation, however, involves several approximations. The continuous neutron energy spectrum is divided into 70 discrete energy groups and continuous spatial coordinates are represented by assembly-wise spatial meshes. Original transport equations are solved by diffusion theory (isotropic scattering) approximation and fine structures in fuel assemblies, such as fuel pins or wrapper tubes, are processed into cell-wise homogeneous mixture. To improve the accuracy of the results, these approximations are compensated for by applying corresponding correction factors. Cell heterogeneity effects, among them, were evaluated to be 0.3-0.4% Δk/kk' by diffusion calculations based on the group constants, obtained by heterogeneous cell model calculations. This method, however, has the drawback that it assumes that there is no interdependency of the related approximations; energy grouping, diffusion approximation, etc. A study on cell heterogeneity effects has been conducted using the continuous energy Monte Carlo method to validate the adequacy of this non-interdependency assumption. As a result, cell heterogeneity effects slightly larger than those from conventional methods have been obtained: 0.54% Δk/kk' for the initial critical core, and 0.50% Δk/kk' for the initial full power core. Dependency on plutonium enrichment and fuel temperature has also been identified, which implies the dependency of the cell heterogeneity effects on the specific core conditions. Grouping
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and
Current and future applications of Monte Carlo
International Nuclear Information System (INIS)
Zaidi, H.
2003-01-01
Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model that is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate the technique really is, what it would take to apply it clinically, and how to make it widely available to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic
Simplified monte carlo simulation for Beijing spectrometer
International Nuclear Information System (INIS)
Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei
1986-01-01
The Monte Carlo method, based on the functionization of the performance of detectors and the transformation of values of kinematical variables into ''measured'' ones by means of smearing, has been used to program the Monte Carlo simulation of the performance of the Beijing Spectrometer (BES) in FORTRAN; the program is named BESMC. It can be used to investigate the multiplicity, the particle types, and the distribution of four-momenta of the final states of electron-positron collisions, as well as the response of the BES to these final states. Thus, it provides a means of examining whether the overall design of the BES is reasonable and of deciding the physical topics of the BES
Self-learning Monte Carlo (dynamical biasing)
International Nuclear Information System (INIS)
Matthes, W.
1981-01-01
In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)
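The self-learning loop described in points (a)-(c) can be sketched on a one-dimensional random walk with a rarely reached target. The importance table, its update rule, and all parameters below are illustrative assumptions; the likelihood-ratio weights keep the estimate unbiased even while the importance field is still being learned:

```python
import random

def self_learning_walk(n_hist, n_sites=10, p_right=0.4, seed=3):
    """Estimate the probability that a walk starting at site 1 reaches the
    target site n_sites before the absorbing site 0, stepping right with true
    probability p_right.  An importance table, updated after every history,
    steers later histories toward the target (points (a)-(c) of the abstract);
    likelihood-ratio weights keep the estimate unbiased."""
    rng = random.Random(seed)
    imp = [1.0] * (n_sites + 1)      # learned importance per site
    total = 0.0
    for _ in range(n_hist):
        x, w, visited = 1, 1.0, [1]
        while 0 < x < n_sites:
            # bias the step toward the endpoint with higher learned importance
            q = p_right * imp[x + 1] / (
                p_right * imp[x + 1] + (1 - p_right) * imp[x - 1])
            if rng.random() < q:
                w *= p_right / q
                x += 1
            else:
                w *= (1 - p_right) / (1 - q)
                x -= 1
            visited.append(x)
        hit = x == n_sites
        total += w if hit else 0.0
        # learning step: raise importance of sites on successful histories,
        # lower it on unsuccessful ones (bounded update for stability)
        for s in visited:
            imp[s] = 0.99 * imp[s] + 0.01 * (2.0 if hit else 0.5)
    return total / n_hist
```

For these parameters the exact answer from the gambler's-ruin formula is about 0.0088; the learned importance field gradually concentrates sampling on the target-reaching histories while the weights compensate for the distortion.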
Burnup calculations using Monte Carlo method
International Nuclear Information System (INIS)
Ghosh, Biplab; Degweker, S.B.
2009-01-01
In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method is also better suited for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burn-up code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
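The coupling between the transport and depletion codes described above follows a generic alternating pattern, which can be sketched as below. The single-nuclide chain, the mock fixed-power flux model, and every number are invented for the sketch; they are not McBurn's models or data:

```python
import math

def burnup_steps(n0, sigma_a, steps, dt, transport):
    """Sketch of the coupling loop of a Monte Carlo burnup code: each step, a
    transport solve (mocked below) yields the flux for the current composition,
    then the composition is depleted over the step.  For a single absorber the
    Bateman equation dN/dt = -sigma_a*phi*N has the closed-form solution used here."""
    n, history = n0, [n0]
    for _ in range(steps):
        phi = transport(n)                     # stands in for the MC transport solve
        n *= math.exp(-sigma_a * phi * dt)     # deplete with the step-constant flux
        history.append(n)
    return history

# Mock transport solve: flux rises as the absorber depletes (fixed-power operation).
history = burnup_steps(n0=1.0, sigma_a=1e-22, steps=3, dt=2.592e6,
                       transport=lambda n: 1e14 / n)
```

Real codes replace the lambda with a full Monte Carlo flux solution and the exponential with a matrix solution of the coupled Bateman equations over all nuclides, but the alternation of transport solve and depletion step is the same.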
Improvements for Monte Carlo burnup calculation
Energy Technology Data Exchange (ETDEWEB)
Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)
2015-07-01
Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical-search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For critical search of the control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). It can also be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n,2n) and (n,3n). This article discusses the Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
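The first (generation-ratio) definition can be sketched in the simplest possible setting, an infinite homogeneous medium, where leakage vanishes and the two definitions coincide. All cross sections are invented for the sketch:

```python
import random

def k_inf(n, nu=2.43, sig_f=0.05, sig_c=0.03, seed=5):
    """Generation-ratio estimator of k-infinity in an infinite homogeneous
    medium: expected next-generation neutrons per source neutron.  Every
    history ends in exactly one absorption, which is a fission with
    probability sig_f/(sig_f + sig_c); there is no leakage."""
    rng = random.Random(seed)
    produced = 0.0
    for _ in range(n):
        if rng.random() < sig_f / (sig_f + sig_c):
            produced += nu        # expected-value tally of fission neutrons
    return produced / n
```

The estimate converges to nu*sig_f/(sig_f + sig_c). Under the second definition, a finite system would instead accumulate separate production, absorption, and leakage tallies and report production/(absorption + leakage), with the multiplicities from (n,2n)-type reactions excluded from the absorption tally, as the abstract notes.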
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.
1995-01-01
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
Monte Carlo dose distributions for radiosurgery
International Nuclear Information System (INIS)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.
2001-01-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important for small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Monte Carlo characterisation of the Dose Magnifying Glass for proton therapy quality assurance
Merchant, A. H.; Guatelli, S.; Petesecca, M.; Jackson, M.; Rozenfeld, A. B.
2017-01-01
A Geant4 Monte Carlo simulation study was carried out to characterise a novel silicon strip detector, the Dose Magnifying Glass (DMG), for use in proton therapy quality assurance. We investigated the possibility of using the DMG to determine the energy of the incident proton beam. The advantages of the DMG are quick response, easy operation and high spatial resolution. In this work we showed theoretically that the DMG can be used for QA in the determination of the energy of the incident proton beam, for ocular and prostate cancer therapy. The study was performed by means of Monte Carlo simulations. Experimental measurements are currently under way to confirm the results of this simulation study.
Monte Carlo study of quantum number retention in hadron jets
International Nuclear Information System (INIS)
Hayward, S.K.; Weiss, N.
1992-01-01
We present a Monte Carlo study in which we used weighted quantum numbers of hadron jets in an attempt to identify the parent parton of these jets. Two-jet events produced by e+e- annihilation were studied using the Lund Monte Carlo program. It was found that the sign of the charge of the leading parton could be determined in a majority of events and that the quark jet could be distinguished from the antiquark jet in a majority of events containing baryons. A careful selection of a subset of the events, by making cuts on the values of these weighted quantum numbers, significantly increased the accuracy with which both the charge and the baryon number of the leading parton could be determined. Some success was also achieved in differentiating light-quark from heavy-quark events and in determining the leading quark flavor in the light-quark events. Unfortunately, quantum number retention does not differentiate gluon jets from quark jets. The consequences of this for three-jet events and for jet identification in other reactions are discussed
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Specialized Monte Carlo codes versus general-purpose Monte Carlo codes
International Nuclear Information System (INIS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi
2002-01-01
The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Juan Carlos D'Olivo: A portrait
Aguilar-Arévalo, Alexis A.
2013-06-01
This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.
The Monte Carlo applied for calculation dose
International Nuclear Information System (INIS)
Peixoto, J.E.
1988-01-01
The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of each photon is traced by simulating successive interactions between the photon and the material that makes up the human-body simulator. The energy deposited per photon in each interaction, in each organ or tissue of the simulator, is also calculated. (C.G.C.) [pt
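The history-tracing and energy-deposition tally described here can be sketched in one dimension. The attenuation coefficient, the depth binning, and the crude interaction model (half the photon's energy absorbed locally, the rest carried forward) are all illustrative assumptions, not the physics of the paper:

```python
import random

def dose_profile(n, mu=0.2, depth=20.0, nbins=10, e0=1.0, seed=11):
    """Toy photon-history dose tally: trace successive interaction points
    along one axis through a slab phantom and score the energy deposited in
    each depth bin."""
    rng = random.Random(seed)
    dose = [0.0] * nbins
    bin_width = depth / nbins
    for _ in range(n):
        x, e = 0.0, e0
        while e > 1e-3:
            x += rng.expovariate(mu)          # distance to the next interaction
            if x >= depth:
                break                         # photon escapes the phantom
            deposit = 0.5 * e                 # crude local-deposition model
            dose[int(x / bin_width)] += deposit
            e -= deposit
    return dose
```

A real dose code replaces the fixed deposition fraction with sampled photoelectric, Compton, and pair-production interactions and tallies per organ rather than per depth bin, but the per-interaction scoring loop has this shape.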
Monte Carlo code for neutron radiography
International Nuclear Information System (INIS)
Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej
2005-01-01
The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms
Monte Carlo method in neutron activation analysis
International Nuclear Information System (INIS)
Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.
2009-01-01
Neutron activation detectors are a useful technique for the neutron flux measurements in spallation experiments. The study of the usefulness and the accuracy of this method at similar experiments was performed with the help of Monte Carlo codes MCNPX and FLUKA
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Computer system for Monte Carlo experimentation
International Nuclear Information System (INIS)
Grier, D.A.
1986-01-01
A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language
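The factorial-design idea can be sketched in a few lines of a modern language: enumerate every combination of factor levels and run one Monte Carlo experiment per design point. The experiment, the factor names, and the levels below are illustrative, not taken from the system described:

```python
import itertools
import random
import statistics

def mc_mean(sample_size, scale, seed):
    """One Monte Carlo experiment: estimate E[X] for X ~ Exponential(mean=scale)."""
    rng = random.Random(seed)
    return statistics.fmean(rng.expovariate(1.0 / scale) for _ in range(sample_size))

# A full factorial design over the experiment's factors: every combination of
# factor levels becomes one design point, run with a fixed seed.
design = {"sample_size": [100, 1000], "scale": [1.0, 2.0]}
results = [
    {"sample_size": ns, "scale": sc, "estimate": mc_mean(ns, sc, seed=42)}
    for ns, sc in itertools.product(design["sample_size"], design["scale"])
]
```

The resulting table of design points and estimates is exactly the kind of object that would then be handed to a statistics package for analysis.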
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed-balance algorithms, starting from a conventional algorithm satisfying detailed balance.
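As background for the detailed-balance condition the paper relaxes, the standard Metropolis kernel can be checked numerically. The sketch below is illustrative only; the 3-state target and uniform proposal are invented for the example, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-state target distribution (illustrative values only).
pi = np.array([0.2, 0.3, 0.5])

def metropolis_kernel(pi):
    """Transition matrix of Metropolis with a uniform symmetric proposal."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()      # remaining probability: stay put
    return P

P = metropolis_kernel(pi)
# Detailed balance: the probability flow pi_i * P_ij is symmetric in (i, j).
flows = pi[:, None] * P
assert np.allclose(flows, flows.T)
```

Detailed balance implies stationarity (`pi @ P == pi`); the non-detailed-balance algorithms the paper studies keep stationarity while giving up the symmetry of the flows.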
Monte Carlo studies of ZEPLIN III
Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J
2002-01-01
A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.
Dynamic bounds coupled with Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)
2011-02-15
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with a relative error smaller than 5%. At higher accuracy levels this factor increases, though the effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
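The dynamic-bounds idea can be sketched for a monotonically increasing limit-state function: once a sample is known to fail, every sample it dominates componentwise must also fail, so the expensive model call can be skipped while the failure-probability estimate stays identical to plain MC. The limit state, threshold, and sample counts below are invented for illustration and are not the paper's flood-wall model:

```python
import random

def g(x):                         # stand-in for an expensive model evaluation
    return x[0] + x[1] - 1.2      # failure when g(x) < 0; g is increasing in x

random.seed(0)
failed, safe = [], []             # archives of evaluated samples
n, n_fail, n_skipped = 10000, 0, 0
for _ in range(n):
    x = (random.random(), random.random())
    if any(x[0] <= f[0] and x[1] <= f[1] for f in failed):
        n_fail += 1               # dominated by a known failure: certain failure
        n_skipped += 1
    elif any(x[0] >= s[0] and x[1] >= s[1] for s in safe):
        n_skipped += 1            # dominates a known safe point: certainly safe
    elif g(x) < 0:
        n_fail += 1
        failed.append(x)
    else:
        safe.append(x)

p_f = n_fail / n                  # same estimator as plain MC, far fewer g calls
```

Because the skipped classifications are exact consequences of monotonicity, the estimate is unchanged; only the number of model evaluations drops, which is the source of the cost-reduction factors quoted in the abstract.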
Dynamic bounds coupled with Monte Carlo simulations
Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.
2011-01-01
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper
Design and analysis of Monte Carlo experiments
Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.
2012-01-01
By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to
Some problems on Monte Carlo method development
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on some problems of Monte Carlo method development. The content covers deep-penetration problems, unbounded-estimate problems, limitations of Metropolis' method, the dependency problem in Metropolis' method, random-error interference problems and random equations, and intellectualization and vectorization problems of general software
Monte Carlo simulations in theoretical physics
International Nuclear Information System (INIS)
Billoire, A.
1991-01-01
After a presentation of the principle of the Monte Carlo method, the method is applied, first to critical-exponent calculations in the three-dimensional Ising model, and second to discrete quantum chromodynamics, with calculation times given as a function of computer power. 28 refs., 4 tabs
Monte Carlo method for random surfaces
International Nuclear Information System (INIS)
Berg, B.
1985-01-01
Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)
Monte Carlo simulation of the microcanonical ensemble
International Nuclear Information System (INIS)
Creutz, M.
1984-01-01
We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
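The constant-energy random walk described above is commonly implemented with a "demon" degree of freedom that carries the energy difference of each proposed move, so the accept/reject step itself needs no random numbers. A minimal sketch for a 1-D Ising chain (parameters invented for illustration) might look like:

```python
import random

# Microcanonical ("demon") Monte Carlo sketch on a 1-D Ising chain:
# total energy (system + demon) is exactly conserved at every step.
random.seed(1)
N = 200
spins = [1] * N                   # ground state; E_system = -N (periodic, J = 1)
demon = 40                        # initial demon energy, must stay >= 0

def delta_E(s, i):
    """Energy change of flipping spin i (periodic boundaries, J = 1)."""
    return 2 * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])

for _ in range(100000):
    i = random.randrange(N)       # randomness only selects the site
    dE = delta_E(spins, i)
    if demon - dE >= 0:           # demon pays (or absorbs) the energy cost
        spins[i] = -spins[i]
        demon -= dE
# The demon's energy histogram relaxes to exp(-E_d / kT), which is how a
# temperature can be read off in this ensemble.
```

Since every update is integer arithmetic plus one site draw, the method's speed and its indifference to random-number quality, both noted in the abstract, are plausible from the sketch.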
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
Gian-Carlo Rota and Combinatorial Math.
Kolata, Gina Bari
1979-01-01
Presents the first of a series of occasional articles about mathematics as seen through the eyes of its prominent scholars. In an interview with Gian-Carlo Rota of the Massachusetts Institute of Technology, he discusses how combinatorial mathematics began as a field and its future. (HM)
Coded aperture optimization using Monte Carlo simulations
International Nuclear Information System (INIS)
Martineau, A.; Rocchisani, J.M.; Moretti, J.L.
2010-01-01
Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
Monte Carlo studies of uranium calorimetry
International Nuclear Information System (INIS)
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references
Monte Carlo models: Quo vadimus?
Energy Technology Data Exchange (ETDEWEB)
Wang, Xin-Nian
2001-01-01
Coherence, multiple scattering and the interplay between soft and hard processes are discussed. These physics phenomena are essential for understanding the nuclear dependences of rapidity density and p_T spectra in high-energy heavy-ion collisions. The RHIC data have shown the onset of hard processes and indications of high-p_T spectra suppression due to parton energy loss. Within the pQCD parton model, the combination of azimuthal anisotropy (v_2) and hadron spectra suppression at large p_T can help one to determine the initial gluon density in heavy-ion collisions at RHIC.
Monte Carlo models: Quo vadimus?
International Nuclear Information System (INIS)
Wang, Xin-Nian
2001-01-01
Coherence, multiple scattering and the interplay between soft and hard processes are discussed. These physics phenomena are essential for understanding the nuclear dependences of rapidity density and p_T spectra in high-energy heavy-ion collisions. The RHIC data have shown the onset of hard processes and indications of high-p_T spectra suppression due to parton energy loss. Within the pQCD parton model, the combination of azimuthal anisotropy (v_2) and hadron spectra suppression at large p_T can help one to determine the initial gluon density in heavy-ion collisions at RHIC.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
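For context, the conventional rank-1 Sherman-Morrison step that the proposed rank-k scheme delays can be sketched as follows; the matrix size and random trial row are illustrative, not taken from an actual QMC code:

```python
import numpy as np

# Rank-1 Sherman-Morrison bookkeeping for one single-particle move:
# replacing row k of the Slater matrix A by a new row u.
rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)
u = rng.standard_normal(n)            # proposed new row k (one-electron move)

# Determinant ratio of the row replacement, obtained in O(n):
R = u @ Ainv[:, k]                    # det(A') / det(A)

# Sherman-Morrison refresh of the inverse in O(n^2) instead of O(n^3):
# A' = A + e_k v^T with v = u - A[k]; the denominator simplifies to R.
v = u - A[k]
Ainv_new = Ainv - np.outer(Ainv[:, k], v @ Ainv) / R
```

The delayed scheme in the abstract accumulates k such accepted rows and applies them en bloc as a rank-k update, trading many rank-1 passes for one matrix-matrix operation with higher arithmetic intensity.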
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► Sampling method has the least restrictions on perturbation but computing resources. ► Analytical method is limited to small perturbation on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of a substantial impact of the administrative margin of subcriticality on economics and safety of nuclear fuel cycle operations, recently increasing interests in reducing the administrative margin of subcriticality make the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
Spatial distribution of reflected gamma rays by Monte Carlo simulation
International Nuclear Information System (INIS)
Jehouani, A.; Merzouki, A.; Boutadghart, F.; Ghassoun, J.
2007-01-01
In nuclear facilities, the reflection of gamma rays off the walls and metals constitutes a source of radiation of unknown origin. These reflected gamma rays must be estimated and determined. This study concerns reflected gamma rays on metal slabs. We evaluated the spatial distribution of the reflected gamma-ray spectra by using the Monte Carlo method. An appropriate estimator for the double differential albedo is used to determine the energy spectra and the angular distribution of gamma rays reflected by slabs of iron and aluminium. We took into account the principal interactions of gamma rays with matter: photoelectric absorption, coherent scattering (Rayleigh), incoherent scattering (Compton) and pair creation. The Klein-Nishina differential cross section was used to select the direction and energy of scattered photons after each Compton scattering. The obtained spectra show peaks at 0.511 MeV for higher source energies. The results are in good agreement with those obtained by the TRIPOLI code [J.C. Nimal et al., TRIPOLI02: Programme de Monte Carlo Polycinétique à Trois Dimensions, CEA Rapport, Commissariat à l'Energie Atomique].
Monte Carlo simulation of the HEGRA cosmic ray detector performance
Energy Technology Data Exchange (ETDEWEB)
Martinez, S. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Arqueros, F. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Fonseca, V. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Karle, A. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Lorenz, E. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Plaga, R. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Rozanska, M. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany)]|[Institute of Nuclear Physics, ul.Kawiory 26a, PL30-055 Cracow (Poland)
1995-04-21
Models of the scintillator and wide-angle air Cherenkov (AIROBICC) arrays of the HEGRA experiment are described here. Their response to extensive air showers generated by cosmic rays in the 10 to 1000 TeV range has been assessed using a detailed Monte Carlo simulation of air shower development and the associated Cherenkov emission. Protons, γ-rays, and oxygen and iron nuclei have been considered as primary particles. For both arrays, the angular resolution as determined from the Monte Carlo simulation is compared with experimental data. The shower size N_e can be reconstructed from the scintillator signals with an error ranging from 10% (N_e = 2×10^5) to 35% (N_e = 3×10^3). The energy threshold of AIROBICC is 14 TeV for primary gammas and 27 TeV for protons, and an angular resolution of 0.25° can be obtained. The measurement of the Cherenkov light at 90 m from the shower core provides an accurate determination of the primary energy E_0 as long as the nature of the primary particle is known. For gammas, an error in the energy prediction ranging from 8% (E_0 = 5×10^14 eV) to 15% (E_0 = 2×10^13 eV) is achieved. This detector is therefore a powerful tool for γ-ray astronomy. (orig.)
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits the vessel, ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; the fluorescence from the layers below does not contribute to the signal in this second situation. Therefore, the possibility of building a device to detect ZPP fluorescence in the retina looks very promising.
Pore-scale uncertainty quantification with multilevel Monte Carlo
Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul
2014-01-01
Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost
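A generic MLMC estimator illustrates the telescoping coarse/fine coupling; the toy Euler-discretized SDE below is our choice for the example, not the pore-scale flow problem of the abstract:

```python
import numpy as np

# Multilevel Monte Carlo sketch: level l approximates the quantity of
# interest with 2**l Euler steps; the telescoping sum of coarse/fine
# corrections gives the estimate at a fraction of the fine-level-only cost.
rng = np.random.default_rng(42)

def level_correction(l, n, T=1.0, s0=1.0, r=0.05, sig=0.2):
    """Mean of P_l - P_{l-1} over n paths, coupling levels by shared noise."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.standard_normal((n, nf)) * np.sqrt(dt)
    sf = np.full(n, s0)
    for i in range(nf):                       # fine path
        sf = sf * (1 + r * dt + sig * dW[:, i])
    pf = np.maximum(sf - 1.0, 0.0)
    if l == 0:
        return float(pf.mean())
    sc = np.full(n, s0)
    dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments from fine ones
    for i in range(nf // 2):                  # coarse path, same randomness
        sc = sc * (1 + r * 2 * dt + sig * dWc[:, i])
    pc = np.maximum(sc - 1.0, 0.0)
    return float((pf - pc).mean())

L = 5
N = [40000 // 2 ** l + 1000 for l in range(L + 1)]  # fewer samples per finer level
estimate = sum(level_correction(l, N[l]) for l in range(L + 1))
```

Because the coarse and fine paths share Brownian increments, the correction variance decays with level, which is what lets the finer (more expensive) levels get away with far fewer samples.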
Prospect on general software of Monte Carlo method
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on the prospect of Monte Carlo general software. The content consists of cluster sampling method, zero variance technique, self-improved method, and vectorized Monte Carlo method
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung; Liang, Faming
2009-01-01
in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method
Applications of Monte Carlo method in Medical Physics
International Nuclear Information System (INIS)
Diez Rios, A.; Labajos, M.
1989-01-01
The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed, including the evaluation of a simple monodimensional integral with a known answer by means of two different Monte Carlo approaches. The basic principles of simulating photon histories on a computer, reducing variance, and the current applications in Medical Physics are commented on. (Author)
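A sketch of two elementary approaches to a monodimensional integral with a known answer, as described above (the specific integrand is our choice for illustration):

```python
import random

# Estimate I = integral of x^2 over [0, 1], whose exact value is 1/3.
random.seed(0)
n = 200_000

# 1) Mean-value ("crude") Monte Carlo: I ≈ (b - a) * average of f(U)
mean_value = sum(random.random() ** 2 for _ in range(n)) / n

# 2) Hit-or-miss: fraction of uniform points (x, y) in the unit square
#    that fall under the curve y = x^2
hits = sum(1 for _ in range(n) if random.random() < random.random() ** 2)
hit_or_miss = hits / n

# Both converge to 1/3; the mean-value estimator has the smaller variance
# here (4/45 vs. 2/9 per sample), illustrating the variance-reduction remark.
```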
Nonlinear Spatial Inversion Without Monte Carlo Sampling
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only, an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image, which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable
'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.
2008-01-01
The techniques for data processing, combined with the development of faster and more powerful computers, make the Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational models of exposure, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report number 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 X-ray industrial unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO can be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)
Verification of Monte Carlo transport codes by activation experiments
Energy Technology Data Exchange (ETDEWEB)
Chetvertkova, Vera
2012-12-18
With the increasing energies and intensities of heavy-ion accelerator facilities, the problem of excessive activation of accelerator components caused by beam losses becomes more and more important. Numerical experiments using Monte Carlo transport codes are performed in order to assess the levels of activation. The heavy-ion versions of the codes were released approximately a decade ago, so verification is needed to be sure that they give reasonable results. The present work focuses on obtaining experimental data on the activation of targets by heavy-ion beams. Several experiments were performed at GSI Helmholtzzentrum fuer Schwerionenforschung. The interaction of nitrogen, argon and uranium beams with aluminum targets, as well as the interaction of nitrogen and argon beams with copper targets, was studied. After the irradiation of the targets by different ion beams from the SIS18 synchrotron at GSI, γ-spectroscopy analysis was done: the γ-spectra of the residual activity were measured, the radioactive nuclides were identified, and their amount and depth distribution were determined. The obtained experimental results were compared with the results of Monte Carlo simulations using FLUKA, MARS and SHIELD. The discrepancies and agreements between experiment and simulations are pointed out, and the origin of the discrepancies is discussed. The obtained results allow for a better verification of the Monte Carlo transport codes and also provide information for their further development. The necessity of activation studies for accelerator applications is discussed. The limits of applicability of the heavy-ion beam-loss criteria were studied using the FLUKA code. FLUKA simulations were done to determine the materials most preferable, from the radiation protection point of view, for use in accelerator components.
Monte Carlo computation in the applied research of nuclear technology
International Nuclear Information System (INIS)
Xu Shuyan; Liu Baojie; Li Qin
2007-01-01
This article briefly introduces Monte Carlo methods and their properties. It describes Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used software packages to implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)
International Nuclear Information System (INIS)
Greensite, J.
1984-03-01
It is likely that the quark confinement mechanism at large N should be understood purely in terms of high-order planar Feynman diagrams; in particular, the center of the gauge group can play no role whatever. The author considers the diagrammatic expansion of loop integrals in planar wrong-sign φ^4 theory. It is shown that the sum of all fishnet diagrams contributing to the loop can be expressed as the grand partition function of an unusual gas, whose dynamics can be simulated on a computer. The 'molecules' of this gas correspond to vertices of the position-space diagrams, the molecular interactions are determined by the propagators, and the coupling constant plays the role of a chemical potential. The most remarkable feature of this gas is the existence of a critical coupling g_c, where string formation takes place. As g → g_c the fishnet vertices tend to cluster around the minimal surface of the loop, thereby forming a string. The role of asymptotic freedom in bringing the coupling to the critical point, and the connection to the Polyakov string, are also discussed. In the Hamiltonian formulation, a very straightforward explanation of quark confinement is presented. (Auth.)
Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX
International Nuclear Information System (INIS)
Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim
2006-01-01
As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code
Monte Carlo study of the phase diagram for the two-dimensional Z(4) model
International Nuclear Information System (INIS)
Carneiro, G.M.; Pol, M.E.; Zagury, N.
1982-05-01
The phase diagram of the two-dimensional Z(4) model on a square lattice is determined using a Monte Carlo method. The results of this simulation confirm the general features of the phase diagram predicted theoretically for the ferromagnetic case, and show the existence of a new phase with perpendicular order. (Author) [pt
Kinetic Monte Carlo study of the sensitivity of OLED efficiency and lifetime to materials parameters
Coehoorn, R.; Eersel, van H.; Bobbert, P.A.; Janssen, R.A.J.
2015-01-01
The performance of organic light-emitting diodes (OLEDs) is determined by a complex interplay of the optoelectronic processes in the active layer stack. In order to enable simulation-assisted layer stack development, a three-dimensional kinetic Monte Carlo OLED simulation method which includes the
International Nuclear Information System (INIS)
Macdonald, J.L.; Cashwell, E.D.
1978-09-01
The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameters. The modified MCN code is used to reduce computer cost for several nontrivial example problems
Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble
International Nuclear Information System (INIS)
Zmushko, V.V.; Migdal, A.A.
1987-01-01
A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented
The Monte Carlo Quiz: Encouraging Punctual Completion and Deep Processing of Assigned Readings
Fernald, Peter S.
2004-01-01
The Monte Carlo Quiz (MCQ), a single-item quiz, is so named because chance, with the roll of a die, determines (a) whether the quiz is administered; (b) the specific article, chapter, or section of the assigned reading that the quiz covers; and (c) the particular question that makes up the quiz. The MCQ encourages both punctual completion and deep…
Monte Carlo calculation of received dose from ingestion and inhalation of natural uranium
International Nuclear Information System (INIS)
Trobok, M.; Zupunski, Lj.; Spasic-Jokic, V.; Gordanic, V.; Sovilj, P.
2009-01-01
For the purpose of this study, eighty samples were taken from the area of Bela Crkva and Vrsac. The activity of radionuclides in the soil was determined by gamma-ray spectrometry. The Monte Carlo method was used to calculate the effective dose received by the population resulting from the inhalation and ingestion of natural uranium. The estimated doses were compared with the legally prescribed levels. (author)
Tarim, Urkiye Akar; Ozmutlu, Emin N.; Yalcin, Sezai; Gundogdu, Ozcan; Bradley, D. A.; Gurler, Orhan
2017-11-01
A Monte Carlo method was developed to investigate radiation shielding properties of bismuth borate glass. The mass attenuation coefficients and half-value layer parameters were determined for different fractional amounts of Bi2O3 in the glass samples for the 356, 662, 1173 and 1332 keV photon energies. A comparison of the theoretical and experimental attenuation coefficients is presented.
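The half-value layer reported in the abstract above is tied to the attenuation coefficient by a standard closed form, HVL = ln 2 / μ. A minimal sketch of that relation follows; the numerical values are illustrative assumptions, not the paper's measured data for bismuth borate glass.

```python
import math

def half_value_layer_cm(mu_mass_cm2_per_g, density_g_per_cm3):
    """HVL = ln 2 / mu, where the linear attenuation coefficient mu (1/cm)
    is the mass attenuation coefficient times the material density."""
    mu_linear = mu_mass_cm2_per_g * density_g_per_cm3
    return math.log(2.0) / mu_linear

# Assumed (not measured) values for a dense glass sample:
hvl = half_value_layer_cm(0.08, 5.0)
```

Doubling the mass attenuation coefficient at fixed density halves the HVL, which is why adding Bi2O3 (raising both density and attenuation) thins the required shield.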
An algorithm of α-and γ-mode eigenvalue calculations by Monte Carlo method
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2003-01-01
A new algorithm for Monte Carlo calculation was developed to obtain α- and γ-mode eigenvalues. The α is a prompt neutron time decay constant measured in subcritical experiments, and the γ is a spatial decay constant measured in an exponential method for determining the subcriticality. This algorithm can be implemented into existing Monte Carlo eigenvalue calculation codes with minimum modifications. The algorithm was implemented into the MCNP code, and its performance in calculating both mode eigenvalues was verified by comparing the calculated eigenvalues with those obtained from fixed-source calculations. (author)
Monte Carlo modeling of the Fastscan whole body counter response
International Nuclear Information System (INIS)
Graham, H.R.; Waller, E.J.
2015-01-01
Monte Carlo N-Particle (MCNP) was used to make a model of the Fastscan for the purpose of calibration. Two models were made: one for the Pickering Nuclear Site and one for the Darlington Nuclear Site. Once these models were benchmarked and found to be in good agreement, simulations were run to study the effect differently sized phantoms had on the detected response; the shielding effect of torso fat was found to be non-negligible. Simulations into the nature of a source being positioned externally on the anterior or posterior of a person were also conducted to determine a ratio that could be used to determine whether a source is externally or internally placed. (author)
CORPORATE VALUATION USING TWO-DIMENSIONAL MONTE CARLO SIMULATION
Directory of Open Access Journals (Sweden)
Toth Reka
2010-12-01
In this paper, we present a corporate valuation model. The model combines several valuation methods in order to obtain more accurate results. To determine the corporate asset value we used the Gordon-like two-stage asset valuation model based on the calculation of the free cash flow to the firm. The free cash flow to the firm was then used to determine the corporate market value, which was calculated with the Black-Scholes option pricing model in the frame of the two-dimensional Monte Carlo simulation method. The combined model and the use of the two-dimensional simulation provide a better opportunity for corporate value estimation.
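The option-pricing ingredient of the valuation above can be illustrated with a plain one-dimensional Monte Carlo Black-Scholes estimator (the paper's two-dimensional scheme also simulates an uncertain input dimension; that layer is omitted here, and all parameter values below are invented for illustration).

```python
import math
import random

def bs_call_mc(s0, strike, r, sigma, T, n=200_000, seed=7):
    """Monte Carlo Black-Scholes call value: discounted average of
    max(S_T - K, 0), with S_T drawn from the lognormal terminal law."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    disc = math.exp(-r * T)
    acc = 0.0
    for _ in range(n):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        acc += max(s_t - strike, 0.0)
    return disc * acc / n

price = bs_call_mc(100.0, 100.0, 0.05, 0.2, 1.0)  # close to the ~10.45 analytic value
```

Replacing the fixed `sigma` and cash-flow inputs by draws from their own distributions is what turns this into the two-dimensional simulation the authors describe.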
Monte Carlo simulation experiments on box-type radon dosimeter
Energy Technology Data Exchange (ETDEWEB)
Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-11
Epidemiological studies show that inhalation of radon gas ({sup 222}Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure {sup 222}Rn concentrations (Bq/m{sup 3}) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on CR-39 is the latent track registration efficiency, which is ultimately required to evaluate the radon concentration and consequently the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter’s dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η{sub int}) and alpha hit efficiency (η{sub hit}). The η{sub int} depends only upon the dimensions of the dosimeter, while η{sub hit} depends both upon the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It was concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, a hit efficiency of 100% is achieved; nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation results were found helpful in understanding the intricate track registration mechanisms in the box-type dosimeter. This paper
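A toy re-implementation in the spirit of the ray-hitting (RAHI) idea helps make the geometry dependence concrete. This is not the authors' code: the box dimensions, the choice of the bottom face as the detector, and the straight-line alpha transport are all simplifying assumptions.

```python
import math
import random

def rahi_hit_efficiency(lx, ly, lz, alpha_range, n=100_000, seed=1):
    """Estimate the fraction of alphas, emitted uniformly and isotropically
    inside an lx * ly * lz box, that reach the detector face at z = 0
    within their range in air (straight-line transport assumed)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform emission point inside the box
        x, y, z = rng.uniform(0, lx), rng.uniform(0, ly), rng.uniform(0, lz)
        # isotropic direction: cos(theta) uniform on [-1, 1]
        mu = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if mu >= 0.0:            # moving away from the detector face
            continue
        t = -z / mu              # path length to the plane z = 0
        if t > alpha_range:      # alpha stops in air before reaching the face
            continue
        s = math.sqrt(1.0 - mu * mu)
        xf, yf = x + t * s * math.cos(phi), y + t * s * math.sin(phi)
        if 0.0 <= xf <= lx and 0.0 <= yf <= ly:
            hits += 1
    return hits / n
```

Consistent with the abstract's conclusion, once `alpha_range` exceeds the box diagonal the range cut never triggers and only the geometric (intrinsic) factor remains.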
A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport
International Nuclear Information System (INIS)
Serov, I.V.; John, T.M.; Hoogenboom, J.E.
1999-01-01
The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport
A new method to assess the statistical convergence of monte carlo solutions
International Nuclear Information System (INIS)
Forster, R.A.
1991-01-01
Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
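The diagnostic described above, inspecting the empirical distribution of per-history scores to see whether a few rare histories dominate the result, can be sketched with a toy scoring model. The surrogate score generator below is invented for illustration (most histories score zero, a few score large, mimicking deep-penetration tallies); it is not a transport calculation.

```python
import math
import random

def history_scores(n, seed=0):
    """Toy surrogate for per-history tally scores: 95% of histories score 0,
    the rest draw a heavy-ish tailed positive value."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0) ** 3 if rng.random() < 0.05 else 0.0
            for _ in range(n)]

def summary(scores):
    """Mean, estimated standard error, and the fraction of the result
    contributed by the top 1% of history scores."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    sem = math.sqrt(var / n)
    top = sorted(scores, reverse=True)[: max(1, n // 100)]
    top_frac = sum(top) / (mean * n)
    return mean, sem, top_frac
```

A `top_frac` near 1 signals that the empirical score PDF is still dominated by rare events and the central-limit-theorem confidence interval should not yet be trusted.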
Monte Carlo-based tail exponent estimator
Barunik, Jozef; Vacha, Lukas
2010-11-01
In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on samples with small length. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices. On the two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent.
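The Hill estimator at the center of the study above is short enough to state directly. As a stand-in Monte Carlo test bed, the sketch draws Pareto samples (whose tail exponent is exactly known) rather than α-stable ones, which would need an external library; the sample size and cutoff `k` are arbitrary choices.

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the tail exponent alpha from the k largest observations:
    alpha_hat = k / sum_{i<k} log(X_(i) / X_(k)), order statistics descending."""
    x = sorted(data, reverse=True)[: k + 1]
    return k / sum(math.log(x[i] / x[k]) for i in range(k))

# Pareto(alpha) has tail index exactly alpha, so it makes a clean check.
rng = random.Random(42)
alpha_true = 1.5
sample = [rng.paretovariate(alpha_true) for _ in range(100_000)]
alpha_hat = hill_estimator(sample, k=2000)
```

Rerunning this with different `k` shows the sensitivity to tail size that the paper's Monte Carlo-based estimator is designed to remove.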
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M
2007-01-01
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
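The telescoping structure behind MLMC, many cheap coarse samples plus few expensive fine corrections, can be sketched on a standard toy problem: estimating the expected terminal value of geometric Brownian motion with Euler discretization levels of halving step size. The problem choice and all parameters are illustrative assumptions, not the homogenization setting of the article.

```python
import math
import random

def coupled_level(rng, level, T=1.0, s0=1.0, mu=0.05, sigma=0.2):
    """One coupled sample (P_l, P_{l-1}): the coarse path reuses the fine
    Brownian increments summed in pairs, which keeps Var[P_l - P_{l-1}] small."""
    nf = 2 ** level
    dtf = T / nf
    dW = [math.sqrt(dtf) * rng.gauss(0.0, 1.0) for _ in range(nf)]
    sf = s0
    for w in dW:
        sf += sf * (mu * dtf + sigma * w)
    if level == 0:
        return sf, 0.0
    sc, dtc = s0, 2 * dtf
    for i in range(0, nf, 2):
        sc += sc * (mu * dtc + sigma * (dW[i] + dW[i + 1]))
    return sf, sc

def mlmc_estimate(max_level, n_per_level, seed=0):
    """E[P_L] via the telescoping sum E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = n_per_level[level]
        acc = sum(f - c for f, c in (coupled_level(rng, level) for _ in range(n)))
        total += acc / n
    return total
```

The geometric decay in samples per level (`[20000, 4000, 2000]` below) mirrors the article's point: most of the work goes into the cheapest level.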
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time
Monte Carlo simulations in skin radiotherapy
International Nuclear Information System (INIS)
Sarvari, A.; Jeraj, R.; Kron, T.
2000-01-01
The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used for skin radiotherapy. In the applicator a radioactive source is positioned close to the skin. Without a filter, the resultant dose distribution would be highly nonuniform. High uniformity is usually required, however. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. An 192Ir high-dose-rate photon source was used. All necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially nonuniform dose distributions became uniform to within a percent when applying the filter calculated by this procedure. (author)
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Directory of Open Access Journals (Sweden)
Luo Ronghua
2008-11-01
An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small number of samples can represent the desired density well enough to achieve precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.
Monte Carlo simulation of gas Cerenkov detectors
International Nuclear Information System (INIS)
Mack, J.M.; Jain, M.; Jordan, T.M.
1984-01-01
Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general geometry Monte Carlo coupled electron/photon transport code is discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier
EU Commissioner Carlos Moedas visits SESAME
CERN Bulletin
2015-01-01
The European Commissioner for research, science and innovation, Carlos Moedas, visited the SESAME laboratory in Jordan on Monday 13 April. When it begins operation in 2016, SESAME, a synchrotron light source, will be the Middle East’s first major international science centre, carrying out experiments ranging from the physical sciences to environmental science and archaeology. CERN Director-General Rolf Heuer (left) and European Commissioner Carlos Moedas with the model SESAME magnet. © European Union, 2015. Commissioner Moedas was accompanied by a European Commission delegation led by Robert-Jan Smits, Director-General of DG Research and Innovation, as well as Rolf Heuer, CERN Director-General, Jean-Pierre Koutchouk, coordinator of the CERN-EC Support for SESAME Magnets (CESSAMag) project and Princess Sumaya bint El Hassan of Jordan, a leading advocate of science in the region. They toured the SESAME facility together with SESAME Director, Khaled Tou...
Hypothesis testing of scientific Monte Carlo calculations
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner
Evaluation of tomographic-image based geometries with PENELOPE Monte Carlo
International Nuclear Information System (INIS)
Kakoi, A.A.Y.; Galina, A.C.; Nicolucci, P.
2009-01-01
The Monte Carlo method can be used to evaluate treatment planning systems or for the determination of dose distributions in radiotherapy planning due to its accuracy and precision. In Monte Carlo simulation packages typically used in radiotherapy, however, a realistic representation of the geometry of the patient cannot be used, which compromises the accuracy of the results. In this work, an algorithm for the description of geometries based on CT images of patients, developed to be used with the Monte Carlo simulation package PENELOPE, is tested by simulating the dose distribution produced by a photon beam of 10 MV. The geometry simulated was based on CT images of a planning of prostate cancer. The volumes of interest in the treatment were adequately represented in the simulation geometry, allowing the algorithm to be used in verification of doses in radiotherapy treatments. (author)
Monte Carlo study of electron irradiation effect on YBCO dpa profiles
International Nuclear Information System (INIS)
Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.; Van Espen, P.
2011-01-01
The Monte Carlo assisted Classical Method (MCCM) consists of a calculation procedure for determining the displacements per atom (dpa) distribution in solid materials. This algorithm allows studying gamma and electron irradiation damage in different materials. It is based on classical theories of electron elastic scattering and the use of Monte Carlo simulation for the physical processes involved. The present study deals with the Monte Carlo simulation of electron irradiation effects on YBa2Cu3O7-x (YBCO) slabs using the MCNPX code system. Displacements per atom distributions are obtained through the MCCM for electron irradiation up to 10 MeV. In-depth dpa profiles for electrons and positrons are obtained and analyzed. Also, the dpa distributions are calculated for each atomic species in the material. All the results are discussed in the present contribution. (Author)
Comparison of calculational methods for liquid metal reactor shields
International Nuclear Information System (INIS)
Carter, L.L.; Moore, F.S.; Morford, R.J.; Mann, F.M.
1985-09-01
A one-dimensional comparison is made between Monte Carlo (MCNP), discrete ordinates (ANISN), and diffusion theory (MlDX) calculations of neutron flux and radiation damage from the core of the Fast Flux Test Facility (FFTF) out to the reactor vessel. Diffusion theory was found to be reasonably accurate for the calculation of both total flux and radiation damage. However, for large distances from the core, the calculated flux at very high energies is low by an order of magnitude or more when the diffusion theory is used. Particular emphasis was placed in this study on the generation of multitable cross sections for use in discrete ordinates codes that are self-shielded, consistent with the self-shielding employed in the generation of cross sections for use with diffusion theory. The Monte Carlo calculation, with a pointwise representation of the cross sections, was used as the benchmark for determining the limitations of the other two calculational methods. 12 refs., 33 figs
Topological zero modes in Monte Carlo simulations
International Nuclear Information System (INIS)
Dilger, H.
1994-08-01
We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
The Lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
Sjoestrand, T.
1982-03-01
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system’s belief about the utility function. Critical to these applications is the acquisition of a prior distribution about the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
General purpose code for Monte Carlo simulations
International Nuclear Information System (INIS)
Wilcke, W.W.
1983-01-01
A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general purpose computer, operating on a vector describing the time dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations
Autocorrelations in hybrid Monte Carlo simulations
International Nuclear Information System (INIS)
Schaefer, Stefan; Virotta, Francesco
2010-11-01
Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge; however, all observables are affected to various degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Uzhinskij, V.V.
1993-01-01
Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
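Two of the generators listed above, the inverse transformation method and the ejection (rejection) method, are compact enough to illustrate directly. The exponential target and the half-normal-with-exponential-envelope pairing below are textbook choices, not necessarily the examples used in the lectures.

```python
import math
import random

def sample_exponential_inverse(rng, lam):
    """Inverse transformation method: if U ~ Uniform(0,1), then
    X = -ln(1 - U)/lam has CDF 1 - exp(-lam * x)."""
    return -math.log(1.0 - rng.random()) / lam

def sample_halfnormal_rejection(rng):
    """Ejection (rejection) method: sample |N(0,1)| using an Exp(1) envelope.
    The envelope constant is c = sqrt(2e/pi), giving acceptance
    probability exp(-(x - 1)^2 / 2) at a proposed point x."""
    while True:
        x = sample_exponential_inverse(rng, 1.0)
        if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x
```

The rejection loop accepts about 76% of proposals (1/c), a typical trade-off between envelope simplicity and efficiency.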
Sequential Monte Carlo with Highly Informative Observations
Del Moral, Pierre; Murray, Lawrence M.
2014-01-01
We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
Monte Carlo codes use in neutron therapy
International Nuclear Information System (INIS)
Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.
1998-01-01
Monte Carlo calculation codes allow to study accurately all the parameters relevant to radiation effects, like the dose deposition or the type of microscopic interactions, through one by one particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutrons irradiations. (authors)
Quantum Monte Carlo calculations of light nuclei
International Nuclear Information System (INIS)
Pandharipande, V. R.
1999-01-01
Quantum Monte Carlo methods provide an essentially exact way to calculate various properties of nuclear bound, and low energy continuum states, from realistic models of nuclear interactions and currents. After a brief description of the methods and modern models of nuclear forces, we review the results obtained for all the bound, and some continuum states of up to eight nucleons. Various other applications of the methods are reviewed along with future prospects
Monte-Carlo simulation of electromagnetic showers
International Nuclear Information System (INIS)
Amatuni, Ts.A.
1984-01-01
The universal ELSS-1 program for Monte Carlo simulation of high energy electromagnetic showers in homogeneous absorbers of arbitrary geometry is written. The major processes and effects of electron and photon interaction with matter, particularly the Landau-Pomeranchuk-Migdal effect, are taken into account in the simulation procedures. The simulation results are compared with experimental data. Some characteristics of shower detectors and electromagnetic showers for energies up to 1 TeV are calculated
Monte Carlo simulation of Touschek effect
Directory of Open Access Journals (Sweden)
Aimin Xiao
2010-07-01
We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.