WorldWideScience

Sample records for carlo based depletion

  1. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated through fuel burnup. This paper works toward an understanding of the statistical implications in Monte Carlo depletions, including both the statistical bias and the statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increasing the number of individual Monte Carlo histories; 2) increasing the number of time steps; 3) running additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
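
    A minimal sketch of the batch depletion idea, in Python with purely illustrative numbers (the depletion model, rates and noise level are stand-ins, not taken from the paper): run several independent depletion cases and take statistics over them, so that both the local and the propagated statistical error end up in the observed spread.

```python
import numpy as np

def mc_depletion_case(n_steps, seed):
    """One independent Monte Carlo depletion run (toy stand-in).

    A single nuclide is depleted over n_steps; the 1% noise stands in for
    the statistical error of the tallied one-group reaction rate.
    """
    rng = np.random.default_rng(seed)
    n, history = 1.0, []
    for _ in range(n_steps):
        rate = 0.1 * (1.0 + 0.01 * rng.standard_normal())
        n *= np.exp(-rate)                 # depletion over one time step
        history.append(n)
    return np.array(history)

# Batch depletion: statistics over independent cases capture the overall
# statistical error, local plus propagated.
batches = np.array([mc_depletion_case(10, seed) for seed in range(20)])
mean = batches.mean(axis=0)
stderr = batches.std(axis=0, ddof=1) / np.sqrt(batches.shape[0])
print(mean[-1], stderr[-1])   # end-of-life density and its overall error
```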

  2. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high-variance estimator for the scalar flux with a low-order approximation to the analytical solution of the depletion equation.
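
    The benefit of folding the flux estimate into the analytical depletion solution can be seen in one line of algebra: for a single nuclide, dN/dt = -σφN has the solution N(t+Δt) = N(t)·exp(-σφΔt), which stays positive no matter how large σφΔt is, whereas a linearized update N(1-σφΔt) can go negative. A hypothetical sketch with a strong absorber (Gd-157-like numbers, chosen only to exaggerate the effect):

```python
import numpy as np

sigma_a = 2.5e-19   # Gd-157-like thermal absorption cross section, cm^2
phi = 3.0e14        # scalar-flux estimate from an MC tally, n/(cm^2 s)
dt = 86400.0        # one-day depletion step, s
n0 = 1.0e20         # initial number density, atoms/cm^3

# A linearized update can drive the density negative over a long step,
# while the analytical solution of dN/dt = -sigma*phi*N never does.
n_linear = n0 * (1.0 - sigma_a * phi * dt)
n_exact = n0 * np.exp(-sigma_a * phi * dt)
print(n_linear, n_exact)    # negative vs. small positive
```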

  3. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements include the option of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations.
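
    The predictor-corrector algorithm named in (a) can be sketched generically. The following toy (the rate model and step length are invented for illustration, not MCOR's actual coupling) depletes with the beginning-of-step rate, re-evaluates the rate at the predicted end-of-step state, and redoes the step with the averaged rate:

```python
import numpy as np

def one_group_rate(n):
    """Stand-in for an MC transport solve returning the one-group reaction
    rate per atom (sigma*phi); here it simply hardens as the absorber burns."""
    return 1.0e-9 * (1.0 + 0.5 * (1.0 - n))   # 1/s, illustrative

def predictor_corrector_step(n, dt):
    r0 = one_group_rate(n)                    # transport at beginning of step
    n_pred = n * np.exp(-r0 * dt)             # predictor: deplete with BOS rate
    r1 = one_group_rate(n_pred)               # transport at predicted EOS state
    return n * np.exp(-0.5 * (r0 + r1) * dt)  # corrector: averaged rate

n = 1.0
for _ in range(5):
    n = predictor_corrector_step(n, dt=2.6e6)   # ~30-day steps
print(n)
```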

  5. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    International Nuclear Information System (INIS)

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-01-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code.
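
    The fission-yield selection logic described above reduces to a small decision rule; a sketch, with invented tally values:

```python
# Integral fission rates tallied by the transport code in the three energy
# ranges for which CINDER90 carries yield sets (values are illustrative).
fission_rate = {"thermal": 8.2e12, "fast": 1.5e12, "high": 3.0e9}

def select_yield_set(rates):
    """Pick the yield set for the range containing the majority of fissions."""
    return max(rates, key=rates.get)

print(select_yield_set(fission_rate))   # -> "thermal" for this spectrum
```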

  6. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    International Nuclear Information System (INIS)

    Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  7. Monte Carlo simulation in UWB1 depletion code

    International Nuclear Information System (INIS)

    Lovecky, M.; Prehradny, J.; Jirickova, J.; Skoda, R.

    2015-01-01

    The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) allow the compensation of the initial reactivity excess of nuclear fuel and result in an increase of fuel cycle lengths with higher enriched fuels. The paper describes the depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, the rare earth oxides are divided into two equally numerous groups: suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides marked as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article.

  8. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
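
    The transport side of the strategy can be caricatured in a few lines. The serial toy below (a stand-in, not RMC's actual algorithm or message layer) gives each domain its own particle bank, tracks histories inside a domain, and posts boundary crossers to the neighbour's bank until every history has terminated:

```python
import random
from collections import deque

random.seed(1)
N_DOMAINS = 4
# Each domain processor holds its own particle bank (here: positions in [0,1]).
banks = [deque(random.random() for _ in range(50)) for _ in range(N_DOMAINS)]

while any(banks):
    for dom in range(N_DOMAINS):
        outbox = []                      # particles to hand to neighbours
        while banks[dom]:
            x = banks[dom].popleft()
            while True:                  # track the history inside this domain
                x += random.gauss(0.0, 0.2)
                if x < 0.0 or x > 1.0:   # leaked/absorbed: history ends
                    break
                if random.random() < 0.1:            # crossed a domain boundary
                    nb = dom + random.choice((-1, 1))
                    if 0 <= nb < N_DOMAINS:
                        outbox.append((nb, x))
                    break
        for nb, x in outbox:             # stand-in for message delivery
            banks[nb].append(x)
print("all histories completed")
```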

  9. Monte Carlo Depletion with Critical Spectrum for Assembly Group Constant Generation

    International Nuclear Information System (INIS)

    Park, Ho Jin; Joo, Han Gyu; Shim, Hyung Jin; Kim, Chang Hyo

    2010-01-01

    The conventional two-step procedure has been used in practical nuclear reactor analysis. In this procedure, a deterministic assembly transport code such as HELIOS or CASMO is normally used to generate the multigroup flux distribution to be used in few-group cross section generation. Recently, accuracy issues have arisen related to the resonance treatment or the double heterogeneity (DH) treatment for VHTR fuel blocks. In order to mitigate these accuracy issues, Monte Carlo (MC) methods can be used as an alternative way to generate few-group cross sections, because the accuracy of the MC calculations benefits from the ability to use continuous-energy nuclear data and detailed geometric information. In an earlier work, the conventional methods of obtaining multigroup cross sections and the critical spectrum were implemented in the McCARD Monte Carlo code. However, that work was not complete, in that the critical spectrum was not reflected in the depletion calculation. The purpose of this study is to develop a method to apply the critical spectrum to MC depletion calculations to correct for the leakage effect in the depletion calculation, and then to examine the MC based group constants within the two-step procedure by comparing the two-step solution with the direct whole-core MC depletion result.
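
    The correction itself amounts to collapsing cross sections with the critical spectrum rather than the infinite-medium spectrum before they reach the depletion solver. A two-group sketch with invented numbers:

```python
import numpy as np

# Infinite-medium MC spectrum and a leakage-corrected critical spectrum for
# the same assembly (2 coarse groups, illustrative numbers).
phi_inf = np.array([0.40, 0.60])     # fast, thermal
phi_crit = np.array([0.45, 0.55])    # critical spectrum
sigma_a = np.array([0.010, 0.100])   # macroscopic absorption, 1/cm

def collapse(sigma, phi):
    """One-group cross section collapsed with a given weighting spectrum."""
    return float(np.dot(sigma, phi) / phi.sum())

# Feeding the critical-spectrum-collapsed cross section to the depletion
# solver corrects the burnup calculation for the leakage effect.
print(collapse(sigma_a, phi_inf), collapse(sigma_a, phi_crit))
```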

  10. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

    Highlights: • The GPT method allows calculating the sensitivity coefficients to any perturbation. • The full Jacobian of sensitivities, of cross sections (XS) to concentrations, may be obtained. • The time-dependent XS is obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality, however, both quantities change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on the collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in the time-dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
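
    The core of the approach is that one transport solve plus GPT tallies yields d(ln σ)/d(ln N) for each nuclide, so cross sections can be reconstructed to first order at any point inside the step. A toy sketch (sensitivities, rates and the depletion update are all illustrative, not Serpent output):

```python
import numpy as np

# One MC run supplies the BOS one-group cross section and, via GPT-style
# tallies, its sensitivities to the nuclide concentrations (illustrative).
sigma0 = 45.0                            # barns, beginning of step
n0 = np.array([1.0e21, 5.0e19])          # BOS concentrations of two nuclides
sens = np.array([-0.30, 0.05])           # d(ln sigma)/d(ln N) per nuclide

def sigma_of(n):
    """First-order reconstruction of the cross section at the current
    concentrations, with no new transport solution."""
    return sigma0 * (1.0 + np.dot(sens, (n - n0) / n0))

# Substep loop: deplete on a fine grid, refreshing sigma from the sensitivities.
n = n0.copy()
for _ in range(10):                      # 10 substeps inside one MC time step
    n = n * np.exp(-1.0e-2 * sigma_of(n) / sigma0)   # toy depletion update
print(sigma_of(n))
```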

  11. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tippayakul, C.; Ivanov, K. [Pennsylvania State Univ., Univ. Park (United States); Misu, S. [AREVA NP GmbH, An AREVA and SIEMENS Company, Erlangen (Germany)

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, on-line burnup cross section generation by the Monte Carlo calculation is implemented in the improved version, instead of using a burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents comparisons of the results of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k-inf, fission rate distributions and isotopic contents. (authors)

  12. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

    A new formulation is presented, aimed at quantifying the uncertainties of Monte Carlo (MC) tallies such as k_eff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and at examining their propagation behaviour as a function of the depletion time step (DTS). It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, the uncertainties of microscopic cross sections and nuclide number densities, and the cross correlations between them; the contributions of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from the uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of that DTS, and these are determined by computing correlation coefficients between these two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances, as well as k_inf and its variance, as a function of DTS, and demonstrated the applicability of the new formulation to the uncertainty propagation analysis that needs to accompany MC depletion computations. (authors)
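
    The variance decomposition can be exercised numerically: with replica estimates of a microscopic reaction rate R and a number density N, first-order propagation splits var(R·N) into a term from R, a term from N, and a cross-correlation term. A sketch with synthetic replicas (relative errors and correlation invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Replica estimates (e.g. from independent MC depletion runs) of a microscopic
# reaction rate R and a number density N at the start of a depletion time step;
# 2% and 1% relative errors with correlation coefficient -0.5 (all invented).
cov = [[0.02**2, -0.5 * 0.02 * 0.01],
       [-0.5 * 0.02 * 0.01, 0.01**2]]
rel = rng.multivariate_normal([0.0, 0.0], cov, size=50)
R = 1.0e-9 * (1.0 + rel[:, 0])
N = 1.0e21 * (1.0 + rel[:, 1])

# First-order decomposition of var(R*N) into the sources the formulation
# identifies: var(R), var(N) and their cross-correlation.
Rm, Nm = R.mean(), N.mean()
c = np.cov(R, N)
var_from_R = Nm**2 * c[0, 0]
var_from_N = Rm**2 * c[1, 1]
var_cross = 2.0 * Rm * Nm * c[0, 1]
print(var_from_R + var_from_N + var_cross, np.var(R * N, ddof=1))
```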

  13. Monte Carlo depletion analysis of SMART core by MCNAP code

    International Nuclear Information System (INIS)

    Jung, Jong Sung; Sim, Hyung Jin; Kim, Chang Hyo; Lee, Jung Chan; Ji, Sung Kyun

    2001-01-01

    Depletion analysis of SMART, a small-sized advanced integral PWR under development by KAERI, is conducted using the Monte Carlo (MC) depletion analysis program MCNAP. The results are compared with those of the CASMO-3/MASTER nuclear analysis. The difference between MASTER and MCNAP in the k_eff prediction is about 600 pcm at BOC, and becomes smaller as the core burnup increases. The maximum difference between the two predictions of the fuel assembly (FA) normalized power distribution is about 6.6% radially and 14.5% axially, but the differences are observed to lie within the standard deviation of the MC estimations.

  14. The development of depletion program coupled with Monte Carlo computer code

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan

    2015-01-01

    The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called the MCDL code (Monte Carlo Depletion for Light Water Reactor). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA Implicit Runge-Kutta (IRK) method after receiving the neutron flux, one-group reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core from the calculation results of the MCNP5 code. The calculation of beryllium poisoning and cooling time is also integrated in the code. To verify and validate the MCDL code, high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel assemblies of the VVR-M2 type, and cores of the Dalat Nuclear Research Reactor (DNRR) with 89 fresh HEU fuel assemblies and 92 fresh LEU fuel assemblies, have been investigated and compared with the results calculated by the SRAC code and the MCNP-REBUS linkage system code. The results show good agreement between the calculated data of the MCDL code and the reference codes. (author)
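
    The Radau IIA IRK solver is well suited to such stiff systems; SciPy's 'Radau' method is a fifth-order Radau IIA scheme, so the structure of such a depletion step can be sketched on a toy three-nuclide chain (cross sections, decay constants and the flux below are illustrative, not MCDL data):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-nuclide chain A -> B -> C under constant one-group flux; the real
# MCDL system couples 21 actinides and 50 fission products the same way.
phi = 3.0e14                             # n/(cm^2 s) from the transport solve
sig = np.array([1e-22, 5e-23, 0.0])      # removal cross sections, cm^2
lam = np.array([0.0, 1e-6, 1e-7])        # decay constants, 1/s

def bateman(t, n):
    removal = (sig * phi + lam) * n
    production = np.concatenate(([0.0], removal[:-1]))  # feeds next in chain
    return production - removal

sol = solve_ivp(bateman, (0.0, 30 * 86400.0), [1e21, 0.0, 0.0],
                method="Radau", rtol=1e-10, atol=1e6)
print(sol.y[:, -1])                      # densities after a 30-day step
```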

  15. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  17. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses a data library which is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, and is generally done by coupling transport calculations and point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S, and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including Transmutation Trajectory Analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining the instantaneous quantities of radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Through calculations compared with ORIGEN-2, the validity of DEPTH in point-depletion calculations is proved. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the DEPTH code's performance in coupling with the RMC Monte Carlo code.
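
    For a small system, the matrix exponential approach DEPTH implements can be imitated directly with a dense expm; CRAM evaluates the same exponential through a rational approximation tuned to the extreme stiffness of real burnup matrices. A toy sketch (matrix entries illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Burnup matrix for a toy chain A -> B -> C (1/s); point-depletion codes
# build this from the depletion library and the MC-supplied reaction rates.
A = np.array([[-3.0e-8,  0.0,     0.0],
              [ 3.0e-8, -1.5e-8,  0.0],
              [ 0.0,     1.5e-8, -1.0e-7]])
n0 = np.array([1e21, 0.0, 0.0])
dt = 30 * 86400.0                        # 30-day depletion step, s

# Matrix-exponential step N(t+dt) = expm(A*dt) @ N(t); CRAM and TTA are
# alternative ways of evaluating the same solution.
n1 = expm(A * dt) @ n0
print(n1)
```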

  18. The enhancements and testing for the MCNPX depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; Hendricks, J. S.; Anghaie, S.

    2008-01-01

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model true system physics and better track the evolution of the temporal nuclide inventory by simulating the actual physical process. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte Carlo-linked depletion capability in a single Monte Carlo code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. We describe here the depletion methodology dating from the original linking of MONTEBURNS and MCNP to the first public release of the integrated capability (MCNPX 2.6.B, June 2006) that has been reported previously. Then we further detail the many new depletion capability enhancements since then, leading to the present capability. The H.B. Robinson benchmark calculation results are also reported. The new MCNPX depletion capability enhancements include: (1) allowing the modeling of as large a system as computer memory capacity permits; (2) tracking every fission product available in ENDF/B-VII.0; (3) enabling depletion in repeated-structures geometries, such as repeated arrays of fuel pins; (4) including metastable isotopes in burnup; and (5) manually changing the concentrations of key isotopes during different time steps to simulate changing reactor control conditions, such as dilution of poisons to maintain criticality during burnup. These enhancements allow better detail to model the true system physics and also improve the robustness of the capability. The H.B. Robinson benchmark calculation was completed in order to determine the accuracy of the depletion solution. Temporal nuclide computations of key actinides and fission products are compared to the results of other

  19. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have been appearing that couple Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of the depletion of a PWR-like fuel cell. This technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author)
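
    Correlated sampling in miniature: one set of histories, sampled with the unperturbed cross section, is reweighted by the likelihood ratio of the perturbed flight kernel, so the perturbed answer comes for free. For a purely absorbing slab the weight of a transmitted history is exp(-(sig1-sig0)*L), and the reweighted tally reproduces exp(-sig1*L) exactly (a toy sketch, not the TRIPOLI-4 implementation):

```python
import math
import random

random.seed(7)
sig0, sig1 = 1.0, 1.1        # unperturbed / perturbed total XS, 1/cm
L = 3.0                      # purely absorbing slab thickness, cm

t0 = t1 = 0.0
n_hist = 100000
for _ in range(n_hist):
    s = -math.log(random.random()) / sig0     # free flight sampled with sig0
    if s > L:                                 # transmitted in the base problem
        t0 += 1.0
        t1 += math.exp(-(sig1 - sig0) * L)    # likelihood ratio of survival
# correlated estimate of the perturbed transmission vs. the exact answer
print(t0 / n_hist, t1 / n_hist, math.exp(-sig1 * L))
```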

  20. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  1. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
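
    The error-reduction mechanism can be demonstrated without any transport code: sub-stepping re-evaluates the reaction rate from the evolving state inside the step, so the discretization error shrinks with the number of sub-steps at negligible cost. A toy sketch (the rate model is invented; in the actual scheme the variation comes from nuclide densities and TH feedback):

```python
import numpy as np

def rate(n):
    """Stand-in for the reaction rate's dependence on the nuclide density
    (and, in the real scheme, on the TH conditions)."""
    return 1.0e-2 * (0.5 + 0.5 * n)

def deplete(n0, dt, substeps):
    n = n0
    for _ in range(substeps):
        n *= np.exp(-rate(n) * dt / substeps)   # rate refreshed per sub-step
    return n

n_ref = deplete(1.0, 100.0, 10000)              # fine-grained reference
for m in (1, 4, 16):
    print(m, abs(deplete(1.0, 100.0, m) - n_ref))   # error falls with m
```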

  2. Application of backtracking algorithm to depletion calculations

    International Nuclear Information System (INIS)

    Wu Mingyu; Wang Shixi; Yang Yong; Zhang Qiang; Yang Jiayin

    2013-01-01

    Based on the theory of the linear chain method for analytical depletion calculations, the burnup matrix is decoupled by a divide-and-conquer strategy, forming linear chains with the Markov property. The density, activity and decay heat of every nuclide in a chain can then be calculated from analytical solutions. Every possible reaction path of a nuclide must be considered during the linear chain construction, so an algorithm is needed that covers all reaction paths and searches them automatically according to the problem description and precision restrictions, while preserving calculation precision and efficiency. Through analysis and comparison of several kinds of searching algorithms, the backtracking algorithm with depth-first search (DFS) was selected to establish and calculate the linear chains, forming an algorithm which can solve the depletion problem adaptively and with high fidelity. The complexity of the solution space and time was analyzed by taking into account the depletion process and the characteristics of the backtracking algorithm. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burnup problem of the first core of the China Experimental Fast Reactor (CEFR), and a preliminary verification and validation of the program were performed. (authors)
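
    The chain-construction step is a classic backtracking DFS over the transmutation graph, pruning branches whose cumulative branching ratio falls below the precision cutoff. A minimal sketch on an invented graph:

```python
# Toy transmutation graph: nuclide -> list of (daughter, branching ratio)
# covering decay and neutron reactions (structure and numbers illustrative).
GRAPH = {
    "U238":    [("U239", 1.0)],
    "U239":    [("Np239", 1.0)],
    "Np239":   [("Pu239", 1.0)],
    "Pu239":   [("Pu240", 0.6), ("fission", 0.4)],   # capture vs fission
    "Pu240":   [],
    "fission": [],
}

def linear_chains(start, cutoff=1e-3, chain=(), weight=1.0):
    """Backtracking DFS that enumerates linear chains, pruning any path whose
    cumulative branching weight falls below the precision cutoff."""
    if weight < cutoff:
        return                            # backtrack: path below precision
    chain = chain + (start,)
    daughters = GRAPH.get(start, [])
    if not daughters:
        yield chain, weight
        return
    for daughter, br in daughters:
        yield from linear_chains(daughter, cutoff, chain, weight * br)

for chain, w in linear_chains("U238"):
    print(" -> ".join(chain), w)
```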

  3. Development of a practical Monte Carlo based fuel management system for the Penn State University Breazeale Research Reactor (PSBR)

    International Nuclear Information System (INIS)

    Tippayakul, Chanatip; Ivanov, Kostadin; Frederick Sears, C.

    2008-01-01

    A practical fuel management system for the Pennsylvania State University Breazeale Research Reactor (PSBR), based on advanced Monte Carlo methodology, was developed in this research from the existing fuel management tool. Several modeling improvements were implemented in the old system. The improved fuel management system can now utilize burnup-dependent cross section libraries generated specifically for PSBR fuel, and it is also able to update the cross sections of these libraries automatically by Monte Carlo calculation. Consideration was given to balancing the computation time and the accuracy of the cross section update; thus only a limited number of isotopes considered 'important' are calculated and updated by the scheme. Moreover, the depletion algorithm of the existing fuel management tool was changed from a predictor-only scheme to a predictor-corrector depletion scheme, to account more accurately for burnup spectrum changes during the burnup step. An intermediate verification of the fuel management system was performed against HELIOS to assess the correctness of the newly implemented schemes. It was found that the agreement of both codes is good when the same energy released per fission (Q values) is used. Furthermore, to be able to model the reactor at various temperatures, the fuel management tool automatically utilizes continuous-energy cross sections generated at different temperatures. Other useful capabilities were also added to the fuel management tool to make it practical and easy to use. As part of the development, a hybrid nodal diffusion/Monte Carlo calculation was devised to speed up the Monte Carlo calculation by providing a better-converged initial source distribution from the nodal diffusion calculation. Finally, the fuel management system was validated against measured data using several actual PSBR core loadings. The agreement of the predicted core

  4. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  5. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or 'tool of last resort' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

  6. Coupling effects of depletion interactions in a three-sphere colloidal system

    International Nuclear Information System (INIS)

    Chen Ze-Shun; Dai Gang; Gao Hai-Xia; Xiao Chang-Ming

    2013-01-01

    In a three-sphere system, the middle sphere is acted upon by two opposite depletion forces from the other two spheres. It is found that, in this system, the two depletion forces are coupled with each other and result in a strengthened depletion force. The difference between the depletion force in the three-sphere system and those in its two corresponding two-sphere systems is therefore introduced to describe the coupling effect of the depletion interactions. The numerical results obtained by Monte Carlo simulations show that this coupling effect is affected by both the concentration of small spheres and the geometrical confinement. Meanwhile, it is also found that the mechanisms of the coupling effect and of the geometrical effect on the depletion force are the same. (interdisciplinary physics and related areas of science and technology)

  7. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients of the concerned parameter, the polynomial estimate of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring, and the method overcomes the disadvantages of the traditional one. (authors)
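
    The search step itself is elementary once the derivatives are tallied: build the polynomial k_eff(p) from the single run and solve k_eff(p) = 1. A sketch for a boron-concentration search with invented coefficients:

```python
import numpy as np

# From a single criticality run with perturbation tallies: k_eff at the
# nominal boron concentration plus its derivatives (values illustrative).
c0 = 1200.0                               # ppm
k0, dk, d2k = 1.01500, -9.0e-5, 1.0e-8    # k_eff, dk/dc, d2k/dc2

# Polynomial estimate k(c) = k0 + dk*(c-c0) + 0.5*d2k*(c-c0)^2; solve k(c)=1.
roots = np.roots([0.5 * d2k, dk, k0 - 1.0])
critical = c0 + roots[np.argmin(np.abs(roots))].real
print(critical)                           # estimated critical concentration
```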

  8. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in k_eff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.

  10. Feasibility Study of Core Design with a Monte Carlo Code for APR1400 Initial core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinsun; Chang, Do Ik; Seong, Kibong [KEPCO NF, Daejeon (Korea, Republic of)

    2014-10-15

    The Monte Carlo calculation is becoming more popular and useful nowadays due to the rapid progress in computing power and parallel calculation techniques. There have recently been many attempts to analyze a commercial core with a Monte Carlo transport code using this enhanced computer capability. In this paper, a Monte Carlo calculation of the APR1400 initial core has been performed and the results are compared with those of a conventional deterministic code to find out the feasibility of core design using a Monte Carlo code. SERPENT, a 3D continuous-energy Monte Carlo reactor physics burnup calculation code, is used for this purpose, and the KARMA-ASTRA code system is used as the deterministic code for comparison. A preliminary investigation of the feasibility of commercial core design with a Monte Carlo code was performed in this study. Simplified geometry modeling was used for the reactor core surroundings, and the reactor coolant model is based on a two-region model. The reactivity difference at the HZP ARO condition between the Monte Carlo code and the deterministic code is consistent, and the reactivity difference during the depletion could be reduced by adopting a realistic moderator temperature. The reactivity difference calculated at the HFP, BOC, ARO equilibrium condition was 180 ± 9 pcm, with the axial moderator temperature of the deterministic code. The computing time will at present be a significant burden for the application of a Monte Carlo code to commercial core design, even with parallel computing, because numerous core simulations are required for an actual loading pattern search. One remedy would be a combination of the Monte Carlo code and the deterministic code to generate the physics data. The comparison of physics parameters with sophisticated moderator temperature modeling and depletion will be performed in a further study.

  11. Benchmarking time-dependent neutron problems with Monte Carlo codes

    International Nuclear Information System (INIS)

    Couet, B.; Loomis, W.A.

    1990-01-01

    Many nuclear logging tools measure the time dependence of a neutron flux in a geological formation to infer important properties of the formation. The complex geometry of the tool and the borehole within the formation does not permit exact deterministic modelling of the neutron flux behaviour. While this exact simulation is possible with Monte Carlo methods, the computation time does not facilitate the quick turnaround of results useful for design and diagnostic purposes. Nonetheless, a simple model based on the diffusion-decay equation for the flux of neutrons of a single energy group can be useful in this situation. A combined approach, where a Monte Carlo calculation benchmarks a deterministic model in terms of the diffusion constants of the neutrons propagating in the media and their flux depletion rates, thus offers the possibility of quick calculation with assurance as to accuracy. We exemplify this approach with the Monte Carlo benchmarking of a logging tool problem, showing standoff and bedding response. (author)
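
    The benchmarking loop reduces to fitting the diffusion-decay model's exponential die-away to the time-binned MC tally. A sketch on synthetic counts (die-away time, binning and noise model invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Time-binned thermal-neutron counts from a tool/borehole MC simulation
# (synthetic here): a single-exponential die-away plus counting noise.
t = np.arange(20) * 25.0                     # microseconds
counts = rng.poisson(1e5 * np.exp(-t / 180.0)).astype(float)

# The one-group diffusion-decay model predicts phi(t) ~ exp(-t/tau); the
# fitted tau benchmarks the deterministic model against the MC result.
slope, _ = np.polyfit(t, np.log(counts), 1)
print(-1.0 / slope)                          # recovered die-away time, ~180 us
```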

  12. Local condensate depletion at trap center under strong interactions

    Science.gov (United States)

    Yukalov, V. I.; Yukalova, E. P.

    2018-04-01

    Cold trapped Bose-condensed atoms, interacting via hard-sphere repulsive potentials are considered. Simple mean-field approximations show that the condensate distribution inside a harmonic trap always has the shape of a hump with the maximum condensate density occurring at the trap center. However, Monte Carlo simulations at high density and strong interactions display the condensate depletion at the trap center. The explanation of this effect of local condensate depletion at trap center is suggested in the frame of self-consistent theory of Bose-condensed systems. The depletion is shown to be due to the existence of the anomalous average that takes into account pair correlations and appears in systems with broken gauge symmetry.

  13. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode made in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous-energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
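
    One of the simplest such derivations is a direct Fick's-law estimate from the standard tallies: D = -J_net/(dφ/dx), with the net current from the partial-current tallies and the flux gradient from neighbouring cells. A sketch with invented tally values (a generic illustration, not one of the paper's four models specifically):

```python
# Surface and cell tallies from a continuous-energy MC run on a unit cell
# (all values illustrative, per source neutron).
j_plus, j_minus = 0.0120, 0.0105       # partial currents across a plane
phi_left, phi_right = 1.000, 0.940     # cell-averaged scalar flux
dx = 2.0                               # distance between cell centres, cm

# Fick's law: J_net = -D * dphi/dx  =>  D = -J_net / (dphi/dx).
j_net = j_plus - j_minus
dphi_dx = (phi_right - phi_left) / dx
D = -j_net / dphi_dx
print(D)                               # diffusion coefficient estimate, cm
```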

  14. Depleted Reactor Analysis With MCNP-4B

    International Nuclear Information System (INIS)

    Caner, M.; Silverman, L.; Bettan, M.

    2004-01-01

    Monte Carlo neutronics calculations are mostly done for fresh reactor cores. There is today ongoing activity in the development of Monte Carlo plus burnup code systems, made possible by the fast gains in computer processor speeds. In this work we investigate the use of MCNP-4B for the calculation of a depleted core of the Soreq reactor (IRR-1). The number densities as a function of burnup were taken from WIMS-D/4 cell code calculations. This particular code coupling has been implemented before. The Monte Carlo code MCNP-4B calculates the coupled transport of neutrons and photons for complicated geometries. We have done neutronics calculations of the IRR-1 core with the WIMS and CITATION codes in the past. Also, we have developed an MCNP model of the IRR-1 standard fuel for a criticality safety calculation of a spent fuel storage pool.

  15. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous-energy Monte Carlo multigroup constants generation code MCMC has been developed. The track length scheme is used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the generalized equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants, and the super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating the homogenized multigroup constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy in continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
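
    The track-length scheme itself is compact: group constants are ratios of reaction-weighted to total track length, and a coarse-group collapse reuses the same flux weights. A sketch with invented tallies:

```python
import numpy as np

# Track-length tallies accumulated during the MC run (illustrative): total
# track length and reaction-weighted track length per fine group.
track = np.array([12.0, 34.0, 51.0, 8.0])       # flux * volume per group
reaction = np.array([0.12, 0.68, 2.55, 0.96])   # sigma-weighted track length

# Fine-group constants, then a 2-group collapse with the same flux weights.
sigma_fine = reaction / track
coarse = [slice(0, 2), slice(2, 4)]
sigma_2g = np.array([reaction[s].sum() / track[s].sum() for s in coarse])
print(sigma_fine, sigma_2g)
```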

  16. Transformation of human osteoblast cells to the tumorigenic phenotype by depleted uranium-uranyl chloride.

    OpenAIRE

    Miller, A C; Blakely, W F; Livengood, D; Whittaker, T; Xu, J; Ejnik, J W; Hamilton, M M; Parlette, E; John, T S; Gerstenberg, H M; Hsu, H

    1998-01-01

    Depleted uranium (DU) is a dense heavy metal used primarily in military applications. Although the health effects of occupational uranium exposure are well known, limited data exist regarding the long-term health effects of internalized DU in humans. We established an in vitro cellular model to study DU exposure. Microdosimetric assessment, determined using a Monte Carlo computer simulation based on measured intracellular and extracellular uranium levels, showed that few (0.0014%) cell nuclei...

  17. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation based Monte Carlo criticality search method is proposed. • The method obtains accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results’ accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of the criticality results. In this paper, a new perturbation based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies: it estimates the k-eff response function from the initial k-eff and the differential coefficient results, and solves polynomial equations to obtain the criticality search results. The new perturbation based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality search problems in density, enrichment and concentration are carried out. Results show that this method is quite promising in accuracy and efficiency, and has advantages compared with other criticality search methods.
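
    The final step described above, solving a polynomial built from the initial k-eff and its differential coefficients for the parameter change that gives k-eff = 1, can be sketched as below. The coefficient values are invented for illustration and do not come from RMC.

      import numpy as np

      # Hypothetical outputs of a single criticality run with perturbation
      # tallies: initial k-eff and first/second-order differential coefficients
      # of k-eff with respect to, e.g., a boron concentration.
      k0 = 1.02500
      c1, c2 = -0.0150, 0.0004

      # k(x0 + dx) ~ k0 + c1*dx + c2*dx**2; criticality requires k = 1, so
      # solve c2*dx**2 + c1*dx + (k0 - 1) = 0 and keep the smallest real root.
      roots = np.roots([c2, c1, k0 - 1.0])
      real = roots[np.isreal(roots)].real
      dx = real[np.argmin(np.abs(real))]
      print("estimated critical parameter change:", dx)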

  18. Development and validation of ALEPH Monte Carlo burn-up code

    International Nuclear Information System (INIS)

    Stankovskiy, A.; Van den Eynde, G.; Vidmar, T.

    2011-01-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK·CEN since 2004. Belonging to the category of shells coupling Monte Carlo transport (MCNP or MCNPX) and 'deterministic' depletion codes (ORIGEN-2.2), ALEPH possesses some unique features that distinguish it from other codes. The most important feature is full data consistency between the steady-state Monte Carlo and the time-dependent depletion calculations. Recent improvements of ALEPH concern the full implementation of general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII, JENDL-3.3). The upgraded version of the code is capable of treating isomeric branching ratios, neutron induced fission product yields, spontaneous fission yields and the energy release per fission recorded in ENDF-formatted data files. An alternative algorithm for the time evolution of nuclide concentrations has been added. A predictor-corrector mechanism and the calculation of nuclear heating are available as well. The validation of the code on the REBUS experimental programme results has been performed. The upgraded version of ALEPH has shown better agreement with the measured data than other codes, including the previous version of ALEPH. (authors)
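
    A predictor-corrector depletion step of the general kind mentioned above can be sketched as follows; the matrix-exponential form of the Bateman solution is standard, but the function names and the toy two-nuclide chain are hypothetical and do not reflect ALEPH's internals.

      import numpy as np
      from scipy.linalg import expm

      def burnup_step(n0, A_of_flux, phi_solver, dt):
          """One predictor-corrector depletion step (illustrative sketch).

          n0         -- nuclide number densities at beginning of step
          A_of_flux  -- builds the Bateman matrix (decay + flux*XS) for a flux
          phi_solver -- stands in for a Monte Carlo transport solve (hypothetical)
          dt         -- step length (s)
          """
          phi0 = phi_solver(n0)
          n_pred = expm(A_of_flux(phi0) * dt) @ n0   # predictor: BOS flux
          phi1 = phi_solver(n_pred)
          A_avg = 0.5 * (A_of_flux(phi0) + A_of_flux(phi1))
          return expm(A_avg * dt) @ n0               # corrector: averaged matrix

      # Toy usage with a two-nuclide chain and a constant stand-in flux.
      A = np.array([[-1e-4, 0.0], [1e-4, -1e-5]])
      n = burnup_step(np.array([1.0, 0.0]),
                      A_of_flux=lambda phi: phi * A,
                      phi_solver=lambda n: 1.0,
                      dt=3600.0)
      print(n)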

  19. Adding trend data to Depletion-Based Stock Reduction Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...

  20. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on short samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices over the two separate periods of 2002-2005 and 2006-2009.
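
    For reference, a minimal Hill estimator of the kind studied above can be written in a few lines; the k+1 order-statistics form is the textbook definition, and the Pareto test data are only a sanity check, not the paper's experiment.

      import numpy as np

      def hill_estimator(x, k):
          """Hill estimate of the tail exponent from the k largest observations."""
          tail = np.sort(np.abs(x))[-(k + 1):]        # k+1 largest order statistics
          logs = np.log(tail[1:]) - np.log(tail[0])   # log-spacings above x_(n-k)
          return 1.0 / logs.mean()

      # Toy check on Pareto data with true tail exponent alpha = 1.5:
      rng = np.random.default_rng(1)
      sample = rng.pareto(1.5, size=100_000) + 1.0
      print(hill_estimator(sample, k=1_000))          # should be near 1.5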

  1. Acceleration of the MCNP branch of the OCTOPUS depletion code system

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J. [Section Nuclear and Reactor Physics, ECN Nuclear Research, Petten (Netherlands)

    1998-09-01

    OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster postprocessing code. As a result, the overall computing time is reduced substantially. 11 refs.

  2. Acceleration of the MCNP branch of the OCTOPUS depletion code system

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J.

    1998-09-01

    OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster postprocessing code. As a result, the overall computing time is reduced substantially. 11 refs

  3. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

    An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can also be adjusted adaptively over time according to the uncertainty of the robot's pose by using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.
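
    The baseline MCL cycle that CEAMCL builds on (motion update, measurement weighting, resampling) can be sketched on a toy 1D corridor as below; the Gaussian measurement model and all numbers are invented, and the coevolution and species-clustering machinery of CEAMCL itself is not shown.

      import numpy as np

      rng = np.random.default_rng(0)

      def mcl_update(particles, weights, control, measurement):
          """One plain MCL step on a 1D corridor (toy sketch, not CEAMCL)."""
          # Motion update: propagate each pose hypothesis with a noisy control.
          particles = particles + control + rng.normal(0.0, 0.1, particles.size)
          # Measurement update: reweight by a Gaussian observation likelihood.
          weights = weights * np.exp(-0.5 * ((particles - measurement) / 0.5) ** 2)
          weights = weights / weights.sum()
          # Resampling: draw a new, equally weighted set proportional to weights.
          idx = rng.choice(particles.size, size=particles.size, p=weights)
          return particles[idx], np.full(particles.size, 1.0 / particles.size)

      particles = rng.uniform(0.0, 10.0, 500)    # initial pose hypotheses
      weights = np.full(500, 1.0 / 500)
      particles, weights = mcl_update(particles, weights, control=1.0, measurement=4.0)
      print("pose estimate:", particles.mean())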

  4. A lattice-based Monte Carlo evaluation of Canada Deuterium Uranium-6 safety parameters

    International Nuclear Information System (INIS)

    Kim, Yong Hee; Hartanto, Donny; Kim, Woo Song

    2016-01-01

    Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANada Deuterium Uranium (CANDU-6) reactor have been evaluated using the Monte Carlo method. For accurate analysis of the parameters, the Doppler broadening rejection correction scheme was implemented in the MCNPX code to account for the thermal motion of the heavy uranium-238 nucleus in neutron-uranium scattering reactions. In this work, a standard fuel lattice has been modeled and the fuel is depleted using MCNPX. The FTC value is evaluated for several burnup points, including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated using several cross-section libraries such as ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of an equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a very large number of neutron histories was considered in this work, and the standard deviation of the k-infinity values is only 0.5-1 pcm.
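
    A small helper of the following kind turns two k-eff values at different fuel temperatures into an FTC with a propagated 1-sigma uncertainty; the reactivity-difference formula is standard, but the numbers are illustrative and not taken from the paper.

      import math

      def fuel_temp_coefficient(k1, k2, dT, sig1=0.0, sig2=0.0):
          """FTC from k-eff at two fuel temperatures.

          Reactivity difference rho2 - rho1 = 1/k1 - 1/k2, divided by dT (K).
          Returns (FTC, 1-sigma uncertainty), both in reactivity per K.
          """
          ftc = (1.0 / k1 - 1.0 / k2) / dT
          var = (sig1 / k1**2) ** 2 + (sig2 / k2**2) ** 2
          return ftc, math.sqrt(var) / dT

      # Illustrative numbers (not from the paper); 1 pcm = 1e-5.
      ftc, sig = fuel_temp_coefficient(1.10500, 1.10410, dT=300.0,
                                       sig1=1e-5, sig2=1e-5)
      print(f"FTC = {ftc * 1e5:.3f} +/- {sig * 1e5:.3f} pcm/K")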

  5. Evaluation of tomographic-image based geometries with PENELOPE Monte Carlo

    International Nuclear Information System (INIS)

    Kakoi, A.A.Y.; Galina, A.C.; Nicolucci, P.

    2009-01-01

    The Monte Carlo method can be used to evaluate treatment planning systems or to determine dose distributions in radiotherapy planning due to its accuracy and precision. In the Monte Carlo simulation packages typically used in radiotherapy, however, a realistic representation of the geometry of the patient cannot be used, which compromises the accuracy of the results. In this work, an algorithm for the description of geometries based on CT images of patients, developed to be used with the Monte Carlo simulation package PENELOPE, is tested by simulating the dose distribution produced by a 10 MV photon beam. The simulated geometry was based on CT images of a prostate cancer treatment plan. The volumes of interest in the treatment were adequately represented in the simulation geometry, allowing the algorithm to be used in the verification of doses in radiotherapy treatments. (author)

  6. Evaluation of acute tryptophan depletion and sham depletion with a gelatin-based collagen peptide protein mixture

    DEFF Research Database (Denmark)

    Stenbæk, Dea Siggaard; Einarsdottir, H S; Goregliad-Fjaellingsdal, T

    2016-01-01

    Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures...

  7. Comparative evaluations of the Monte Carlo-based light propagation simulation packages for optical imaging

    Directory of Open Access Journals (Sweden)

    Lin Wang

    2018-01-01

    Monte Carlo simulation of light propagation in turbid media has been studied for years, and a number of software packages have been developed to handle this issue. However, it is hard to compare these simulation packages, especially for tissues with complex heterogeneous structures. Here, we first designed a group of mesh datasets generated by the Iso2Mesh software, and used them to cross-validate the accuracy and to evaluate the performance of four Monte Carlo-based simulation packages, including the Monte Carlo model of steady-state light transport in multi-layered tissues (MCML), the tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIMOS), the Molecular Optical Simulation Environment (MOSE), and Mesh-based Monte Carlo (MMC). The performance of each package was evaluated based on the designed mesh datasets. The merits and demerits of each package were also discussed. Comparative results showed that the TIMOS package provided the best performance, proving to be a reliable, efficient, and stable MC simulation package for users.

  8. Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma

    Science.gov (United States)

    Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl

    2016-10-01

    Helicon discharges provide an attractive radio-frequency driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron impact ionization of neutrals, charge exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as the Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by NSF CAREER Award PHY-1455210.

  9. Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats

    International Nuclear Information System (INIS)

    Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki; Minowa, Yosuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2010-01-01

    Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate scoring

  10. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors into treatment plans. The aim of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT that minimizes these errors. The strategy was to precalculate the dose matrices of beamlets with a Monte Carlo based method, followed by optimization of the beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided simple and accurate modeling of the treatment beams. The dose matrix calculations were validated through measurements in phantoms; agreement was better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform based optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases, and the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT were demonstrated. Systematic errors in the treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations. Discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated.
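
    Once the beamlet dose matrices are precalculated, the intensity optimization reduces to a constrained least-squares fit. A toy projected-gradient version, with a random stand-in dose matrix instead of Monte Carlo output and none of the paper's parallel machinery, might look like this:

      import numpy as np

      # Toy fluence-map optimization: a precalculated beamlet dose matrix D
      # maps beamlet weights w to voxel doses; fit a prescription d with w >= 0.
      rng = np.random.default_rng(0)
      D = rng.random((50, 8))          # 50 voxels, 8 beamlets (stand-in for MC doses)
      d = np.full(50, 2.0)             # prescribed dose per voxel (Gy)

      w = np.zeros(8)
      step = 1.0 / np.linalg.norm(D.T @ D, 2)   # safe step size (spectral norm)
      for _ in range(2000):
          grad = D.T @ (D @ w - d)              # gradient of 0.5*||D w - d||^2
          w = np.maximum(w - step * grad, 0.0)  # project onto w >= 0

      print("optimized beamlet weights:", np.round(w, 3))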

  11. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactors was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for a Once-Through-Then-Out case were investigated. • The ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower k-eff value compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher k-eff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.

  12. Continuous energy Monte Carlo method based homogenization multi-group constants calculation

    International Nuclear Information System (INIS)

    Li Mancang; Wang Kan; Yao Dong

    2012-01-01

    The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to the traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy in continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, making Monte Carlo codes versatile tools for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques, and the scattering event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B-N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motifs is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to complicated nuclear reactor core configurations to gain higher accuracy. (authors)
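
    The track length scheme mentioned above estimates a group constant as a reaction-rate tally divided by a flux tally, both accumulated from particle tracks. A toy version with synthetic tracks and a stand-in cross-section function (both hypothetical) is sketched below.

      import numpy as np

      # Track-length estimate of a one-group cross section from recorded
      # (track length, energy) pairs inside the homogenization region (toy data).
      rng = np.random.default_rng(0)
      tracks = rng.exponential(0.5, 10_000)        # track lengths (cm)
      energies = rng.uniform(1e-2, 2e7, 10_000)    # neutron energies (eV)

      def sigma_of_E(E):
          """Stand-in continuous-energy cross section (1/v-like, illustrative)."""
          return 0.05 + 10.0 / np.sqrt(E)

      # Flux estimate: sum of track lengths; reaction-rate estimate: tracks
      # weighted by the pointwise cross section. Their ratio is the group constant.
      sigma_g = (tracks * sigma_of_E(energies)).sum() / tracks.sum()
      print("group-collapsed cross section:", sigma_g, "1/cm")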

  13. A functional method for estimating DPA tallies in Monte Carlo calculations of Light Water Reactors

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2011-01-01

    There has been a growing need in recent years for the development of methodology to calculate radiation damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss the development and implementation of a dpa method using the Monte Carlo method for the transport calculations. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and fuel depletion, are assessed for radiation damage calculations, and its capability is demonstrated and compared to that of the Monte Carlo code MCNP for radiation damage calculations of a typical LWR configuration. (author)
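
    For orientation, the standard NRT displacement model and a flux-times-cross-section dpa rate can be sketched as follows; the group fluxes and displacement cross sections are invented, and this is a generic textbook recipe rather than the Serpent or MCNP implementation.

      import numpy as np

      def nrt_displacements(T_dam_eV, E_d_eV=40.0):
          """NRT model: displacements produced by a damage energy T_dam."""
          if T_dam_eV < E_d_eV:
              return 0.0
          if T_dam_eV < 2.0 * E_d_eV / 0.8:
              return 1.0
          return 0.8 * T_dam_eV / (2.0 * E_d_eV)

      # dpa rate = sum over groups of flux_g * sigma_dpa_g (toy two-group numbers).
      phi = np.array([1.0e14, 5.0e13])            # n/cm^2/s
      sigma_dpa = np.array([1.5e-21, 2.0e-22])    # displacement cross section, cm^2
      dpa_per_second = float(phi @ sigma_dpa)
      print("dpa/s:", dpa_per_second, " dpa/year:", dpa_per_second * 3.156e7)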

  14. CRDIAC: Coupled Reactor Depletion Instrument with Automated Control

    International Nuclear Information System (INIS)

    Logan, Steven K.

    2012-01-01

    When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics modeling codes model how particles interact in the system, but do not model this over time. Thus, another code is used in conjunction with the nuclear physics code to accomplish this. In our code, the Monte Carlo N-Particle (MCNP) codes and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen as the codes to use. In this way, MCNP produces the reaction rates of the different isotopes present, and MRTAU uses cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. The information passed between these two codes must be altered and edited for use by each, and for this a Python 2.7 script was developed to aid the user in getting the information into the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Controls (CRDIAC). As is the case for any newly developed methodology for modeling physical phenomena, CRDIAC needed to be verified against similar methodology and validated against data taken from an experiment, in our case AFIP-3. AFIP-3 was a reduced-enrichment plate-type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in the concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for the end-of-life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, validating and verifying CRDIAC. CRDIAC provides an alternative to using ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its depletion
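
    The quantity exchanged in such a coupling is essentially an effective one-group cross section collapsed from a tallied reaction rate and flux. A minimal sketch, with illustrative numbers and hypothetical names that do not mirror CRDIAC's actual script, is:

      # Sketch of the quantity passed between the transport and depletion codes:
      # an effective one-group cross section collapsed from a tallied reaction
      # rate and flux (names and numbers are illustrative, not CRDIAC's I/O).
      BARN = 1.0e-24  # cm^2

      def one_group_xs(reaction_rate, flux, number_density):
          """sigma_eff (barns) = RR / (phi * N), per-atom effective cross section."""
          return reaction_rate / (flux * number_density) / BARN

      # Example: tallied fission rate and flux for U-235 in one depletion zone.
      sigma_f = one_group_xs(reaction_rate=3.2e12,   # fissions/cm^3/s
                             flux=2.0e14,            # n/cm^2/s
                             number_density=1.0e21)  # atoms/cm^3
      print(f"effective fission cross section: {sigma_f:.1f} b")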

  15. Monte Carlo simulation of VHTR particle fuel with chord length sampling

    International Nuclear Information System (INIS)

    Ji, W.; Martin, W. R.

    2007-01-01

    The Very High Temperature Gas-Cooled Reactor (VHTR) poses a problem for neutronic analysis due to the double heterogeneity posed by the particle fuel and either the fuel compacts, in the case of the prismatic block reactor, or the fuel pebbles, in the case of the pebble bed reactor. Direct Monte Carlo simulation has been used in recent years to analyze these VHTR configurations but is computationally challenged when space-dependent phenomena such as depletion or temperature feedback are considered. As an alternative approach, we have considered chord length sampling to reduce the computational burden of the Monte Carlo simulation. We have improved on an existing method called 'limited chord length sampling' and have used it to analyze stochastic media representative of either pebble bed or prismatic VHTR fuel geometries. Based on the assumption that the PDF has an exponential form, a theoretical chord length distribution is derived and shown to be an excellent model for a wide range of packing fractions. This chord length PDF was then used to analyze a stochastic medium that was constructed using the RSA (Random Sequential Addition) algorithm, and the results were compared to a benchmark Monte Carlo simulation of the actual stochastic geometry. The results are promising and suggest that the theoretical chord length PDF can be used instead of a full Monte Carlo random walk simulation in the stochastic medium, saving orders of magnitude in computational time (and memory demand) to perform the simulation. (authors)
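
    Chord length sampling replaces ray tracing through an explicit random geometry with a draw from the chord-length PDF. Assuming the exponential form discussed above, a sampled matrix path length is simply l = -lambda ln(xi), as in this toy sketch (the mean chord value is invented):

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_chord(mean_chord):
          """Sample a matrix chord length from the exponential PDF
          p(l) = (1/lam) exp(-l/lam), via inversion: l = -lam * ln(xi)."""
          return -mean_chord * np.log(1.0 - rng.random())  # xi in (0, 1]

      # Toy tracking loop: the distance to the next fuel-particle encounter is
      # sampled from the chord-length distribution instead of being computed
      # from an explicit random geometry realization.
      lam = 0.12  # mean matrix chord length (cm), illustrative value
      distances = [sample_chord(lam) for _ in range(5)]
      print(distances)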

  16. Computer simulations and theoretical aspects of the depletion interaction in protein-oligomer mixtures.

    Science.gov (United States)

    Boncina, M; Rescic, J; Kalyuzhnyi, Yu V; Vlachy, V

    2007-07-21

    The depletion interaction between proteins caused by the addition of either uncharged or partially charged oligomers was studied using the canonical Monte Carlo simulation technique and integral equation theory. A protein molecule was modeled in two different ways: either as (i) a hard sphere of diameter 30.0 A with net charge 0 or +5, or (ii) a hard sphere of diameter 45.4 A with discrete charges (depending on the pH of the solution). The oligomers were pictured as tangentially jointed hard spheres, either uncharged or partially charged. The ions of a simple electrolyte present in solution were represented by charged hard spheres distributed in a dielectric continuum. In this study we were particularly interested in changes of the protein-protein pair-distribution function caused by the addition of the oligomer component. In agreement with previous studies we found that the addition of a nonadsorbing oligomer reduces the phase stability of the solution, which is reflected in the shape of the protein-protein pair-distribution function. The value of this function at protein-protein contact increases with increasing oligomer concentration, and is larger for charged oligomers. The range of the depletion interaction and its strength also depend on the length (number of monomer units) of the oligomer chain. The integral equation theory applied in this study, based on the Wertheim Ornstein-Zernike approach, was found to be in fair agreement with the Monte Carlo results only for very short oligomers. The computer simulations for a model mimicking the lysozyme molecule (ii) are in qualitative agreement with small-angle neutron experiments for lysozyme-dextran mixtures.

  17. State of the art of Monte Carlo technics for reliable activated waste evaluations

    International Nuclear Information System (INIS)

    Culioli, Matthieu; Chapoutier, Nicolas; Barbier, Samuel; Janski, Sylvain

    2016-01-01

    This paper presents the calculation scheme used in many studies to assess the activity inventory of French shutdown reactors (including Pressurized Water Reactor, Heavy Water Reactor, Sodium-Cooled Fast Reactor and Natural Uranium Gas Cooled or UNGG designs). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods and depletion calculations coupled on 3D meshes. All these techniques offer efficient and reliable evaluations of large-scale models with a high level of detail, reducing the risks of underestimation or conservatism. (authors)

  18. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    Science.gov (United States)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with the TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of the TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm3 calculation grids. The dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  19. Monte Carlo evaluation of derivative-based global sensitivity measures

    International Nuclear Information System (INIS)

    Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N.

    2009-01-01

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.
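
    A minimal Monte Carlo DGSM estimator, nu_i = E[(df/dx_i)^2] over the unit hypercube, can be sketched with central finite differences at random points; the test function and sample counts are arbitrary choices, not the paper's setup.

      import numpy as np

      def dgsm(f, n_dim, n_samples=10_000, h=1e-5, seed=0):
          """Monte Carlo estimate of nu_i = E[(df/dx_i)^2] on the unit hypercube,
          using central finite differences at random sample points."""
          rng = np.random.default_rng(seed)
          x = rng.random((n_samples, n_dim))
          nu = np.zeros(n_dim)
          for i in range(n_dim):
              xp, xm = x.copy(), x.copy()
              xp[:, i] += h
              xm[:, i] -= h
              nu[i] = np.mean(((f(xp) - f(xm)) / (2.0 * h)) ** 2)
          return nu

      # Toy model: f(x) = x1 + 2*x2^2; expect nu = [1, E[(4 x2)^2]] = [1, 16/3].
      print(dgsm(lambda x: x[:, 0] + 2.0 * x[:, 1] ** 2, n_dim=2))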

  20. Base cation depletion and potential long-term acidification of Norwegian catchments

    International Nuclear Information System (INIS)

    Kirchner, J.W.; Lydersen, E.

    1995-01-01

    Long-term monitoring data from Norwegian catchments show that since the late 1970s, sulfate deposition and runoff sulfate concentrations have declined significantly. However, water quality has not significantly improved, because reductions in runoff sulfate have been matched by equal declines in calcium and magnesium concentrations. Long-term declines in runoff Ca and Mg are most pronounced at catchments subject to highly acidic deposition; the observed rates of decline are quantitatively consistent with depletion of exchangeable bases by accelerated leaching under high acid loading. Even though water quality has not recovered, reductions in acid deposition have been valuable because they have prevented significant acidification that would otherwise have occurred under constant acid deposition. Ongoing depletion of exchangeable bases from these catchments implies that continued deposition reductions will be needed to avoid further acidification and that recovery from acidification will be slow. 31 refs., 2 figs., 4 tabs

  1. Monte Carlo evaluation of derivative-based global sensitivity measures

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, S. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)], E-mail: s.kucherenko@ic.ac.uk; Rodriguez-Fernandez, M. [Process Engineering Group, Instituto de Investigaciones Marinas, Spanish Council for Scientific Research (C.S.I.C.), C/ Eduardo Cabello, 6, 36208 Vigo (Spain); Pantelides, C.; Shah, N. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)

    2009-07-15

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.

  2. Monte Carlo applications to core-following of the National Research Universal reactor (NRU)

    International Nuclear Information System (INIS)

    Nguyen, T.S.; Wang, X.; Leung, T.

    2014-01-01

    Reactor code TRIAD, relying on a two-group neutron diffusion model, is currently used for core-following of NRU - to track reactor assembly locations and burnups. The Monte Carlo (MCNP or SERPENT) full-reactor models of NRU can be used to provide the core power distribution for calculating fuel burnups, with WIMS-AECL providing fuel depletion calculations. The MCNP/WIMS core-following results were in good agreement with the measured data, within the expected biases. The Monte Carlo methods, still very time-consuming, need to be able to run faster before they can replace TRIAD for timely support of NRU operations. (author)

  3. MCNP evaluation of top node control rod depletion below the core in KKL

    International Nuclear Information System (INIS)

    Beran, Tâm; Seltborg, Per; Lindahl, Sten-Örjan; Bieli, Roger; Ledergerber, Guido

    2014-01-01

    Previous studies have identified a significant discrepancy in the BWR control rod top node depletion between the two core simulator nodal codes POLCA7 and PRESTO-2, which indicates that there is a large general uncertainty in nodal codes in calculating the top node depletion of fully withdrawn control rods. In this study, the stochastic Monte Carlo code MCNP has been used to calculate the top node control rod depletion for benchmarking the nodal codes. By using the TIP signal obtained from an extended TIP campaign below the core performed in the KKL reactor, the MCNP model has been verified by comparing the axial profiles of the TIP data and the gamma flux calculated by MCNP. The MCNP results have also been compared with calculations from POLCA7, which was found to yield slightly higher depletion rates than MCNP. It was also found that the B-10 depletion in the top node is very sensitive to the exact axial location of the control rod top when the rod is fully withdrawn. By using the MCNP results, the neutron flux model below the core in the nodal codes can be improved by implementing an exponential function for the neutron flux. (author)

  4. Biases and statistical errors in Monte Carlo burnup calculations: an unbiased stochastic scheme to solve Boltzmann/Bateman coupled equations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C.M.

    2011-01-01

    External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without a proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems for which no benchmark data exist. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest like k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, we have to quantify and correct this bias. This is achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the Boltzmann/Bateman coupled equations with Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
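
    The nonlinearity bias alluded to above can be seen already in a scalar caricature of the Bateman solution: averaging exponentials of noisy reaction rates is not the same as exponentiating the average rate. A toy demonstration (all numbers invented):

      import numpy as np

      # The Bateman solution is a (matrix) exponential of tallied reaction rates,
      # and E[exp(a_est)] != exp(E[a_est]) when a_est carries statistical noise.
      rng = np.random.default_rng(0)
      a_true, sigma, t = -0.10, 0.02, 10.0        # rate, its MC noise, time

      a_samples = rng.normal(a_true, sigma, 100_000)  # noisy "tally" results
      biased = np.exp(a_samples * t).mean()           # mean of per-replica solutions
      exact = np.exp(a_true * t)
      print(f"mean of exponentials: {biased:.5f}  exponential of mean: {exact:.5f}")
      # Jensen's inequality predicts biased > exact, by ~exp(sigma^2 t^2 / 2) here.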

  5. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC is designed to perform transport calculations of various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. Bi-directional automatic conversion between general CAD models and physical settings and calculation models is well supported. The results and process of a simulation can be visualized with dynamic 3D datasets and geometry models. Continuous-energy cross section, burnup, activation, irradiation damage and material data etc. are used to support the multi-process simulation. An advanced cloud computing framework makes computation- and storage-intensive simulations more accessible as a network service, supporting design optimization and assessment. The modular design and generic interfaces promote flexible manipulation and coupling of external solvers. • The newly developed advanced methods incorporated in SuperMC are introduced, including the hybrid MC-deterministic transport method, the particle physical interaction treatment method, the multi-physics coupling calculation method, the geometry automatic modeling and processing method, the intelligent data analysis and visualization method, elastic cloud computing technology and the parallel calculation method. • The functions of SuperMC 2.1, integrating automatic modeling, neutron and photon transport calculation, and results and process visualization, are introduced. It has been validated by using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages in simulating complicated nuclear systems and is envisioned as a routine

  6. Development of a micro-depletion model to use WIMS properties in history-based local-parameter calculations in RFSP

    International Nuclear Information System (INIS)

    Shen, W.

    2004-01-01

    A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR fuel. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation. The relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)

  7. Depleted uranium: A DOE management guide

    International Nuclear Information System (INIS)

    1995-10-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF6) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in a concrete-based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF6 to an oxide aggregate that is used in concrete to make dry storage casks

  8. Self-Regulatory Capacities Are Depleted in a Domain-Specific Manner.

    Science.gov (United States)

    Zhang, Rui; Stock, Ann-Kathrin; Rzepus, Anneka; Beste, Christian

    2017-01-01

    Performing an act of self-regulation such as making decisions has been suggested to deplete a common limited resource, which impairs all subsequent self-regulatory actions (ego depletion theory). It has, however, remained unclear whether self-referred decisions truly impair behavioral control even in seemingly unrelated cognitive domains, and which neurophysiological mechanisms are affected by these potential depletion effects. In the current study, we therefore used an inter-individual design to compare two kinds of depletion, namely a self-referred choice-based depletion and a categorization-based switching depletion, to a non-depleted control group. We used a backward inhibition (BI) paradigm to assess the effects of depletion on task switching and associated inhibition processes. It was combined with EEG and source localization techniques to assess both behavioral and neurophysiological depletion effects. The results challenge the ego depletion theory in its current form: Opposing the theory's prediction of a general limited resource, which should have yielded comparable effects in both depletion groups, or maybe even a larger depletion in the self-referred choice group, there were stronger performance impairments following a task domain-specific depletion (i.e., the switching-based depletion) than following a depletion based on self-referred choices. This suggests at least partly separate and independent resources for various cognitive control processes rather than just one joint resource for all self-regulation activities. The implications are crucial to consider for people making frequent far-reaching decisions, e.g., in law or economics.

  9. A probability-conserving cross-section biasing mechanism for variance reduction in Monte Carlo particle transport calculations

    OpenAIRE

    Mendenhall, Marcus H.; Weller, Robert A.

    2011-01-01

    In Monte Carlo particle transport codes, it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analogous Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross section change. This makes it possible t...
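
    The weight correction needed when a collision cross section is scaled follows from the ratio of the analog and biased first-collision densities, w = (1/s) exp((s-1) sigma l). A toy 1D check of this identity, unrelated to the actual Geant4 code, is sketched below.

      import numpy as np

      rng = np.random.default_rng(0)

      def biased_collision(sigma, scale):
          """Sample a collision distance from the scaled cross section and return
          (distance, weight) restoring the analog expectation:
          w = p_analog(l) / p_biased(l) = (1/scale) * exp((scale - 1) * sigma * l)."""
          l = rng.exponential(1.0 / (scale * sigma))
          return l, np.exp((scale - 1.0) * sigma * l) / scale

      # Sanity check: the weighted mean reproduces the analog mean free path.
      sigma, scale = 0.5, 4.0      # amplify a rare reaction fourfold
      samples = np.array([biased_collision(sigma, scale) for _ in range(200_000)])
      dist, w = samples.T
      print(np.mean(w * dist), "vs analog mean free path", 1.0 / sigma)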

  10. Fully Depleted Charge-Coupled Devices

    International Nuclear Information System (INIS)

    Holland, Stephen E.

    2006-01-01

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications

  11. Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation

    International Nuclear Information System (INIS)

    Wulff, Joerg

    2010-01-01

    Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose to within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers that are used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine, since their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of medical physics. As a suitable tool for calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which in principle enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed to obtain a meaningful result with the small statistical uncertainty inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the needed calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated in a modern and established Monte Carlo simulation environment

  12. VERA Pin and Fuel Assembly Depletion Benchmark Calculations by McCARD and DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Monte Carlo (MC) codes have been developed and used to simulate neutron transport since the MC method was devised in the Manhattan Project. Solving the neutron transport problem with the MC method is simple and straightforward to understand. Because there are few essential approximations for the six-dimensional phase-space coordinates of a neutron, such as its location, energy, and direction, highly accurate solutions can be obtained through such calculations. In this work, the VERA pin and fuel assembly (FA) depletion benchmark calculations are performed to examine the depletion capability of the newly generated DeCART multi-group cross section library. To obtain the reference solutions, MC depletion calculations are conducted using McCARD. Moreover, to scrutinize the effect of stochastic uncertainty propagation, uncertainty propagation analyses are performed using a sensitivity and uncertainty (S/U) analysis method and a stochastic sampling (S.S.) method. It is still expensive and challenging to perform a depletion analysis with an MC code. Nevertheless, many studies and works on MC depletion analysis have been conducted to utilize the benefits of the MC method. In this study, McCARD MC and DeCART MOC transport calculations are performed for the VERA pin and FA depletion benchmarks. The DeCART depletion calculations are conducted to examine the depletion capability of the newly generated multi-group cross section library, and they give excellent agreement with the McCARD reference ones. From the McCARD results, it is observed that the MC depletion results depend on how the burnup interval is split. To quantify the effect of stochastic uncertainty propagation at 40 depletion time steps (DTS), the uncertainty propagation analyses are performed using the S/U and S.S. methods.

  13. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  14. A SAS2H/KENO-V Methodology for 3D Full Core depletion analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.; Petrovic, B.

    2003-04-01

    This paper describes the use of a SAS2H/KENO-V methodology for 3D full core depletion analysis and illustrates its capabilities by applying it to burnup analysis of the IRIS core benchmarks. This new SAS2H/KENO-V sequence combines a 3D Monte Carlo full core calculation of node power distribution and a 1D Wigner-Seitz equivalent cell transport method for independent depletion calculation of each of the nodes. This approach reduces by more than an order of magnitude the time required for getting comparable results using the MOCUP code system. The SAS2H/KENO-V results for the asymmetric IRIS core benchmark are in good agreement with the results of the ALPHA/PHOENIX/ANC code system. (author)

  15. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  16. Visual improvement for bad handwriting based on Monte-Carlo method

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper, in order to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation is defined to transform the typeface image toward the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied in tablet computers and the mobile Internet, in order to improve the user experience of handwriting.

  17. Depleted uranium

    International Nuclear Information System (INIS)

    Huffer, E.; Nifenecker, H.

    2001-02-01

    This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why does the military use depleted uranium, and what are the risks for health? (A.L.B.)

  18. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  19. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to confirm that the 5% decrement approach is conservative for determining depletion uncertainty
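
    As a back-of-the-envelope illustration of the quantity being benchmarked, a depletion reactivity decrement can be formed from a pair of eigenvalues for fresh and depleted fuel at otherwise identical conditions. The sketch below uses invented k values, not PARAGON or EPRI benchmark data.

      def reactivity_pcm(k):
          """Reactivity in pcm from a multiplication factor k."""
          return (k - 1.0) / k * 1.0e5

      def depletion_decrement(k_fresh, k_depleted):
          """Reactivity decrement (pcm) between fresh and depleted fuel."""
          return reactivity_pcm(k_fresh) - reactivity_pcm(k_depleted)

      # Illustrative numbers only:
      print(depletion_decrement(k_fresh=1.15, k_depleted=1.02))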

  20. Soil nutrients, aboveground productivity and vegetative diversity after 10 years of experimental acidification and base cation depletion

    Science.gov (United States)

    Mary Beth Adams; James A. Burger

    2010-01-01

    Soil acidification and base cation depletion are concerns for those wishing to manage central Appalachian hardwood forests sustainably. In this research, 2 experiments were established in 1996 and 1997 in two forest types common in the central Appalachian hardwood forests, to examine how these important forests respond to depletion of nutrients such as calcium and...

  1. On Monte Carlo estimation of radiation damage in light water reactor systems

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2010-01-01

    There has been a growing need in recent years for the development of methodologies to calculate damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss and highlight the main issues associated with the calculation of radiation damage factors utilizing the Monte Carlo method. Among these issues are: particle tracking and tallying in complex geometries, dpa calculation methodology, coupled fuel depletion, and uncertainty propagation. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and burnup, are assessed for radiation damage calculations, and its capability is demonstrated and compared to that of the MCNP code for dpa calculations of a typical LWR configuration involving the core vessel and the downcomer. (author)
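
    For orientation, a dpa estimate of the kind discussed is essentially a flux tally folded with displacement cross sections over energy groups. The minimal sketch below assumes hypothetical three-group data; it is not the Serpent or MCNP methodology.

      import numpy as np

      # Hypothetical 3-group data: group fluxes (n/cm^2/s) and displacement
      # cross sections (barns), e.g. from an NRT-based damage library.
      flux = np.array([1.0e13, 5.0e12, 1.0e12])
      sigma_disp = np.array([500.0, 900.0, 1500.0])

      BARN = 1.0e-24            # cm^2 per barn
      t_irr = 3.15e7            # irradiation time, s (about one year)

      # dpa = sum over groups of phi_g * sigma_disp,g * t
      dpa = float(np.sum(flux * sigma_disp * BARN) * t_irr)
      print(f"dpa over one year: {dpa:.2e}")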

  2. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • The explicit modeling approach of RMC is applied to a depletion benchmark for the HTGR fuel element. • Explicit modeling can provide detailed burnup distributions and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of combining adjacent burnup regions is proposed for full-core problems. • The combination method reduces the memory footprint while keeping the computing accuracy. - Abstract: The Monte Carlo method plays an important role in the accurate simulation of random media, owing to its flexible geometry modeling and its use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, namely the Random Lattice Method, Chord Length Sampling and an explicit modeling approach with a mesh acceleration technique, have been implemented in RMC to simulate particle transport in dispersed fuels, among which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for the HTGR fuel element, and the method of combining adjacent burnup regions has been proposed and investigated. The results show that explicit modeling can provide detailed burnup distributions of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.

  3. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which is referred to as the spectral history effect. Various methods to account for spectral history effects, such as the spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with a XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.

  4. Research on reactor physics analysis method based on Monte Carlo homogenization

    International Nuclear Information System (INIS)

    Ye Zhimin; Zhang Peng

    2014-01-01

    In order to meet the demands of the nuclear energy market in the future, many new concepts for nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability to handle generic geometry; the other is the multi-spectrum applicability of multigroup cross section libraries. Due to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupled analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation time and slow convergence, which make it inapplicable to reactor core fuel management simulations. Drawing on the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. Firstly, Monte Carlo simulations are performed for each assembly, and the assembly-homogenized multi-group cross sections are tallied at the same time. Secondly, the core diffusion calculations are done with these multigroup cross sections. The new scheme can achieve high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numerical tests have been done in this work to verify the new scheme. (authors)
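
    The assembly homogenization step described above amounts to a flux-volume-weighted condensation of tallied cross sections. A minimal sketch follows; the region/group numbers are illustrative, not from the paper.

      import numpy as np

      def homogenize(flux, sigma, volumes):
          """Flux-volume-weighted homogenized cross section.

          flux:    (regions, groups) tallied scalar fluxes
          sigma:   (regions, groups) region-wise cross sections
          volumes: (regions,) region volumes
          Returns (groups,) assembly-homogenized cross sections.
          """
          w = flux * volumes[:, None]                  # flux-volume weights
          return (sigma * w).sum(axis=0) / w.sum(axis=0)

      # Illustrative two-region, two-group numbers:
      flux = np.array([[1.0, 0.4], [0.8, 0.6]])
      sigma = np.array([[0.02, 0.10], [0.03, 0.12]])
      vol = np.array([2.0, 1.0])
      print(homogenize(flux, sigma, vol))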

  5. Recurrence formulas for evaluating expansion series of depletion functions

    International Nuclear Information System (INIS)

    Vukadin, Z.

    1991-01-01

    A high-accuracy analytical method for solving the depletion equations for chains of radioactive nuclides is based on the formulation of depletion functions. When all the arguments of the depletion function are too close to each other, series expansions of the depletion function have to be used. However, the high-accuracy series expressions for depletion functions of high index become too complicated. Recursion relations are derived which enable an efficient high-accuracy evaluation of the depletion functions with high indices. (orig.)
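
    For reference, the depletion functions in question stem from the Bateman solution of a linear decay chain. One standard closed form (a textbook result, not the paper's notation) is

      N_n(t) = N_1(0)\,\Bigl(\prod_{i=1}^{n-1}\lambda_i\Bigr)\sum_{i=1}^{n}\frac{e^{-\lambda_i t}}{\prod_{j\neq i}(\lambda_j-\lambda_i)},

    which becomes numerically delicate precisely when two decay constants nearly coincide and the denominators (λ_j − λ_i) become small; that is the regime the series expansions and recurrence relations above address.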

  6. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work; as a result, it is recognized that it is difficult to obtain good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in the vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  7. High-energy-ion depletion in the charge exchange spectrum of Alcator C

    International Nuclear Information System (INIS)

    Schissel, D.P.

    1982-01-01

    A three-dimensional, guiding center, Monte Carlo code is developed to study ion orbits in Alcator C. The highly peaked ripple of the magnetic field of Alcator is represented by an analytical expression for the vector potential. The analytical ripple field is compared to the magnetic field generated by a current model of the toroidal plates; agreement is excellent. Ion-ion scattering is simulated by a pitch angle and an energy scattering operator. The equations of motion are integrated with a variable-time-step, extrapolating integrator. The code produces collisionless banana and ripple-trapped loss cones which agree well with present theory. Global energy distributions have been calculated and show a slight depletion above 8.5 keV. Particles which are ripple trapped and lost are at energies below those where depletion is observed. It is found that ions pitch-angle scatter less as energy is increased. The result is that, when viewed in velocity space, ions form probability lobes shaped like mouse ears, which are fattest near the thermal energy. Therefore, particles enter the loss cone at low energies near the bottom of the cone. Recommendations for future work include improving the analytic model of the ripple field, testing the effect of ∇·B ≠ 0 on ion orbits, and improving the efficiency of the code by either using a spline fit for the magnetic fields or by creating a vectorized Monte Carlo code
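
    A pitch-angle scattering operator of the kind described is commonly implemented as a deterministic drag plus a random kick each time step. The sketch below is a generic Lorentz-operator form, not the thesis code.

      import math, random

      def pitch_angle_step(lam, nu, dt):
          """One Monte Carlo pitch-angle scattering step.

          lam: pitch v_parallel/v; nu: deflection frequency; dt: time step.
          Deterministic drag plus a random kick of the standard form.
          """
          kick = math.sqrt(max(0.0, (1.0 - lam * lam) * nu * dt))
          lam_new = lam * (1.0 - nu * dt) + random.choice((-1.0, 1.0)) * kick
          return max(-1.0, min(1.0, lam_new))          # keep |pitch| <= 1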

  8. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    Science.gov (United States)

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  9. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior
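
    The abstract above is truncated. As general background, MCMC inference of the kind DREAM performs generalizes the basic random-walk Metropolis sampler; a minimal sketch of the latter (shown only to fix ideas, far simpler than DREAM, which adapts multiple chains) is:

      import numpy as np

      def metropolis(log_post, x0, steps=10000, scale=0.5, rng=None):
          """Random-walk Metropolis sampler for a log-posterior density."""
          rng = rng or np.random.default_rng()
          x = np.asarray(x0, dtype=float)
          lp = log_post(x)
          chain = []
          for _ in range(steps):
              prop = x + scale * rng.standard_normal(x.shape)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
                  x, lp = prop, lp_prop
              chain.append(x.copy())
          return np.array(chain)

      # Example: sample a 2-D standard normal "posterior".
      samples = metropolis(lambda t: -0.5 * np.sum(t**2), x0=[0.0, 0.0])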

  10. CO Depletion: A Microscopic Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cazaux, S. [Faculty of Aerospace Engineering, Delft University of Technology, Delft (Netherlands); Martín-Doménech, R.; Caro, G. M. Muñoz; Díaz, C. González [Centro de Astrobiología (INTA-CSIC), Ctra. de Ajalvir, km 4, Torrejón de Ardoz, E-28850 Madrid (Spain); Chen, Y. J. [Department of Physics, National Central University, Jhongli City, 32054, Taoyuan County, Taiwan (China)

    2017-11-10

    In regions where stars form, variations in density and temperature can cause gas to freeze out onto dust grains, forming ice mantles, which influences the chemical composition of a cloud. The aim of this paper is to understand in detail the depletion of CO onto (and its desorption from) interstellar dust grains. Experimental simulations were performed under two different (astrophysically relevant) conditions. In parallel, kinetic Monte Carlo simulations were used to mimic the experimental conditions. In our experiments, CO molecules accrete onto water ice at temperatures below 27 K, with a deposition rate that does not depend on the substrate temperature. During the warm-up phase, the desorption processes do exhibit subtle differences, indicating the presence of weakly bound CO molecules and therefore highlighting a low diffusion efficiency. IR measurements following the ice thickness during the temperature-programmed desorption (TPD) confirm that diffusion occurs at temperatures close to desorption. Applied to astrophysical conditions in a pre-stellar core, the binding energies of CO molecules, ranging between 300 and 850 K, depend on the conditions under which the CO has been deposited. Because of this wide range of binding energies, the depletion of CO as a function of A_V is much less important than initially thought. The weakly bound molecules, easily released into the gas phase through evaporation, change the balance between accretion and desorption, which results in a larger abundance of CO at high extinctions. In addition, weakly bound CO molecules are also more mobile, and this could increase the reactivity within interstellar ices.
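
    The accretion/desorption balance discussed here is controlled by thermal desorption rates of Arrhenius form. The toy sketch below uses the quoted 300-850 K binding-energy range with an assumed typical attempt frequency of 1e12 s^-1 (an assumption, not a value from the paper).

      import numpy as np

      NU0 = 1.0e12   # s^-1: assumed typical attempt (pre-exponential) frequency

      def desorption_rate(e_bind_K, t_dust_K):
          """First-order thermal desorption rate, k = nu0 * exp(-E_b / T)."""
          return NU0 * np.exp(-e_bind_K / t_dust_K)

      # Weakly vs strongly bound CO (300 K vs 850 K) on a 15 K grain:
      for eb in (300.0, 850.0):
          print(eb, desorption_rate(eb, 15.0))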

  11. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but they fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs

  12. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect method of validation is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  13. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li

    2014-03-01

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations; yet, the long computational time from repeated dose calculations for a number of beamlets prevents this application. It is our objective to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for beamlets based on the index. Due to limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
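
    The essential trick of the second step, sampling each beamlet in proportion to its first-round optimized intensity, can be sketched as follows (illustrative only, not gDPM code).

      import numpy as np

      def allocate_particles(intensities, total):
          """Distribute a particle budget across beamlets in proportion to
          the fluence-map intensities from the first-round optimization."""
          w = np.asarray(intensities, dtype=float)
          w /= w.sum()
          counts = np.floor(w * total).astype(int)
          counts[np.argmax(w)] += total - counts.sum()   # absorb rounding
          return counts

      print(allocate_particles([0.1, 0.5, 0.4], total=10**6))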

  14. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. The book focuses on issues related to Monte Carlo methods, including uniform and non-uniform random number generation and variance reduction techniques, and covers several aspects of quasi-Monte Carlo methods.

  15. EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE

    Science.gov (United States)

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...

  16. Acceptance and implementation of a system of planning computerized based on Monte Carlo

    International Nuclear Information System (INIS)

    Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.

    2013-01-01

    The acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the linear electron accelerator head and performs the dose calculation with an x-ray algorithm (XVMC) based on the Monte Carlo method. (Author)

  17. Sensitivity analysis of fuel depletion using different nuclear fuel depletion codes

    Energy Technology Data Exchange (ETDEWEB)

    Martins, F.; Velasquez, C.E.; Castro, V.F.; Pereira, C.; Silva, C. A. Mello da, E-mail: felipmartins94@gmail.com, E-mail: carlosvelcab@hotmail.com, E-mail: victorfariascastro@gmail.com, E-mail: claubia@nuclear.ufmg.br, E-mail: clarysson@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Nowadays, different nuclear codes are commonly used to perform the depletion and criticality calculations needed to simulate nuclear reactor problems. The goal here is therefore to analyze the sensitivity of the fuel depletion of a PWR assembly using three different nuclear fuel depletion codes. The burnup calculations are performed using the codes MCNP5/ORIGEN2.1 (MONTEBURNS), KENO-VI/ORIGEN-S (TRITONSCALE6.0) and MCNPX (MCNPX/CINDER90). Each nuclear code performs the burnup using a different depletion code. Each depletion code works with energies collapsed from a master library into 1, 3 and 63 groups, respectively. Besides, each code obtains the neutron flux in a different way, which influences the depletion calculation. The results present a comparison of the neutronic parameters and isotopic compositions, such as criticality and nuclide build-up; deviations in the results are attributed to features of the depletion code in use, such as the different internal radioactive decay libraries and the numerical methods involved in solving the coupled differential depletion equations. It is also seen that the longer the period is and the more time steps are chosen, the larger the deviations become. (author)

  18. Sensitivity analysis of fuel depletion using different nuclear fuel depletion codes

    International Nuclear Information System (INIS)

    Martins, F.; Velasquez, C.E.; Castro, V.F.; Pereira, C.; Silva, C. A. Mello da

    2017-01-01

    Nowadays, different nuclear codes are commonly used to perform the depletion and criticality calculations needed to simulate nuclear reactor problems. The goal here is therefore to analyze the sensitivity of the fuel depletion of a PWR assembly using three different nuclear fuel depletion codes. The burnup calculations are performed using the codes MCNP5/ORIGEN2.1 (MONTEBURNS), KENO-VI/ORIGEN-S (TRITONSCALE6.0) and MCNPX (MCNPX/CINDER90). Each nuclear code performs the burnup using a different depletion code. Each depletion code works with energies collapsed from a master library into 1, 3 and 63 groups, respectively. Besides, each code obtains the neutron flux in a different way, which influences the depletion calculation. The results present a comparison of the neutronic parameters and isotopic compositions, such as criticality and nuclide build-up; deviations in the results are attributed to features of the depletion code in use, such as the different internal radioactive decay libraries and the numerical methods involved in solving the coupled differential depletion equations. It is also seen that the longer the period is and the more time steps are chosen, the larger the deviations become. (author)

  19. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  20. When the Going Gets Tough, Who Keeps Going? Depletion Sensitivity Moderates the Ego-Depletion Effect

    Directory of Open Access Journals (Sweden)

    Stefanie J. Salmon

    2014-06-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  1. Monte Carlo based treatment planning for modulated electron beam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Michael C. [Radiation Physics Division, Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)]. E-mail: mclee@reyes.stanford.edu; Deng Jun; Li Jinsheng; Jiang, Steve B.; Ma, C.-M. [Radiation Physics Division, Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2001-08-01

    A Monte Carlo based treatment planning system for modulated electron radiation therapy (MERT) is presented. This new variation of intensity modulated radiation therapy (IMRT) utilizes an electron multileaf collimator (eMLC) to deliver non-uniform intensity maps at several electron energies. In this way, conformal dose distributions are delivered to irregular targets located a few centimetres below the surface while sparing deeper-lying normal anatomy. Planning for MERT begins with Monte Carlo generation of electron beamlets. Electrons are transported with proper in-air scattering and the dose is tallied in the phantom for each beamlet. An optimized beamlet plan may be calculated using inverse-planning methods. Step-and-shoot leaf sequences are generated for the intensity maps and dose distributions recalculated using Monte Carlo simulations. Here, scatter and leakage from the leaves are properly accounted for by transporting electrons through the eMLC geometry. The weights for the segments of the plan are re-optimized with the leaf positions fixed and bremsstrahlung leakage and electron scatter doses included. This optimization gives the final optimized plan. It is shown that a significant portion of the calculation time is spent transporting particles in the leaves. However, this is necessary since optimizing segment weights based on a model in which leaf transport is ignored results in an improperly optimized plan with overdosing of target and critical structures. A method of rapidly calculating the bremsstrahlung contribution is presented and shown to be an efficient solution to this problem. A homogeneous model target and a 2D breast plan are presented. The potential use of this tool in clinical planning is discussed. (author)

  2. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  3. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general purposes, their application was extended to other fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  4. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general purposes, their application was extended to other fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  5. Verification of the depletion capabilities of the MCNPX code on a LWR MOX fuel assembly

    International Nuclear Information System (INIS)

    Cerba, S.; Hrncir, M.; Necas, V.

    2012-01-01

    The study deals with the verification of the depletion capabilities of the MCNPX code, which is a linked Monte Carlo depletion code. For this purpose, phase IV-B of the OECD NEA Burnup Credit benchmark has been chosen. This benchmark is a code-to-code comparison of the multiplication factor k_eff and the isotopic composition of a LWR MOX fuel assembly at three given burnup levels and after five years of cooling. The benchmark consists of 6 cases (2 different Pu vectors and 3 geometry models); however, in this study only the fuel assembly calculations with the two Pu vectors were performed. The aim of this study was to compare the obtained results with data from the participants of the OECD NEA Burnup Credit project and confirm the burnup capability of the MCNPX code. (Authors)

  6. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

    Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient to handle massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between a MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
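
    The matrix exponential methods mentioned advance the linear depletion system dN/dt = AN over a step via N(t+Δt) = exp(AΔt) N(t). Below is a minimal dense sketch with SciPy on a toy two-nuclide chain; it is not the DEPTH rational-approximation implementation.

      import numpy as np
      from scipy.linalg import expm

      # Toy burnup matrix for a two-nuclide chain: 1 -> 2 -> (removal).
      lam1, lam2 = 1.0e-5, 2.0e-6               # effective removal rates, s^-1
      A = np.array([[-lam1,  0.0],
                    [ lam1, -lam2]])

      N0 = np.array([1.0e24, 0.0])              # initial number densities
      dt = 30 * 24 * 3600.0                     # one-month step, s

      N1 = expm(A * dt) @ N0                    # N(t + dt) = exp(A dt) N(t)
      print(N1)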

  7. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible light transport and neutral particle trajectories, Graphics Processing Units (GPUs) are almost like dedicated hardware designed for Monte Carlo (MC) particle transport calculations. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  8. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
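
    The spectrum derivation described, adjusting energy-bin weights until computed PDDs match measurement, is a nonlinear least-squares problem. The schematic below uses SciPy's Levenberg-Marquardt option with a hypothetical exponential depth-dose kernel standing in for the precomputed monoenergetic PDDs.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical stand-in: pdd_mono[i, j] = depth dose at depth j for
      # energy bin i (in practice precomputed, e.g. by Monte Carlo).
      depths = np.linspace(0.0, 20.0, 41)
      pdd_mono = np.array([np.exp(-depths / d0) for d0 in (5.0, 8.0, 12.0)])
      pdd_meas = 0.2 * pdd_mono[0] + 0.5 * pdd_mono[1] + 0.3 * pdd_mono[2]

      def residuals(w):
          # Mismatch between the weighted sum of bin PDDs and the measurement.
          return w @ pdd_mono - pdd_meas

      fit = least_squares(residuals, x0=np.ones(3) / 3.0, method="lm")
      weights = fit.x / fit.x.sum()
      print(weights)    # recovers approximately [0.2, 0.5, 0.3]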

  9. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  10. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed with machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example of the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
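
    The surrogate idea replaces the exact, expensive connectivity check inside the Monte Carlo loop with a trained classifier. Below is a schematic with scikit-learn logistic regression; the feature model and the exact check are problem-specific placeholders, not the paper's network data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_comp = 50
      p_surv = rng.random(n_comp)       # per-component survival probabilities

      def exact_connectivity(state):
          # Placeholder for an exact source-terminal check (e.g. graph search).
          return state.sum() > 0.5 * n_comp

      # Train the surrogate on a small labeled sample of failure realizations.
      X_train = rng.random((500, n_comp)) < p_surv
      y_train = np.array([exact_connectivity(s) for s in X_train])
      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      # Monte Carlo estimate of connectivity probability via the surrogate.
      X_mc = rng.random((100000, n_comp)) < p_surv
      print(clf.predict(X_mc).mean())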

  11. Deuterium-depleted water

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Steflea, Dumitru; Saros-Rogobete, Irina; Titescu, Gheorghe; Tamaian, Radu

    2001-01-01

    Deuterium-depleted water is water with an isotopic content below 145 ppm D/(D+H), the natural isotopic content of water. Deuterium-depleted water is produced by vacuum distillation in columns equipped with structured packing made from phosphor bronze or stainless steel. Deuterium-depleted water, the production technique and the structured packing are patents of the National Institute of Research - Development for Cryogenics and Isotopic Technologies at Rm. Valcea. Research conducted in the last few years has shown that deuterium-depleted water is a biologically active product that could have many applications in medicine and agriculture. (authors)

  12. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  13. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  14. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    International Nuclear Information System (INIS)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables

  15. Assesment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system, with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
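
    As context for the staircase/slope/predictor-corrector distinction, a generic predictor-corrector burnup step can be sketched as below; the transport and depletion operators are abstract placeholders, not the MCB5 implementation.

      def burnup_step_pc(N, dt, solve_flux, deplete):
          """One predictor-corrector burnup step (N is a nuclide-density array).

          solve_flux: transport solve returning reaction rates for a given N
          deplete:    advances N over dt holding reaction rates fixed
          """
          rates0 = solve_flux(N)             # beginning-of-step transport
          N_pred = deplete(N, rates0, dt)    # predictor (staircase stops here)
          rates1 = solve_flux(N_pred)        # end-of-step transport
          N_corr = deplete(N, rates1, dt)    # corrector with end-of-step rates
          return 0.5 * (N_pred + N_corr)     # one common averaging choice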

  16. Ego depletion in visual perception: Ego-depleted viewers experience less ambiguous figure reversal.

    Science.gov (United States)

    Wimmer, Marina C; Stirk, Steven; Hancock, Peter J B

    2017-10-01

    This study examined the effects of ego depletion on ambiguous figure perception. Adults (N = 315) received an ego depletion task and were subsequently tested on their inhibitory control abilities that were indexed by the Stroop task (Experiment 1) and their ability to perceive both interpretations of ambiguous figures that was indexed by reversal (Experiment 2). Ego depletion had a very small effect on reducing inhibitory control (Cohen's d = .15) (Experiment 1). Ego-depleted participants had a tendency to take longer to respond in Stroop trials. In Experiment 2, ego depletion had small to medium effects on the experience of reversal. Ego-depleted viewers tended to take longer to reverse ambiguous figures (duration to first reversal) when naïve of the ambiguity and experienced less reversal both when naïve and informed of the ambiguity. Together, findings suggest that ego depletion has small effects on inhibitory control and small to medium effects on bottom-up and top-down perceptual processes. The depletion of cognitive resources can reduce our visual perceptual experience.

  17. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, such as the Cray X/MP or IBM 3090/600 vector supercomputers. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches are discussed, and the present status of known vectorization efforts is summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
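
    To make the distinction concrete: a vectorized ("event-based") algorithm advances a whole bank of particles through the same event at once instead of following one history to completion. The NumPy sketch below shows only the control flow on a 1-D slab toy problem, not any production code.

      import numpy as np

      rng = np.random.default_rng(1)
      SIGMA_T, P_ABSORB, SLAB = 1.0, 0.3, 5.0

      x = np.full(100000, SLAB / 2.0)          # bank of particles, mid-slab
      alive = np.ones_like(x, dtype=bool)
      while alive.any():
          n = int(alive.sum())
          # Event 1: sample a flight distance for every live particle at once.
          step = rng.exponential(1.0 / SIGMA_T, n) * rng.choice((-1.0, 1.0), n)
          x[alive] += step
          # Event 2: vectorized leakage check, then absorption sampling.
          alive &= (x > 0.0) & (x < SLAB)
          alive[alive] = rng.random(int(alive.sum())) > P_ABSORB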

  18. Microscopic to macroscopic depletion model development for FORMOSA-P

    International Nuclear Information System (INIS)

    Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.

    1996-01-01

    Microscopic depletion has been gaining popularity for employment in reactor core nodal calculations, mainly owing to its superiority in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced the design effort required to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done both for fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort than macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation, because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns, from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic-depletion-level accuracy without substantial additional computing resources

  19. Evaluation 2 of B10 depletion in the WH PWR

    International Nuclear Information System (INIS)

    Park, Sang Won; Woo, Hae Suk; Kim, Sun Doo; Chae, Hee Dong; Myung, Sun Yup; Jang, Ju Kyung

    2001-01-01

    This paper presents the methodology to evaluate the B-10 depletion behavior in the pressurized water reactor. A B-10 depletion evaluation is performed based on the prediction program and the measured B-10 data. The result shows that B-10 depletion during normal operation is not negligible. Therefore, adjustments for this depletion effect should be made to calculate the estimated critical position (ECP) and determine the boron concentration required to maintain the specified shutdown margin

  20. Experimental Acidification Causes Soil Base-Cation Depletion at the Bear Brook Watershed in Maine

    Science.gov (United States)

    Ivan J. Fernandez; Lindsey E. Rustad; Stephen A. Norton; Jeffrey S. Kahl; Bernard J. Cosby

    2003-01-01

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to elevated N and S deposition through bimonthly additions of (NH4)2SO4. Quantitative soil...

  1. Monte-Carlo Simulation for PDC-Based Optical CDMA System

    Directory of Open Access Journals (Sweden)

    FAHIM AZIZ UMRANI

    2010-10-01

    This paper presents the Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems and analyses its performance in terms of the BER (Bit Error Rate). The spreading sequences chosen for CDMA are Perfect Difference Codes. Furthermore, the paper derives the expressions for the noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling, as required for the Monte-Carlo simulation. The simulated results conform to the theory and show that receiver gain mismatch and splitter loss at the transceiver degrade the system performance.
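
    Monte Carlo BER estimation of this kind repeatedly transmits random bits through a noisy channel model and counts decision errors. The minimal unipolar on-off sketch below uses illustrative Gaussian-noise parameters, not the paper's PDC system model.

      import numpy as np

      rng = np.random.default_rng(42)
      n_bits, amp, sigma = 1_000_000, 1.0, 0.25

      bits = rng.integers(0, 2, n_bits)                 # random data
      rx = amp * bits + rng.normal(0.0, sigma, n_bits)  # noisy unipolar channel
      decided = rx > amp / 2.0                          # threshold detector
      print(f"estimated BER = {np.mean(decided != bits):.2e}")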

  2. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Yeh, C.Y.; Lee, C.C.; Chao, T.C.; Lin, M.H.; Lai, P.A.; Liu, F.H.; Tung, C.J.

    2014-01-01

    This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed at the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs

  3. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    Science.gov (United States)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion. PMID:25009523

  4. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method improves on an earlier conversion method between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate, prompt conversion between CAD models and primitive solids is needed. Such an automatic modeling method was developed for Monte Carlo geometry described by primitive solids, capable of bi-converting between a CAD model and a Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of this method were demonstrated.

  5. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
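
    The delta scattering idea evaluated above can be sketched in a few lines: flights are sampled against a constant majorant cross section, and a collision is accepted as real with probability sigma(x)/sigma_max, so no boundary checking is needed inside the heterogeneous region. The following Python sketch is illustrative only (a 1-D slab with made-up cross sections), not the authors' code.

        import numpy as np

        rng = np.random.default_rng(1)

        def delta_track(sigma_of, sigma_max, x, mu, x_max):
            """Woodcock (delta-scattering) tracking of one photon in a 1-D slab.
            sigma_of(x): true total cross section at x [1/cm]; sigma_max: majorant.
            Returns the site of the first real collision, or None on escape."""
            while True:
                x += mu * (-np.log(rng.random()) / sigma_max)   # flight in the majorant medium
                if x < 0.0 or x > x_max:
                    return None                                 # escaped the slab
                if rng.random() < sigma_of(x) / sigma_max:
                    return x                                    # real collision accepted
                # otherwise a virtual (delta) collision: keep the direction and continue

        # Example: two-region slab, sigma = 0.2/cm for x < 2 cm and 1.5/cm beyond (assumed values)
        site = delta_track(lambda x: 0.2 if x < 2.0 else 1.5, 1.5, x=0.0, mu=1.0, x_max=5.0)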

  6. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results show the improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjoint regions using Quality based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting with that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

  7. NOMAD: a nodal microscopic analysis method for nuclear fuel depletion

    International Nuclear Information System (INIS)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Recently developed assembly homogenization techniques made possible very efficient global burnup calculations based on modern nodal methods. There are two possible ways of modeling the global depletion process: macroscopic and microscopic depletion models. Using a microscopic global depletion approach, NOMAD (NOdal Microscopic Analysis Method for Nuclear Fuel Depletion), a multigroup, two- and three-dimensional, multicycle depletion code, was devised. The code uses the ILLICO nodal diffusion model. The formalism of the ILLICO methodology is extended to treat changes in the macroscopic cross sections during a depletion cycle without recomputing the coupling coefficients. This results in a computationally very efficient method. The code was tested against a well-known depletion benchmark problem, in which a two-dimensional pressurized water reactor is depleted through two cycles. Both cycles were run with 1 x 1 and 2 x 2 nodes per assembly. The one node per assembly solution gives unacceptable results, while the 2 x 2 solution gives relative power errors consistently below 2%.

  8. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculating efficiency.
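
    The contrast between a direct (analog) estimator and a weighted estimator can be illustrated with a toy rare-event problem. The sketch below is not the authors' formulation; it simply shows why a likelihood-ratio-weighted estimator reduces variance for a highly reliable series system (all rates and the biasing factor are assumed for illustration).

        import numpy as np

        rng = np.random.default_rng(42)

        lam = np.array([1e-6, 2e-6])     # component failure rates [1/h] (assumed)
        T = 100.0                        # mission time [h]
        n = 200_000
        lam_sys = lam.sum()              # a series system fails at the first component failure

        # Direct (analog) estimator: rare failures -> few scoring histories, high variance.
        t = rng.exponential(1.0 / lam_sys, n)
        p_direct = np.mean(t <= T)

        # Weighted estimator: sample from an inflated rate and correct each history
        # with its likelihood ratio; failures become frequent but carry small weights.
        lam_b = 100.0 * lam_sys
        tb = rng.exponential(1.0 / lam_b, n)
        w = (lam_sys / lam_b) * np.exp(-(lam_sys - lam_b) * tb)
        p_weighted = np.mean(w * (tb <= T))

        p_exact = 1.0 - np.exp(-lam_sys * T)   # ~3.0e-4, for comparison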

  9. Transient Treg depletion enhances therapeutic anti‐cancer vaccination

    Science.gov (United States)

    Aston, Wayne J.; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L.; Solin, Jessica N.; Ma, Shaokang; Lesterhuis, W. Joost; Dick, Ian; Holt, Robert A.; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A.

    2016-01-01

    Introduction: Regulatory T cells (Treg) play an important role in suppressing anti‐tumor immunity, and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti‐cancer immunity, we used a Diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Methods: Tumor bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival were monitored. Results: DTX specifically depleted Treg in a transient, dose‐dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs, as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only when CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor‐peptide vaccination. Conclusions: BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti‐tumor immunity. DTX‐mediated Treg depletion is transient, dose‐dependent, and leads to strong anti‐tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor‐specific vaccination at low doses. Together these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies. PMID:28250921

  10. How Ego Depletion Affects Sexual Self-Regulation: Is It More Than Resource Depletion?

    Science.gov (United States)

    Nolet, Kevin; Rouleau, Joanne-Lucine; Benbouriche, Massil; Carrier Emond, Fannie; Renaud, Patrice

    2015-12-21

    Rational thinking and decision making are impacted when in a state of sexual arousal. The inability to self-regulate arousal can be linked to numerous problems, like sexual risk taking, infidelity, and sexual coercion. Studies have shown that most men are able to exert voluntary control over their sexual excitation with various levels of success. Both situational and dispositional factors can influence self-regulation achievement. The goal of this research was to investigate how ego depletion, a state of low self-control capacity, interacts with personality traits-propensities for sexual excitation and inhibition-and cognitive absorption, to cause sexual self-regulation failure. The sexual responses of 36 heterosexual males were assessed using penile plethysmography. They were asked to control their sexual arousal in two conditions, with and without ego depletion. Results suggest that ego depletion has opposite effects based on the trait sexual inhibition, as individuals moderately inhibited showed an increase in performance while highly inhibited ones showed a decrease. These results challenge the limited resource model of self-regulation and point to the importance of considering how people adapt to acute and high challenging conditions.

  11. Halo Star Lithium Depletion

    International Nuclear Information System (INIS)

    Pinsonneault, M. H.; Walker, T. P.; Steigman, G.; Narayanan, Vijay K.

    1999-01-01

    The depletion of lithium during the pre-main-sequence and main-sequence phases of stellar evolution plays a crucial role in the comparison of the predictions of big bang nucleosynthesis with the abundances observed in halo stars. Previous work has indicated a wide range of possible depletion factors, ranging from minimal in standard (nonrotating) stellar models to as much as an order of magnitude in models that include rotational mixing. Recent progress in the study of the angular momentum evolution of low-mass stars permits the construction of theoretical models capable of reproducing the angular momentum evolution of low-mass open cluster stars. The distribution of initial angular momenta can be inferred from stellar rotation data in young open clusters. In this paper we report on the application of these models to the study of lithium depletion in main-sequence halo stars. A range of initial angular momenta produces a range of lithium depletion factors on the main sequence. Using the distribution of initial conditions inferred from young open clusters leads to a well-defined halo lithium plateau with modest scatter and a small population of outliers. The mass-dependent angular momentum loss law inferred from open cluster studies produces a nearly flat plateau, unlike previous models that exhibited a downward curvature for hotter temperatures in the 7Li-Teff plane. The overall depletion factor for the plateau stars is sensitive primarily to the solar initial angular momentum used in the calibration for the mixing diffusion coefficients. Uncertainties remain in the treatment of the internal angular momentum transport in the models, and the potential impact of these uncertainties on our results is discussed. The 6Li/7Li depletion ratio is also examined. We find that the dispersion in the plateau and the 6Li/7Li depletion ratio scale with the absolute 7Li depletion in the plateau, and we use observational data to set bounds on the 7Li depletion in main-sequence halo

  12. Evaluation of three high abundance protein depletion kits for umbilical cord serum proteomics

    Directory of Open Access Journals (Sweden)

    Nie Jing

    2011-05-01

    Background: High abundance protein depletion is a major challenge in the study of serum/plasma proteomics. Prior to this study, most commercially available kits for depletion of highly abundant proteins had only been tested and evaluated in adult serum/plasma, while the depletion efficiency on umbilical cord serum/plasma had not been clarified. Structural differences between some adult and fetal proteins (such as albumin) make it likely that depletion approaches for adult and umbilical cord serum/plasma will be variable. Therefore, the primary purposes of the present study are to investigate the efficiencies of several commonly-used commercial kits during high abundance protein depletion from umbilical cord serum and to determine which kit yields the most effective and reproducible results for further proteomics research on umbilical cord serum. Results: The immunoaffinity based kits (PROTIA-Sigma and 5185-Agilent) displayed higher depletion efficiency than the immobilized dye based kit (PROTBA-Sigma) in umbilical cord serum samples. Both the PROTIA-Sigma and 5185-Agilent kits maintained high depletion efficiency when used three consecutive times. Depletion by the PROTIA-Sigma kit improved 2DE gel quality by reducing smeared bands produced by the presence of high abundance proteins and increasing the intensity of other protein spots. During image analysis using identical detection parameters, 411 ± 18 spots were detected in crude serum gels, while 757 ± 43 spots were detected in depleted serum gels. Eight spots unique to depleted serum gels were identified by MALDI-TOF/TOF MS, seven of which were low abundance proteins. Conclusions: The immunoaffinity based kits exceeded the immobilized dye based kit in high abundance protein depletion of umbilical cord serum samples and dramatically improved 2DE gel quality for detection of trace biomarkers.

  13. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2016-11-15

    Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR type reactor based on the Monte Carlo radiation transport computer code MCNP5. The ENDF/B-VI continuous energy cross-section library has been employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (maximum difference is less than 56%). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma fluxes and dose rates in the radial shield around the core of PWR type reactors.

  14. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    Directory of Open Access Journals (Sweden)

    Xueli Chen

    2010-01-01

    During the past decade, the Monte Carlo method has found wide application in optical imaging to simulate the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of the lens system is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

  15. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect

    NARCIS (Netherlands)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects.

  17. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    Science.gov (United States)

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect has also been robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task and those in the other group completed an auditory attention regulation task; then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. A Newton-based Jacobian-free approach for neutronic-Monte Carlo/thermal-hydraulic static coupled analysis

    International Nuclear Information System (INIS)

    Mylonakis, Antonios G.; Varvayanni, M.; Catsaros, N.

    2017-01-01

    Highlights: • A Newton-based Jacobian-free Monte Carlo/thermal-hydraulic coupling approach is introduced. • OpenMC is coupled with COBRA-EN using a Newton-based approach. • The introduced coupling approach is tested in numerical experiments. • The performance of the new approach is compared with the traditional “serial” coupling approach. -- Abstract: In the field of nuclear reactor analysis, multi-physics calculations that account for the coupled nature of the neutronic and thermal-hydraulic phenomena are of major importance for both reactor safety and design. So far, in the context of Monte-Carlo neutronic analysis, a kind of “serial” algorithm has mainly been used for coupling with thermal-hydraulics. The main motivation of this work is the interest in an algorithm that maintains the distinct treatment of the involved fields within a tight coupling context, which could translate into higher convergence rates and more stable behaviour. This work investigates the possibility of replacing the commonly used “serial” iteration with an approximate Newton algorithm. The selected algorithm, called Approximate Block Newton, is a version of the Jacobian-free Newton Krylov method suitably modified for coupling mono-disciplinary solvers. Within this Newton scheme the linearised system is solved with a Krylov solver in order to avoid the creation of the Jacobian matrix. A coupling algorithm between Monte-Carlo neutronics and thermal-hydraulics based on the above-mentioned methodology is developed and its performance is analysed. More specifically, OpenMC, a Monte-Carlo neutronics code, and COBRA-EN, a thermal-hydraulics code for sub-channel and core analysis, are merged in a coupling scheme using the Approximate Block Newton method, aiming to examine the performance of this scheme and compare it with that of the “traditional” serial iterative scheme. First results show a clear improvement of the convergence, especially in problems where significant
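
    The Approximate Block Newton idea rests on the standard Jacobian-free Newton-Krylov building block: the Krylov solver only needs Jacobian-vector products, which are approximated by finite differences of the residual, so the Jacobian matrix is never formed. A generic sketch (with a toy two-field fixed-point residual, not the OpenMC/COBRA-EN coupling itself) might look as follows.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(residual, x0, n_newton=20, tol=1e-10, eps=1e-7):
            """Jacobian-free Newton-Krylov: solve residual(x) = 0.
            Jacobian-vector products are finite differences of the residual."""
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(n_newton):
                f = residual(x)
                if np.linalg.norm(f) < tol:
                    break
                jv = lambda v, x=x, f=f: (residual(x + eps * v) - f) / eps
                J = LinearOperator((x.size, x.size), matvec=jv, dtype=float)
                dx, _ = gmres(J, -f)          # Krylov solve of the linearised system
                x += dx
            return x

        # Toy coupled "neutronics/thermal-hydraulics" fixed point: x = (power, temperature)
        def residual(x):
            p, T = x
            return np.array([p - 1.0 / (1.0 + 1e-3 * T),   # power depends on temperature
                             T - (300.0 + 50.0 * p)])      # temperature depends on power

        sol = jfnk(residual, [1.0, 300.0])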

  19. Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.

    Science.gov (United States)

    Yang, Lina; Minnich, Austin J

    2017-03-14

    Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.

  20. The Toxicity of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Wayne Briner

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.

  1. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of the control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
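
    The abstract does not spell out the ABN algorithm beyond its name, but the underlying task, finding the control rod position at which k_eff crosses unity, can be sketched generically with a bracketing regula-falsi search around a Monte Carlo k_eff estimate. Everything below, including the mock k_eff(z), is an illustrative assumption, not the MOI/MC21 procedure.

        def critical_rod_search(k_eff, z_lo, z_hi, target=1.0, tol=1e-4, n_max=30):
            """Regula-falsi search for the rod insertion z with k_eff(z) = target.
            k_eff must be a callable; with a Monte Carlo code each call is one run."""
            f_lo, f_hi = k_eff(z_lo) - target, k_eff(z_hi) - target
            if f_lo * f_hi > 0:
                raise ValueError("critical position not bracketed")
            z = z_lo
            for _ in range(n_max):
                z = z_hi - f_hi * (z_hi - z_lo) / (f_hi - f_lo)   # secant point of the bracket
                f = k_eff(z) - target
                if abs(f) < tol:
                    break
                if f * f_lo > 0:
                    z_lo, f_lo = z, f
                else:
                    z_hi, f_hi = z, f
            return z

        # Mock response: deeper insertion z [cm] lowers reactivity (assumed linear trend)
        z_crit = critical_rod_search(lambda z: 1.05 - 8e-4 * z, 0.0, 150.0)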

  2. Acceptance and commissioning of a Monte Carlo based computerized planning system; Aceptacion y puesta en marcha de un sistema de planificacion computarizada basado en Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.

    2013-07-01

    Acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator, and it performs the dose calculation with an X-ray voxel Monte Carlo (XVMC) algorithm. (Author)

  3. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well established cascade theory, i.e. it describes the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
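
    The cascade view of the second model lends itself to a compact simulation: each stage is a stochastic selection (binomial) or gain (Poisson) acting on the quanta that survive the previous stage. The stage probabilities and gains below are invented for illustration and are not parameters of the Hamamatsu C9312SK.

        import numpy as np

        rng = np.random.default_rng(0)

        mean_xrays = 200.0                               # mean X-ray quanta per pixel
        n_x = rng.poisson(mean_xrays, 10_000)            # stage 0: incident quanta
        absorbed = rng.binomial(n_x, 0.7)                # stage 1: absorption in the scintillator
        light = rng.poisson(absorbed * 500.0)            # stage 2: scintillation gain (photons/X-ray)
        coupled = rng.binomial(light, 0.5)               # stage 3: light spread / optical coupling
        detected = rng.binomial(coupled, 0.8)            # stage 4: photodiode quantum efficiency

        # DQE-like figure: output SNR^2 over input SNR^2 (input is Poisson, SNR^2 = mean)
        dqe = (detected.mean() ** 2 / detected.var()) / mean_xrays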

  4. Monte Carlo Based Framework to Support HAZOP Study

    DEFF Research Database (Denmark)

    Danko, Matej; Frutiger, Jerome; Jelemenský, Ľudovít

    2017-01-01

    The proposed framework considers deviations in process parameters simultaneously, thereby bringing an improvement to the Hazard and Operability study (HAZOP), which normally considers only one deviation in process parameters at a time. Furthermore, Monte Carlo filtering was then used to identify operability and hazard issues including...

  5. Active volume studies with depleted and enriched BEGe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Sturm, Katharina von [Eberhard Karls Universitaet Tuebingen (Germany); Universita degli Studi di Padova, Padua (Italy); Collaboration: GERDA-Collaboration

    2013-07-01

    The Gerda experiment is currently taking data for the search of the 0νββ decay in {sup 76}Ge. In 2013, 30 newly manufactured Broad Energy Germanium (BEGe) diodes will be deployed which will double the active mass within Gerda. These detectors were fabricated from high-purity germanium enriched in {sup 76}Ge and tested in the HADES underground laboratory, owned by SCK.CEN, in Mol, Belgium. As the BEGes are source and detector at the same time, one crucial parameter is their active volume which directly enters into the evaluation of the half-life. This talk illustrates the dead layer and active volume determination of prototype detectors from depleted germanium as well as the newly produced detectors from enriched material, using gamma spectroscopy methods and comparing experimental results to Monte-Carlo simulations. Recent measurements and their results are presented, and systematic effects are discussed.

  6. Groundwater Depletion Embedded in International Food Trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-01-01

    Recent hydrological modeling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world's food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world's population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.

  8. Comparison of KANEXT and SERPENT for fuel depletion calculations of a sodium fast reactor

    International Nuclear Information System (INIS)

    Lopez-Solis, R.C.; Francois, J.L.; Becker, M.; Sanchez-Espinoza, V.H.

    2014-01-01

    As most Generation-IV systems are still in development, efficient and reliable computational tools are needed to obtain accurate results in reasonable computing time. In this study, the KANEXT code system is presented and validated against the well-known Monte Carlo code SERPENT for fuel depletion calculations of a sodium fast reactor (SFR). The KArlsruhe Neutronic EXtended Tool (KANEXT) is a modular code system for deterministic reactor calculations, consisting of one kernel and several modules. Results obtained with KANEXT for the SFR core are in good agreement with those of SERPENT, e.g. the neutron multiplication factor and the evolution of isotopes with burnup. (author)

  9. ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali

    2016-07-01

    OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronic problems, mainly k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface, so our Java-based application named ERSN-OpenMC provides one. The main feature of this application is to give users an easy-to-use and flexible graphical interface to build better and faster simulations, with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the build process of the OpenMC code and related libraries, and the freedom for users to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.

  10. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. Monte Carlo methods would also be better for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. In general, McBurn can deplete any geometrical system that can be handled by the underlying Monte Carlo transport code.
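
    The depletion module coupled to the Monte Carlo transport code has to integrate the Bateman equations dN/dt = A N over each burnup step, with the matrix A assembled from decay constants and the Monte Carlo one-group reaction rates. A minimal sketch of one depletion step via the matrix exponential is given below; the three-nuclide chain and all numbers (flux, one-group cross sections) are placeholders, not McBurn data.

        import numpy as np
        from scipy.linalg import expm

        barn = 1e-24                      # cm^2
        phi = 3e14                        # assumed one-group flux [n/(cm^2 s)]
        sf_u235 = 40.0 * barn             # illustrative one-group cross sections
        sc_u238 = 2.0 * barn
        sa_pu239 = 150.0 * barn

        # Chain: 235U burns out; 238U captures to 239Pu (239U/239Np decays collapsed);
        # 239Pu is removed by absorption.  Rows/cols: [235U, 238U, 239Pu].
        A = np.array([[-phi * sf_u235, 0.0,            0.0],
                      [0.0,           -phi * sc_u238,  0.0],
                      [0.0,            phi * sc_u238, -phi * sa_pu239]])

        N0 = np.array([7e20, 2.2e22, 0.0])    # initial number densities [1/cm^3]
        dt = 30 * 24 * 3600.0                 # one 30-day burnup step
        N = expm(A * dt) @ N0                 # N(t + dt) = exp(A dt) N(0)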

  11. Management of depleted uranium

    International Nuclear Information System (INIS)

    2001-01-01

    Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)

  12. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...

  13. Comparative Analysis of VERA Depletion Problems

    International Nuclear Information System (INIS)

    Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    Each code has its own depletion solver, and different solvers can produce different depletion calculation results. In order to produce reference solutions for comparing depletion calculations, sensitivity studies should first be performed for each depletion solver. Sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion calculation results, the multiplication factors are usually compared as a function of burnup. In this study, new comparison methods have been introduced that use the number density of an isotope or element, and a cumulative flux instead of burnup. Optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion with the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. It was noted that the depletion intra-zone sensitivity is only pin-type dependent: one depletion intra-zone for the normal UO2 pin and ten for the gadolinia rod, respectively, are required to obtain the reference solutions. When the optimized depletion calculation options are used, differences in Q-values are found to be a main cause of the differences between solutions. New comparison methods were thus introduced for consistent code-to-code comparisons even when different kappa libraries are used in the depletion calculations.

  14. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)

    2009-01-15

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  15. Improving quantum efficiency and spectral resolution of a CCD through direct manipulation of the depletion region

    Science.gov (United States)

    Brown, Craig; Ambrosi, Richard M.; Abbey, Tony; Godet, Olivier; O'Brien, R.; Turner, M. J. L.; Holland, Andrew; Pool, Peter J.; Burt, David; Vernon, David

    2008-07-01

    Future generations of X-ray astronomy instruments will require position sensitive detectors in the form of charge-coupled devices (CCDs) for X-ray spectroscopy and imaging, with the ability to probe the X-ray universe with greater efficiency. This will require the development of CCDs with structures that improve their quantum efficiency over the current state of the art. The quantum efficiency improvements will have to span a broad energy range (0.2 keV to >15 keV). These devices will also have to be designed to withstand the harsh radiation environments associated with orbits that extend beyond the Earth's magnetosphere. This study outlines the most recent work carried out at the University of Leicester, focused on improving the quantum efficiency of an X-ray sensitive CCD through direct manipulation of the device depletion region. It is also shown that increased spectral resolution is achieved using this method, due to a decrease in the number of multi-pixel events. Monte Carlo and analytical models of the CCD have been developed and used to determine the depletion depths achieved through variation of the device substrate voltage, Vss. The models are also used to investigate multi-pixel event distributions and quantum efficiency as a function of depletion depth.
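
    To first order, the depletion depth behaves like that of a one-sided abrupt junction, growing with the square root of the applied substrate bias. The sketch below is a rough analytical estimate of the kind the paper's models refine, with an assumed effective doping for high-resistivity silicon; it is not the Leicester device model.

        import numpy as np

        q = 1.602e-19                    # elementary charge [C]
        eps_si = 11.7 * 8.854e-14        # permittivity of silicon [F/cm]
        N_d = 1e12                       # effective doping [1/cm^3] (assumed, high-resistivity Si)

        for v_ss in (10.0, 20.0, 40.0, 80.0):
            w = np.sqrt(2.0 * eps_si * v_ss / (q * N_d))    # depletion width [cm]
            print(f"Vss = {v_ss:5.1f} V  ->  depletion depth ~ {w * 1e4:6.0f} um")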

  16. Depleting high-abundant and enriching low-abundant proteins in human serum: An evaluation of sample preparation methods using magnetic nanoparticle, chemical depletion and immunoaffinity techniques.

    Science.gov (United States)

    de Jesus, Jemmyson Romário; da Silva Fernandes, Rafael; de Souza Pessôa, Gustavo; Raimundo, Ivo Milton; Arruda, Marco Aurélio Zezzi

    2017-08-01

    The efficiency of three different depletion methods to remove the most abundant proteins and enrich low-abundance human serum proteins is evaluated, in order to make the search for and discovery of biomarkers more efficient. These methods utilize magnetic nanoparticles (MNPs), chemical reagents (sequential application of dithiothreitol and acetonitrile, DTT/ACN), and a commercial apparatus based on immunoaffinity (ProteoMiner, PM). The comparison between methods shows significant removal of abundant proteins, remaining in the supernatant at concentrations of 4.6±0.2, 3.6±0.1, and 3.3±0.2 µg µL-1 (n=3) for MNPs, DTT/ACN and PM respectively, from a total protein content of 54 µg µL-1. Using GeLC-MS/MS analysis, MNP depletion shows good efficiency in removing high molecular weight proteins (>80 kDa). Due to the synergic effect between the reagents DTT and ACN, DTT/ACN-based depletion offers good performance in the depletion of thiol-rich proteins, such as albumin and transferrin (DTT action), as well as of high molecular weight proteins (ACN action). Furthermore, PM equalization confirms its efficiency in concentrating low-abundance proteins, decreasing the dynamic range of protein levels in human serum. Direct comparison between the treatments reveals 72 proteins identified when using MNP depletion (43 of them exclusively by this method), but only 20 proteins using DTT/ACN (seven exclusively by this method). Additionally, after PM treatment 30 proteins were identified, seven exclusively by this method. Thus, MNP and DTT/ACN depletion can be simple, quick, cheap, and robust alternatives for immunochemistry-based protein depletion, providing a potential strategy in the search for disease biomarkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Bond rupture between colloidal particles with a depletion interaction

    Energy Technology Data Exchange (ETDEWEB)

    Whitaker, Kathryn A.; Furst, Eric M., E-mail: furst@udel.edu [Department of Chemical and Biomolecular Engineering and Center for Molecular and Engineering Thermodynamics, University of Delaware, Newark, Delaware 19716 (United States)

    2016-05-15

    The force required to break the bonds of a depletion gel is measured by dynamically loading pairs of colloidal particles suspended in a solution of a nonadsorbing polymer. Sterically stabilized poly(methyl methacrylate) colloids 2.7 μm in diameter are brought into contact in a solvent mixture of cyclohexane-cyclohexyl bromide and polystyrene polymer depletant. The particle pairs are subjected to a tensile load at a constant loading rate over many approach-retraction cycles. The stochastic nature of the thermal rupture events results in a distribution of bond rupture forces with an average magnitude and variance that increase with increasing depletant concentration. The measured force distribution is described by the flux of particle pairs sampling the energy barrier of the bond interaction potential based on the Asakura–Oosawa depletion model. A transition state model demonstrates the significance of lubrication hydrodynamic interactions and the effect of the applied loading rate on the rupture force of bonds in a depletion gel.
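
    The Asakura–Oosawa model used for the bond interaction has a simple closed form: the attraction equals the depletant osmotic pressure times the overlap volume of the two excluded-volume shells. A direct transcription for an ideal depletant (all parameter values in the usage line are assumed for illustration) is:

        import numpy as np

        def u_ao(r, R, r_d, n_p, kT=1.0):
            """Asakura-Oosawa depletion potential between two hard spheres.
            r: centre separation; R: particle radius; r_d: depletant radius;
            n_p: depletant number density.  Returns energy in units of kT."""
            Rd = R + r_d                                  # radius of the excluded shell
            if r < 2.0 * R:
                return np.inf                             # hard-core overlap
            if r >= 2.0 * Rd:
                return 0.0                                # shells no longer overlap
            v_ov = (4.0 * np.pi / 3.0) * Rd**3 * (1.0 - 3.0 * r / (4.0 * Rd)
                                                  + r**3 / (16.0 * Rd**3))
            return -n_p * kT * v_ov                       # -Pi_depletant * V_overlap

        # Contact value for 1.35 um radius spheres and a 10 nm depletant
        # (lengths in um, n_p in 1/um^3; illustrative numbers only)
        u_contact = u_ao(2 * 1.35, 1.35, 0.010, n_p=5e4)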

  18. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle. The water displacement method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of an object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produced high accuracy and precision for volume measurement.
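
    The Monte Carlo volume approximation amounts to uniform sampling in a bounding box: the fraction of points classified as inside the object, times the box volume, estimates the object volume. A minimal sketch, with a sphere standing in for the silhouette-based inside test, is:

        import numpy as np

        rng = np.random.default_rng(7)

        def mc_volume(inside, lo, hi, n=200_000):
            """Monte Carlo volume estimate: inside(points) -> boolean mask."""
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            pts = rng.uniform(lo, hi, size=(n, lo.size))     # uniform points in the box
            return inside(pts).mean() * np.prod(hi - lo)     # hit fraction * box volume

        # Sanity check with a unit sphere (true volume 4*pi/3 ~ 4.189); for a food
        # product, inside() would instead test each point against the five silhouettes.
        v = mc_volume(lambda p: (p**2).sum(axis=1) <= 1.0, [-1, -1, -1], [1, 1, 1])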

  19. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Stabin, M [Vanderbilt Univ Medical Ctr, Nashville, TN (United States)

    2015-06-15

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq·s) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while it takes ARCHER 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478)

  20. Monte Carlo-based investigation of water-equivalence of solid phantoms at 137Cs energy

    International Nuclear Information System (INIS)

    Vishwakarma, Ramkrushna S.; Palani Selvam, T.; Sahoo, Sridhar; Mishra, Subhalaxmi; Chourasiya, Ghanshyam

    2013-01-01

    Investigation of solid phantom materials such as solid water, virtual water, plastic water, RW1, polystyrene, and polymethylmethacrylate (PMMA) for their equivalence to liquid water at the 137Cs energy (photon energy of 662 keV) under full scatter conditions is carried out using the EGSnrc Monte Carlo code system, which is used to calculate distance-dependent phantom scatter corrections. The study also includes separation of the primary and scattered dose components. Monte Carlo simulations are carried out with up to 5 × 10^9 primary particle histories to attain statistical uncertainties of less than 0.3% in the estimation of dose. The investigation reveals that the solid water, virtual water, and RW1 phantoms are water equivalent up to 15 cm from the source. Plastic water, PMMA, and polystyrene are water equivalent up to 10 cm. At 15 cm from the source, the phantom scatter corrections are 1.035, 1.050, and 0.949 for the PMMA, plastic water, and polystyrene phantoms, respectively. (author)

  1. Physics of fully depleted CCDs

    International Nuclear Information System (INIS)

    Holland, S E; Bebek, C J; Kolbe, W F; Lee, J S

    2014-01-01

    In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photo-generated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully depleted substrates arising from resistivity variations inherent to the growth of the high-resistivity silicon used to fabricate the CCDs

  2. Development of a shield based on Monte-Carlo studies for the COBRA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Nadine [Institut fuer Experimentalphysik, 22761 Hamburg (Germany); Collaboration: COBRA-Collaboration

    2013-07-01

    COBRA is a next-generation experiment searching for neutrinoless double beta decay using CdZnTe semiconductor detectors. The main focus is on {sup 116}Cd, with a decay energy of 2813.5 keV well above the highest naturally occurring gamma lines. The concept for a large scale set-up consists of an array of CdZnTe detectors with a total mass of 420 kg enriched in {sup 116}Cd up to 90 %. With a background rate in the order of 10{sup -3} counts/keV/kg/year, the experiment would be sensitive to a half-life larger than 10{sup 26} years, corresponding to a Majorana mass term m{sub ββ} smaller than 50 meV. To achieve the background level, an appropriate shield is necessary. The shield is developed based on Monte-Carlo simulations. For that, different materials and configurations are tested. In the talk the current status of the Monte-Carlo survey is presented and discussed.

  3. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
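
    The essence of the approach, train a fast network on prior samples of the expensive forward model, estimate the resulting modeling error, and fold that error into the likelihood of a sampler, can be sketched as follows. The toy forward function, network size and noise levels are all assumptions, not the GPR traveltime setup of the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def forward(m):                       # stand-in for an expensive forward solver
            return np.array([np.sin(m[0]) + m[1] ** 2, m[0] * m[1]])

        # 1) Train the surrogate on models drawn from the (uniform box) prior
        M = rng.uniform(-2.0, 2.0, size=(2000, 2))
        D = np.array([forward(m) for m in M])
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000).fit(M, D)

        # 2) Quantify the modeling error and add it to the data-noise variance
        sigma2 = 0.01 ** 2 + np.var(D - net.predict(M), axis=0)

        d_obs = forward(np.array([0.5, -1.0])) + rng.normal(0.0, 0.01, 2)

        def log_like(m):
            r = net.predict(m[None, :])[0] - d_obs
            return -0.5 * np.sum(r ** 2 / sigma2)

        # 3) Metropolis sampling driven by the cheap surrogate forward
        m, ll, chain = np.zeros(2), -np.inf, []
        for _ in range(5000):
            prop = m + rng.normal(0.0, 0.1, 2)
            if np.all(np.abs(prop) <= 2.0):               # stay inside the prior box
                ll_p = log_like(prop)
                if np.log(rng.random()) < ll_p - ll:
                    m, ll = prop, ll_p
            chain.append(m.copy())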

  4. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    ias

    Only fragmentary abstract snippets are available for this record, referring to the development of nuclear weapons at Los Alamos, to Monte Carlo simulations of solids (Reviews of Modern Physics, Vol. 73, pp. 33–), and to computer algorithms that are usually based on a random seed that starts the …
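
    Since only the title of this record survives intact, here is a minimal, generic sketch of the variational Monte Carlo technique itself (not the record's content): Metropolis sampling of |psi|^2 for the 1-D harmonic oscillator (hbar = m = omega = 1) with a Gaussian trial wavefunction psi_a(x) = exp(-a x^2), whose local energy is E_L(x) = a + x^2 (1/2 - 2 a^2).

        import numpy as np

        rng = np.random.default_rng(1)

        def local_energy(x, a):
            # Constant (= 1/2) at the exact ground state a = 1/2
            return a + x**2 * (0.5 - 2.0 * a**2)

        def vmc_energy(a, n_steps=50000, step=1.0):
            """Metropolis sampling of |psi_a|^2; returns the mean local energy."""
            x, e_sum, n_kept = 0.0, 0.0, 0
            for i in range(n_steps):
                x_new = x + step * (rng.random() - 0.5)
                # acceptance ratio |psi(x_new)/psi(x)|^2
                if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
                    x = x_new
                if i > n_steps // 10:        # discard equilibration steps
                    e_sum += local_energy(x, a)
                    n_kept += 1
            return e_sum / n_kept

        for a in (0.3, 0.5, 0.7):
            print(f"a = {a:.1f}  <E> ~ {vmc_energy(a):.4f}")  # minimum near a = 0.5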

  5. Ozone-depleting Substances (ODS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...

  6. Depletion policies for oil-exporting developing economies

    Energy Technology Data Exchange (ETDEWEB)

    Stournaras, Y A

    1984-01-01

    The fact that most oil-exporting countries are developing economies has important implications for oil supply that have not been properly taken into account in the literature on exhaustible resource depletion. We examine how depletion policies are affected by trade uncertainty, given the major oil exporters' high degree of 'dependence' on crude oil revenues; by investment time lags, which delay the exploitation of (some of) these countries' comparative advantage in petroleum-based development; and by ideological objections to the ideal of a rentier society and to foreign capital.

  7. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically, which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.

  8. Human podocyte depletion in association with older age and hypertension.

    Science.gov (United States)

    Puelles, Victor G; Cullen-McEwen, Luise A; Taylor, Georgina E; Li, Jinhua; Hughson, Michael D; Kerr, Peter G; Hoy, Wendy E; Bertram, John F

    2016-04-01

    Podocyte depletion plays a major role in the development and progression of glomerulosclerosis. Many kidney diseases are more common in older age and often coexist with hypertension. We hypothesized that podocyte depletion develops in association with older age and is exacerbated by hypertension. Kidneys from 19 adult Caucasian American males without overt renal disease were collected at autopsy in Mississippi. Demographic data were obtained from medical and autopsy records. Subjects were categorized by age and hypertension as potential independent and additive contributors to podocyte depletion. Design-based stereology was used to estimate individual glomerular volume and total podocyte number per glomerulus, which allowed the calculation of podocyte density (number per volume). Podocyte depletion was defined as a reduction in podocyte number (absolute depletion) or podocyte density (relative depletion). The cortical location of glomeruli (outer or inner cortex) and presence of parietal podocytes were also recorded. Older age was an independent contributor to both absolute and relative podocyte depletion, featuring glomerular hypertrophy, podocyte loss, and thus reduced podocyte density. Hypertension was an independent contributor to relative podocyte depletion by exacerbating glomerular hypertrophy, mostly in glomeruli from the inner cortex. However, hypertension was not associated with podocyte loss. Absolute and relative podocyte depletion were exacerbated by the combination of older age and hypertension. The proportion of glomeruli with parietal podocytes increased with age but not with hypertension alone. These findings demonstrate that older age and hypertension are independent and additive contributors to podocyte depletion in white American men without kidney disease. Copyright © 2016 the American Physiological Society.

  9. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

    A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to that at finite temperatures.

  10. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and has an irregular shape, it is not easy to calculate the effective dose in patients accurately. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are too time-consuming for routine planning. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system against Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0, and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for dose calculation.
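
    To make the kernel idea concrete, a toy one-dimensional sketch (field size, kernel width, and the Gaussian kernel shape are illustrative assumptions, not the Helax-TMS algorithm): the dose profile is the fluence convolved with a pencil-beam kernel.

        import numpy as np

        # Toy pencil-beam-kernel dose calculation: dose = fluence (*) kernel
        nx = 101
        x = np.linspace(-10, 10, nx)                   # lateral position [cm]
        fluence = ((x > -4) & (x < 4)).astype(float)   # open 8 cm field

        # Gaussian pencil-beam kernel (lateral spread of a single ray), assumed
        sigma = 0.6                                    # [cm]
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()                         # normalize

        dose = np.convolve(fluence, kernel, mode="same")

        # The convolution reproduces the broad penumbra at the field edges;
        # inhomogeneity corrections are where the kernel approach starts to
        # deviate from Monte Carlo.
        print("central-axis dose:", dose[nx // 2].round(3))
        print("dose at field edge (x = 4 cm):", dose[np.argmin(np.abs(x - 4))].round(3))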

  11. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    Energy Technology Data Exchange (ETDEWEB)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2009-03-21

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  12. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    International Nuclear Information System (INIS)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M; Vaquero, J J; Desco, M

    2009-01-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  13. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
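
    As a concrete instance of the first algorithm in that list, a minimal rejection sampler for an unnormalized density on [0, 1] (target and bound are illustrative choices, not from the review):

        import numpy as np

        rng = np.random.default_rng(3)

        def target(x):
            # Unnormalized density, proportional to a Beta(3, 6) distribution
            return x**2 * (1 - x) ** 5

        M = 0.016  # envelope constant with target(x) <= M on [0, 1]

        def rejection_sample(n):
            samples = []
            while len(samples) < n:
                x = rng.random()                  # proposal: Uniform(0, 1)
                if rng.random() * M < target(x):  # accept with prob target(x)/M
                    samples.append(x)
            return np.array(samples)

        s = rejection_sample(10000)
        print("sample mean:", s.mean().round(3), " (Beta(3,6) mean = 1/3)")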

  14. [Acute tryptophan depletion in eating disorders].

    Science.gov (United States)

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rationale for using the acute tryptophan depletion technique in eating disorders (ED), and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies and makes it possible to evaluate brain serotonin activity. It is therefore used to investigate hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship between dysfunctions of serotonin activity and impulsive symptoms, the technique may be useful in the biological differentiation of the different subtypes of ED, that is, restrictive and bulimic. 57 female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally to patients and controls after a two-day low-tryptophan diet. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. Basic personality characteristics and impulsivity traits were also studied, and the relationship of the test response to the different clinical subtypes and to the temperamental and impulsive characteristics of the patients was examined. The test was effective, considerably reducing plasma tryptophan (by 76% from baseline) at five hours in the global sample. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value for studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  15. Depleted uranium plasma reduction system study

    International Nuclear Information System (INIS)

    Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.

    1994-12-01

    A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted uranium is currently stored in the form of solid UF{sub 6}, of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessary processing equipment and facilities to perform the process. The study has identified a total processing cost of approximately $3.00/kg of UF{sub 6} processed. Based on the results of this study, the development of a laboratory-scale system (1 kg/h throughput of UF{sub 6}) is warranted. Further scaling of the process to pilot scale will be determined after laboratory testing is complete.

  16. PENBURN - A 3-D Zone-Based Depletion/Burnup Solver

    International Nuclear Information System (INIS)

    Manalo, Kevin; Plower, Thomas; Rowe, Mireille; Mock, Travis; Sjoden, Glenn E.

    2008-01-01

    PENBURN (Parallel Environment Burnup) is a general depletion/burnup solver which, when provided with zone-based reaction rates, computes time-dependent isotope concentrations for a set of actinides and fission products. Burnup analysis in PENBURN is performed with a direct Bateman-solver chain solution technique. PENBURN is used in tandem with PENTRAN, a parallel multi-group anisotropic Sn code for 3-D Cartesian geometries. The linear chain method is used to solve individual isotope chains, which are then fully attributed by the burnup code to yield integrated isotope concentrations for each nuclide specified. Alongside the discussion of code features, a single PWR fuel pin calculation with the burnup code is performed and benchmarked against PIE (Post-Irradiation Examination) data from the SFCOMPO (Spent Fuel Composition, NEA) database, as well as against burnup codes in SCALE5.1. The conclusions of the paper detail the accuracy of PENBURN for the major actinides, flux profile behavior as a function of burnup, and criticality calculations for the PWR fuel pin model. (authors)
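
    For orientation, a minimal sketch of the analytic Bateman solution for a single linear chain, the building block such a chain solver evaluates (not PENBURN's actual implementation; the rates below are invented and assumed distinct):

        import numpy as np

        def bateman(n0, lambdas, t):
            """N_i(t) for a chain 1 -> 2 -> ... -> k with removal rates lambdas,
            started with N1(0) = n0 and all daughters at zero. Assumes the
            rates are pairwise distinct (the classic Bateman formula)."""
            lam = np.asarray(lambdas, dtype=float)
            n = np.zeros(len(lam))
            for i in range(len(lam)):          # nuclide index (0-based)
                li = lam[: i + 1]
                coeff = np.prod(li[:-1])       # product of parent rates
                s = 0.0
                for j in range(i + 1):
                    denom = np.prod(np.delete(li, j) - li[j])
                    s += np.exp(-li[j] * t) / denom
                n[i] = n0 * coeff * s
            return n

        # Example: 3-member chain with rates in 1/day, evaluated at t = 5 days
        print(bateman(1.0e10, [0.2, 0.05, 0.01], 5.0))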

  17. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    NARCIS (Netherlands)

    Salmon, S.J.; Adriaanse, M.A.; Vet, de E.W.M.L.; Fennis, B.M.; Ridder, de D.T.D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others.

  18. A probability-conserving cross-section biasing mechanism for variance reduction in Monte Carlo particle transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mendenhall, Marcus H., E-mail: marcus.h.mendenhall@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States); Weller, Robert A., E-mail: robert.a.weller@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States)

    2012-03-01

    In Monte Carlo particle transport codes, it is often important to adjust reaction cross-sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross-section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross-section change. This makes it possible to increase the cross-section of nuclear reactions by factors exceeding 10{sup 4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful in problems that involve the computation of particle penetration deep into a target (e.g. atmospheric showers or shielding studies).

  19. A probability-conserving cross-section biasing mechanism for variance reduction in Monte Carlo particle transport calculations

    International Nuclear Information System (INIS)

    Mendenhall, Marcus H.; Weller, Robert A.

    2012-01-01

    In Monte Carlo particle transport codes, it is often important to adjust reaction cross-sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross-section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross-section change. This makes it possible to increase the cross-section of nuclear reactions by factors exceeding 10{sup 4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful in problems that involve the computation of particle penetration deep into a target (e.g. atmospheric showers or shielding studies).
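
    The weight-correction logic can be illustrated in one dimension (a sketch of the general non-analog technique, not the Geant4 process the papers describe; all numbers are arbitrary): scaling the cross-section by b changes the sampled flight-distance distribution, and multiplying the weight by the pdf ratio restores the analog expectation.

        import numpy as np

        rng = np.random.default_rng(4)

        sigma = 0.05    # true macroscopic cross-section [1/cm]
        L = 10.0        # slab thickness [cm]
        b = 20.0        # bias factor applied to the cross-section

        def interaction_tally(n, bias):
            """Estimate the probability of interacting inside the slab."""
            sig_b = sigma * (b if bias else 1.0)
            tally = 0.0
            for _ in range(n):
                x = -np.log(rng.random()) / sig_b   # sampled flight distance
                if x < L:                           # interaction in the slab
                    w = 1.0
                    if bias:
                        # pdf ratio: (sigma e^{-sigma x}) / (sig_b e^{-sig_b x})
                        w = (sigma / sig_b) * np.exp((sig_b - sigma) * x)
                    tally += w
            return tally / n

        exact = 1.0 - np.exp(-sigma * L)
        print("exact      :", round(exact, 4))
        print("analog     :", round(interaction_tally(20000, False), 4))
        print("biased b=20:", round(interaction_tally(20000, True), 4))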

  20. The role of Monte Carlo burnup calculations in quantifying plutonium mass in spent fuel assemblies with non-destructive assay

    Energy Technology Data Exchange (ETDEWEB)

    Galloway, Jack D.; Tobin, Stephen J.; Trellue, Holly R.; Fensin, Michael L. [Los Alamos National Laboratory, Los Alamos, (United States)

    2011-12-15

    The Next Generation Safeguards Initiative (NGSI) of the United States Department of Energy has funded a multi-laboratory/university collaboration to quantify plutonium content in spent fuel (SF) with non-destructive assay (NDA) techniques and quantify the capability of these NDA techniques to detect pin diversions from SF assemblies. The first Monte Carlo based spent fuel library (SFL) developed for the NGSI program contained information for 64 different types of SF assemblies (four initial enrichments, burnups, and cooling times). It modeled a 17x17 Westinghouse pressurized water reactor (PWR) fuel assembly with four regions per fuel pin and as many fission products as possible; the number of fission products tracked was limited by the available memory. Studies have since indicated that additional fission product inclusion and asymmetric burning of the assembly are desired. Thus, an updated SFL has been developed using an enhanced version of MCNPX, more powerful computing resources, and the Monte Carlo-based burnup code Monteburns, which links MCNPX to a depletion code and models a representative 1/8 core geometry containing one region per fuel pin in the assemblies of interest, including a majority of the fission products with available cross sections. Often in safeguards, the limiting factor in the accuracy of NDA instruments is the quality of the working standard used in calibration. In the case of SF this is anticipated to also be true, particularly for several of the neutron techniques. The fissile isotopes of interest are co-mingled with neutron absorbers that alter the measured count rate. This paper will quantify how well working standards can be generated for PWR spent fuel assemblies and also describe the spatial plutonium distribution across an assembly. More specifically we will demonstrate how Monte Carlo gamma measurement simulations and a Monte Carlo burnup code can be used to characterize the emitted gamma

  1. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications

    Science.gov (United States)

    Lemaréchal, Yannick; Bert, Julien; Falconnet, Claire; Després, Philippe; Valeri, Antoine; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris

    2015-07-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled {sup 125}I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10{sup 6} simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications.

  2. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications

    International Nuclear Information System (INIS)

    Lemaréchal, Yannick; Bert, Julien; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris; Falconnet, Claire; Després, Philippe; Valeri, Antoine

    2015-01-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled {sup 125}I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10{sup 6} simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications. (paper)

  3. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required to capture the complex characteristics of nonstationary data arising from climate change. This study proposes a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared with existing models by Monte Carlo simulation to investigate the characteristics of the models and their applicability.

  4. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    Science.gov (United States)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
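
    A bare-bones example of the kind of sampler the review builds up to (a plain Metropolis run; the data, prior, and step size are illustrative and unrelated to the bmcmc package): sampling the posterior of a Gaussian mean with a flat prior.

        import numpy as np

        rng = np.random.default_rng(5)

        data = rng.normal(loc=2.0, scale=1.0, size=50)  # simulated observations

        def log_post(mu):
            # log posterior up to a constant: flat prior + Gaussian likelihood
            return -0.5 * np.sum((data - mu) ** 2)

        mu, lp = 0.0, log_post(0.0)
        chain = []
        for _ in range(20000):
            prop = mu + 0.3 * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                mu, lp = prop, lp_prop
            chain.append(mu)

        burned = np.array(chain[2000:])   # drop burn-in
        print("posterior mean ~", burned.mean().round(3),
              " posterior std ~", burned.std().round(3))  # ~1/sqrt(50)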

  5. Ego Depletion Impairs Implicit Learning

    Science.gov (United States)

    Thompson, Kelsey R.; Sanchez, Daniel J.; Wesley, Abigail H.; Reber, Paul J.

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent. PMID:25275517

  6. Ego depletion impairs implicit learning.

    Directory of Open Access Journals (Sweden)

    Kelsey R Thompson

    Full Text Available Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  7. Ego depletion impairs implicit learning.

    Science.gov (United States)

    Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  8. Exposure to nature counteracts aggression after depletion.

    Science.gov (United States)

    Wang, Yan; She, Yihan; Colarelli, Stephen M; Fang, Yuan; Meng, Hui; Chen, Qiuju; Zhang, Xin; Zhu, Hongwei

    2018-01-01

    Acts of self-control are more likely to fail after previous exertion of self-control, known as the ego depletion effect. Research has shown that depleted participants behave more aggressively than non-depleted participants, especially after being provoked. Although exposure to nature (e.g., a walk in the park) has been predicted to replenish resources common to executive functioning and self-control, the extent to which exposure to nature may counteract the depletion effect on aggression has yet to be determined. The present study investigated the effects of exposure to nature on aggression following depletion. Aggression was measured by the intensity of noise blasts participants delivered to an ostensible opponent in a competition reaction-time task. As predicted, an interaction occurred between depletion and environmental manipulations for provoked aggression. Specifically, depleted participants behaved more aggressively in response to provocation than non-depleted participants in the urban condition. However, provoked aggression did not differ between depleted and non-depleted participants in the natural condition. Moreover, within the depletion condition, participants in the natural condition had lower levels of provoked aggression than participants in the urban condition. This study suggests that a brief period of nature exposure may restore self-control and help depleted people regain control over aggressive urges. © 2017 Wiley Periodicals, Inc.

  9. VIP-Man: An image-based whole-body adult male model constructed from color photographs of the visible human project for multi-particle Monte Carlo calculations

    International Nuclear Information System (INIS)

    Xu, X.G.; Chao, T.C.; Bozkurt, A.

    2000-01-01

    Human anatomical models have been indispensable to radiation protection dosimetry using Monte Carlo calculations. Existing MIRD-based mathematical models are easy to compute and standardize, but they are simplified and crude compared to human anatomy. This article describes the development of an image-based whole-body model, called VIP-Man, using transversal color photographic images obtained from the National Library of Medicine's Visible Human Project for Monte Carlo organ dose calculations involving photons, electrons, neutrons, and protons. As the first of a series of papers on dose calculations based on VIP-Man, this article provides detailed information about how to construct an image-based model, as well as how to adopt it into well-tested Monte Carlo codes, EGS4, MCNP4B, and MCNPX.

  10. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  11. Base cation depletion, eutrophication and acidification of species-rich grasslands in response to long-term simulated nitrogen deposition

    Energy Technology Data Exchange (ETDEWEB)

    Horswill, Paul [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)], E-mail: paul.horswill@naturalengland.org.uk; O' Sullivan, Odhran; Phoenix, Gareth K.; Lee, John A.; Leake, Jonathan R. [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)

    2008-09-15

    Pollutant nitrogen deposition effects on soil and foliar element concentrations were investigated in acidic and limestone grasslands, located in one of the most nitrogen and acid rain polluted regions of the UK, using plots treated for 8-10 years with 35-140 kg N ha{sup -1} y{sup -1} as NH{sub 4}NO{sub 3}. Historical data suggest both grasslands have acidified over the past 50 years. Nitrogen deposition treatments caused the grassland soils to lose 23-35% of their total available bases (Ca, Mg, K, and Na) and they became acidified by 0.2-0.4 pH units. Aluminium, iron and manganese were mobilised and taken up by limestone grassland forbs and were translocated down the acid grassland soil. Mineral nitrogen availability increased in both grasslands and many species showed foliar N enrichment. This study provides the first definitive evidence that nitrogen deposition depletes base cations from grassland soils. The resulting acidification, metal mobilisation and eutrophication are implicated in driving floristic changes. - Nitrogen deposition causes base cation depletion, acidification and eutrophication of semi-natural grassland soils.

  12. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-02-01

    This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  13. CARMEN: a Monte Carlo system based on linear programming from direct apertures

    International Nuclear Information System (INIS)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-01-01

    The use of Monte Carlo (MC) has been shown to improve the accuracy of dose calculation compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full MC simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)

  14. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  15. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
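
    A toy sketch of the per-cycle rendez-vous the abstract analyzes (the "transport" step is a placeholder, not a real neutron simulation; assumes mpi4py is available): every rank must stop at the collective operations before any rank can begin the next cycle, which is exactly where the synchronization cost accumulates.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rng = np.random.default_rng(comm.Get_rank())

        k_eff = 1.0
        histories_per_rank = 10000
        for cycle in range(10):
            # placeholder transport sweep: pretend each history yields
            # ~k_eff fission neutrons on average
            local_fission = rng.poisson(k_eff, size=histories_per_rank).sum()

            # rendez-vous: all ranks synchronize here at the end of each cycle
            total_fission = comm.allreduce(local_fission, op=MPI.SUM)
            total_histories = comm.allreduce(histories_per_rank, op=MPI.SUM)
            k_eff = total_fission / total_histories  # feeds population control

        if comm.Get_rank() == 0:
            print("final cycle k-estimate:", round(k_eff, 4))

    Run with, e.g., mpiexec -n 8 python script.py; the allreduce calls are the barrier-like points whose cost grows with the number of ranks.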

  16. Mesh-based weight window approach for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, L.; Gardner, R.P.

    1997-01-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
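
    The splitting and Russian roulette games the abstract refers to reduce to a small amount of logic per particle. A minimal sketch (window bounds and the split cap are illustrative per-cell values, not MCNP's implementation); both branches preserve the expected weight:

        import numpy as np

        rng = np.random.default_rng(6)

        def apply_weight_window(weight, w_low, w_high, w_survive):
            """Return a list of (possibly zero) particle weights after the game."""
            if weight > w_high:                       # split into n copies
                n = int(min(np.ceil(weight / w_survive), 10))  # cap the split
                return [weight / n] * n
            if weight < w_low:                        # Russian roulette
                p_survive = weight / w_survive        # E[w] = p * w_survive = w
                return [w_survive] if rng.random() < p_survive else []
            return [weight]                           # inside the window: keep

        print(apply_weight_window(5.0, 0.5, 2.0, 1.0))  # split into lighter copies
        print(apply_weight_window(0.1, 0.5, 2.0, 1.0))  # rarely survives at w = 1.0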

  17. Integrated layout based Monte-Carlo simulation for design arc optimization

    Science.gov (United States)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533.

  18. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. The method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, was developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic treatment of a geometry with a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).

  19. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  20. Measurement of time-dependent fast neutron energy spectra in a depleted uranium assembly

    International Nuclear Information System (INIS)

    Whittlestone, S.

    1980-10-01

    Time-dependent neutron energy spectra in the range 0.6 to 6.4 MeV have been measured in a depleted uranium assembly. By selecting windows in the time range 0.9 to 82 ns after the beam pulse, it was possible to observe the change of the neutron energy distributions from spectra of predominantly 4 to 6 MeV neutrons to spectra composed almost entirely of fission neutrons. The measured spectra were compared to a Monte Carlo calculation of the experiment using the ENDF/B-IV data file. At times and energies at which the calculation predicted a fission spectrum, the experiment agreed with the calculation, confirming the accuracy of the neutron spectroscopy system. However, the presence of discrepancies at other times and energies suggested that there are significant inconsistencies in the inelastic cross sections in the 1 to 6 MeV range. The time response generated concurrently with the energy spectra was compared to the Monte Carlo calculation. From this comparison, and from examination of time spectra measured by other workers using {sup 235}U and {sup 237}Np fission detectors, it would appear that there are discrepancies in the ENDF/B-IV cross sections below 1 MeV. The predicted decay rates were too low below and too high above 0.8 MeV.

  1. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    Science.gov (United States)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
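
    A schematic of the staged propagation in Monte Carlo form, in the spirit of GUM Supplement 2 (all quantities, values, and the combined model below are invented for illustration; the real set-up involves more calibration steps and correlated quantities): each sub-model contributes samples from its own calibration result, and the draws are pushed through the combined model.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100000

        # Sub-model A: stiffness from one calibration step (value, standard unc.)
        k = rng.normal(1.50e3, 0.02e3, N)        # [N m / rad]

        # Sub-model B: moment of inertia from a separate calibration step
        J = rng.normal(2.00e-3, 0.05e-3, N)      # [kg m^2]

        # Combined model: resonance frequency of a torsional oscillator model
        f = np.sqrt(k / J) / (2 * np.pi)         # [Hz]

        print(f"f = {f.mean():.1f} Hz, u(f) = {f.std(ddof=1):.1f} Hz")
        # A coverage interval is read off the empirical distribution:
        lo, hi = np.percentile(f, [2.5, 97.5])
        print(f"95 % interval: [{lo:.1f}, {hi:.1f}] Hz")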

  2. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 44, pp. 59-88, Springer, Berlin, 2005; Moon et al., Stoch. Anal. Appl. 23(3):511-558, 2005; Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, Contemp. Math. 383, pp. 325-343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Szepessy et al., Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL{sup -3}) using a single-level version of the adaptive algorithm to O((TOL{sup -1} log(TOL)){sup 2}).
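
    A minimal, non-adaptive sketch of the underlying multilevel idea (uniform time steps and illustrative parameters, unlike the adaptive hierarchy developed in the paper): coarse and fine forward Euler paths of a geometric Brownian motion are coupled through shared Brownian increments, and the telescoping level corrections are summed.

        import numpy as np

        rng = np.random.default_rng(8)

        # E[X_T] for dX = mu X dt + sig X dW, estimated by multilevel Monte Carlo
        mu, sig, X0, T = 0.05, 0.2, 1.0, 1.0

        def euler_diff(level, n_samples):
            """Mean of P_fine - P_coarse over coupled paths at this level."""
            n_f = 2 ** level                 # fine steps; coarse has n_f / 2
            dt = T / n_f
            dW = np.sqrt(dt) * rng.standard_normal((n_samples, n_f))
            Xf = np.full(n_samples, X0)
            Xc = np.full(n_samples, X0)
            for i in range(n_f):             # fine path
                Xf += mu * Xf * dt + sig * Xf * dW[:, i]
            if level == 0:
                return Xf.mean()             # coarsest level: plain estimator
            for i in range(0, n_f, 2):       # coarse path: summed increments
                dWc = dW[:, i] + dW[:, i + 1]
                Xc += mu * Xc * (2 * dt) + sig * Xc * dWc
            return (Xf - Xc).mean()

        estimate = sum(euler_diff(l, 20000) for l in range(6))
        print("MLMC estimate:", round(estimate, 4),
              " exact:", round(X0 * np.exp(mu * T), 4))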

  3. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

  4. Kinetics of depletion interactions

    NARCIS (Netherlands)

    Vliegenthart, G.A.; Schoot, van der P.P.A.M.

    2003-01-01

    Depletion interactions between colloidal particles dispersed in a fluid medium are effective interactions induced by the presence of other types of colloid. They are not instantaneous but built up in time. We show by means of Brownian dynamics simulations that the static (mean-field) depletion force

  5. Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.

    Science.gov (United States)

    Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S

    2015-10-14

    The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.

  6. A strategy for selective detection based on interferent depleting and redox cycling using the plane-recessed microdisk array electrodes

    International Nuclear Information System (INIS)

    Zhu Feng; Yan Jiawei; Lu Miao; Zhou Yongliang; Yang Yang; Mao Bingwei

    2011-01-01

    Highlights: → A novel strategy based on a combination of interferent depletion and redox cycling is proposed for the plane-recessed microdisk array electrodes. → The strategy removes the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. → The electrodes enhance the current signal by redox cycling. → The electrodes can work regardless of the reversibility of the interfering species. - Abstract: The fabrication, characterization and application of the plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. Different from the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by the redox cycling between the plane electrode and the microdisk array. Based on the above working principle, the plane-recessed microdisk array electrodes remove the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode. The

  7. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  8. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Science.gov (United States)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  9. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    International Nuclear Information System (INIS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M.; Folini, D.; Popov, M. V.; Walder, R.

    2017-01-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  10. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M. [Astrophysics Group, University of Exeter, Exeter EX4 4QL (United Kingdom); Folini, D.; Popov, M. V.; Walder, R., E-mail: i.baraffe@ex.ac.uk [Ecole Normale Supérieure de Lyon, CRAL, UMR CNRS 5574, F-69364 Lyon Cedex 07 (France)

    2017-08-10

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  11. Deuterium - depleted water. Achievements and perspectives

    International Nuclear Information System (INIS)

    Titescu, Gh.; Stefanescu, I.; Saros-Rogobete, I.

    2001-01-01

    Deuterium-depleted water represents water that has an isotopic content lower than 145 ppm D/(D+H), which is the natural isotopic content of water. The research conducted at ICSI Ramnicu Valcea regarding deuterium-depleted water was completed by the following patents: - technique and installation for deuterium-depleted water production; - distilled water with low deuterium content; - technique and installation for the production of distilled water with low deuterium content; - mineralized water with low deuterium content and technique to produce it. The gold and silver medals won at international salons for inventions confirmed the novelty of these inventions. Knowing that the deuterium content of water has a big influence on living organisms, beginning in 1996 ICSI Ramnicu Valcea, the deuterium-depleted water producer, co-operated with specialized Romanian institutes to evaluate the biological effects of deuterium-depleted water. The role of natural deuterium in living organisms was examined by using deuterium-depleted water instead of natural water. These investigations led to the following conclusions: 1. deuterium-depleted water caused a tendency towards an increase of the basal tone, accompanied by the intensification of the vasoconstrictor effects of phenylephrine, noradrenaline and angiotensin; the increase of the basal tone and vascular reactivity produced by deuterium-depleted water persists after the removal of the vascular endothelium; 2. animals treated with deuterium-depleted water showed an increase of resistance both to sublethal and to lethal gamma radiation doses, suggesting a radioprotective action through the stimulation of non-specific immune defence mechanisms; 3. deuterium-depleted water stimulates immune defence reactions, represented by the opsonic, bactericidal and phagocytic capacity of the immune system, together with an increase in the numbers of polymorphonuclear neutrophils; 4. investigations regarding artificial

  12. Simulations and observations of plasma depletion, ion composition, and airglow emissions in two auroral ionospheric depletion experiments

    International Nuclear Information System (INIS)

    Yau, A.W.; Whalen, B.A.; Harris, F.R.; Gattinger, R.L.; Pongratz, M.B.; Bernhardt, P.A.

    1985-01-01

    In an ionospheric depletion experiment where chemically reactive vapors such as H2O and CO2 are injected into the O+-dominant F region to accelerate the plasma recombination rate and to reduce the plasma density, the ion composition in the depleted region is modified, and photometric emissions are produced. We compare in situ ion composition, density, and photometric measurements from two ionospheric depletion experiments with predictions from chemical modeling. The two injections, Waterhole I and III, were part of an auroral perturbation experiment and occurred in different ambient conditions. In both injections a core region of greater than fivefold plasma depletion was observed over a diameter of roughly 5 km within seconds of the injection, surrounded by an outer region of less drastic and slower depletion. In Waterhole I the plasma density was depleted tenfold over a 30-km diameter region after 2 min. The ambient O+ density was drastically reduced, and the molecular O2+ abundance was enhanced fivefold in the depletion region. OH airglow emission associated with the depletion was observed with a peak emission intensity of roughly 1 kR. In Waterhole III the ambient density was a decade lower, and the plasma depletion was less drastic, being twofold over 30 km after 2 min. The airglow emissions were also much less intense and below measurement sensitivity (30 R for the OH 306.4-nm emission; 50 R for the 630.0-nm emission)

  13. A Monte Carlo-based model for simulation of digital chest tomosynthesis

    International Nuclear Information System (INIS)

    Ullman, G.; Dance, D. R.; Sandborg, M.; Carlsson, G. A.; Svalkvist, A.; Baath, M.

    2010-01-01

    The aim of this work was to calculate synthetic digital chest tomosynthesis projections using a computer simulation model based on the Monte Carlo method. An anthropomorphic chest phantom was scanned in a computed tomography scanner, segmented and included in the computer model to allow for simulation of realistic high-resolution X-ray images. The input parameters to the model were adapted to correspond to the VolumeRAD chest tomosynthesis system from GE Healthcare. Sixty tomosynthesis projections were calculated with projection angles ranging from +15 to -15 deg. The images from primary photons were calculated using an analytical model of the anti-scatter grid and a pre-calculated detector response function. The contributions from scattered photons were calculated using an in-house Monte Carlo-based model employing a number of variance reduction techniques such as the collision density estimator. Tomographic section images were reconstructed by transferring the simulated projections into the VolumeRAD system. The reconstruction was performed for three types of images using: (i) noise-free primary projections, (ii) primary projections including contributions from scattered photons and (iii) projections as in (ii) with added correlated noise. The simulated section images were compared with corresponding section images from projections taken with the real, anthropomorphic phantom from which the digital voxel phantom was originally created. The present article describes a work in progress aiming towards developing a model intended for optimisation of chest tomosynthesis, allowing for simulation of both existing and future chest tomosynthesis systems. (authors)

  14. JMCT Monte Carlo simulation analysis of full core PWR Pin-By-Pin and shielding

    International Nuclear Information System (INIS)

    Deng, L.; Li, G.; Zhang, B.; Shangguan, D.; Ma, Y.; Hu, Z.; Fu, Y.; Li, R.; Hu, X.; Cheng, T.; Shi, D.

    2015-01-01

    This paper describes the application of the JMCT Monte Carlo code to the simulation of the Kord Smith Challenge H-M model, the BEAVRS model and the Chinese SG-III model. For the H-M model, 6.3624 million tally regions and 98.3 billion neutron histories were used. The detailed pin flux and energy deposition densities were obtained; 95% of the regions have less than 1% standard deviation. For the BEAVRS model, we first performed the neutron transport calculation of 398 axial planes in the Hot Zero Power (HZP) status. Almost the same results as the MC21 and OpenMC results were achieved. The detailed pin-power density distribution and standard deviation are shown. Then we performed the calculation of ten depletion steps in 30 axial plane cases; the depletion regions exceed 1.5 million, and 12,000 processors were used. Finally, the Chinese SG-III laser model was simulated, and the neutron and photon flux distributions are given, respectively. The results show that the JMCT code is well suited for extremely large reactor and shielding simulations. (author)

  15. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

    The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted, based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes, to verify various aspects of the developed code. In addition, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of the angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.

  16. The Toxicity of Depleted Uranium

    OpenAIRE

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...

  17. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
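
    As a sketch of the simplest of the sampling rules mentioned (Gaussian "normal sampling"), the replica generator below draws one standard normal per Hessian eigenvector direction; the paper's treatment of asymmetric uncertainties and positivity is not reproduced here, and the array layout is our assumption.

      import numpy as np

      # f0: central PDF on some (x, Q) grid, flattened to length ngrid.
      # f_plus/f_minus: (neig, ngrid) arrays of plus/minus eigenvector sets.
      def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000, seed=0):
          rng = np.random.default_rng(seed)
          r = rng.standard_normal((n_rep, f_plus.shape[0]))
          # f_k = f0 + (1/2) * sum_i r_ki * (f_i^+ - f_i^-)
          return f0 + 0.5 * np.einsum("ki,ij->kj", r, f_plus - f_minus)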

  18. Ozone Depletion Caused by Rocket Engine Emissions: A Fundamental Limit on the Scale and Viability of Space-Based Geoengineering Schemes

    Science.gov (United States)

    Ross, M. N.; Toohey, D.

    2008-12-01

    Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~ one kiloton of payloads are launched into earth orbit annually by the global space industry. Stratospheric ozone depletion from present-day launches is a small fraction of the ~ 4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order-of-magnitude increases in the amount of material launched into earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~ decades) climate control and the cost of short-term (~ years) deep ozone loss. (3) The trade can be properly evaluated only if our

  19. Depleted uranium management alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication into shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  20. Depleted uranium management alternatives

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication into shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  2. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
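
    A schematic of one stochastic-implicit-Euler step, as we read the scheme (function names are placeholders, not the authors' code): the flux used for depletion is the running average over the inner iterations, and every inner iteration re-depletes from the beginning-of-step composition.

      # n0: beginning-of-step nuclide field; transport() is the (noisy) Monte
      # Carlo flux solver; deplete() is the deterministic burnup solver.
      def sie_step(n0, dt, n_inner, transport, deplete):
          n_end = n0                            # initial guess for end of step
          phi_avg = None
          for j in range(1, n_inner + 1):
              phi_j = transport(n_end)          # MC flux for the current guess
              phi_avg = phi_j if phi_avg is None \
                  else phi_avg + (phi_j - phi_avg) / j   # running average
              n_end = deplete(n0, phi_avg, dt)  # implicit-Euler style update
          return n_end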

  3. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  4. Clinical considerations of Monte Carlo for electron radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Faddegon, Bruce; Balogh, Judith; Mackenzie, Robert; Scora, Daryl

    1998-01-01

    Technical requirements for Monte Carlo based electron radiotherapy treatment planning are outlined. The targeted overall accuracy for the estimate of the delivered dose is the less restrictive of 5% in dose or 5 mm in isodose position. A system based on EGS4 and capable of achieving this accuracy is described. Experience gained in system design and commissioning is summarized. The key obstacle to widespread clinical use of Monte Carlo is the lack of a clinically acceptable, measurement-based methodology for accurate commissioning

  5. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities in a large energy range for nuclear reaction calculations
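
    A generic Metropolis sampler of the kind named in the abstract, applied here to a toy occupation-number problem of our own construction (not the paper's shell-model spaces):

      import numpy as np

      # Metropolis walk over particle-hole moves with weight exp(-E/T).
      rng = np.random.default_rng(1)
      n_sp, n_part, T = 20, 6, 2.0
      eps = np.arange(n_sp, dtype=float)       # toy single-particle energies
      occ = np.zeros(n_sp, bool)
      occ[:n_part] = True

      e = eps[occ].sum()
      energies = []
      for step in range(50000):
          i = rng.choice(np.flatnonzero(occ))  # move a particle i -> j
          j = rng.choice(np.flatnonzero(~occ))
          de = eps[j] - eps[i]
          if de <= 0 or rng.random() < np.exp(-de / T):   # acceptance rule
              occ[i], occ[j] = False, True
              e += de
          energies.append(e)
      print(np.mean(energies))                 # thermal-average toy observable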

  6. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  7. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm, as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.
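
    A sketch of the geometry-normalization step described above, using standard OpenCV calls; the corner coordinates, tray dimensions and file name are placeholders, and the corner-detection, blob-detection and Monte Carlo stages are omitted.

      import cv2
      import numpy as np

      # Four detected cutout corners in photo pixels (placeholder values).
      corners_px = np.float32([[102, 88], [940, 75], [955, 870], [90, 880]])
      side_mm, px_per_mm = 100.0, 4.0          # assumed insert size and scale
      n = int(side_mm * px_per_mm)
      target = np.float32([[0, 0], [n, 0], [n, n], [0, n]])

      img = cv2.imread("cutout_photo.jpg", cv2.IMREAD_GRAYSCALE)
      M = cv2.getPerspectiveTransform(corners_px, target)
      rect = cv2.warpPerspective(img, M, (n, n))   # perspective/scale corrected
      # Otsu threshold: bright pixels -> open aperture, dark -> cutout material.
      _, aperture = cv2.threshold(rect, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)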

  8. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out by using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized in the following way. Section (1) describes the introduction of basic Monte Carlo and its classification towards the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) provides the statistical uncertainty of Monte Carlo estimates, and Section (4) describes in brief the importance of variance reduction techniques while sampling particles such as photons or neutrons in the process of radiation transport
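
    The heart of Section (3) in such notes is the standard-error estimate for a tally: for N independent scores the uncertainty of the sample mean falls as 1/sqrt(N). A minimal illustration:

      import numpy as np

      x = np.random.exponential(scale=1.0, size=100_000)  # any tally scores
      mean = x.mean()
      stderr = x.std(ddof=1) / np.sqrt(x.size)            # one-sigma estimate
      print(f"estimate = {mean:.4f} +/- {stderr:.4f}")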

  9. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
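
    The linearization referred to above rests on the one-field Hubbard-Stratonovich identity, quoted here in its standard shell-model Monte Carlo form rather than transcribed from this report (Δβ is the imaginary-time slice and λ the strength of the quadratic term):

      e^{-\frac{1}{2}\,\Delta\beta\,\lambda\,\hat{O}^{2}}
        = \sqrt{\frac{\Delta\beta\,|\lambda|}{2\pi}}
          \int_{-\infty}^{\infty} d\sigma\,
          e^{-\frac{1}{2}\,\Delta\beta\,|\lambda|\,\sigma^{2}}\,
          e^{-\Delta\beta\, s\,\lambda\,\sigma\,\hat{O}},
        \qquad
        s = \begin{cases} 1, & \lambda < 0, \\ i, & \lambda > 0, \end{cases}

    so that the two-body propagator becomes a Gaussian average of one-body propagators over the auxiliary field σ, which the Monte Carlo then samples.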

  10. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  11. Deuterium-depleted water. Romanian achievements and perspective

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Saros-Rogobete, Irina; Titescu, Gheorghe

    2001-01-01

    Deuterium-depleted water has an isotopic content smaller than 145 ppm D/(D+H), which is the natural isotopic content of water. Beginning in 1996, ICSI Rm. Valcea, a deuterium-depleted water producer, co-operated with specialized Romanian institutes to evaluate the biological effects of deuterium-depleted water. These investigations led to the following conclusions: - Deuterium-depleted water caused a tendency towards an increase of the basal tonus, accompanied by the intensification of the vasoconstrictor effects of phenylephrine, noradrenaline and angiotensin; the increase of the basal tonus and vascular reactivity produced by deuterium-depleted water persist after the removal of the vascular endothelium; - Animals treated with deuterium-depleted water showed an increase of resistance both to sublethal and to lethal gamma radiation doses, suggesting a radioprotective action; - Deuterium-depleted water stimulates immune defence reactions and increases the numbers of polymorphonuclear neutrophils; - Investigations regarding the artificial reproduction of fish with deuterium-depleted-water fecundated solutions confirmed a favourable influence in the embryo growth stage and on resistance in subsequent growth stages; - Germination, growth and the variability of quantitative characters in plants were studied; one can remark the favourable influence of deuterium-depleted water on biological processes in plants in various ontogenetic stages; - Deuterium depletion in seawater produces a diminution of the water spectral energy related to an increased metabolism of Tetraselmis suecica. (authors)

  12. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.

  13. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wie Jie; Loong, C.-K.

    2011-01-01

    Since the launching of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy-selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information by the five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
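
    A minimal version of the five-step phase-stepping retrieval mentioned above: each detector pixel's intensity versus grating position traces a sinusoid, I_k ≈ a + b cos(2πk/N + φ), so the phase image is the argument of the first discrete Fourier component across the N steps.

      import numpy as np

      def retrieve_phase(stack):
          """stack: (N, ny, nx) intensities for N equidistant grating steps."""
          c1 = np.fft.fft(stack, axis=0)[1]    # first Fourier component per pixel
          return np.angle(c1)                  # phase-stepping phase image

      stack = np.random.rand(5, 64, 64)        # placeholder data, N = 5 steps
      phase_image = retrieve_phase(stack)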

  14. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)

    2017-04-15

    General-purpose radiation transport Monte Carlo codes have been used for the estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)

  15. Fission yield calculation using toy model based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Jubaidah; Kurniadi, Rizal

    2015-01-01

    The toy model is a new approximation for predicting fission yield distributions. The toy model assumes the nucleus to be an elastic toy consisting of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments. These two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (Rc), the mean of the left curve (μL) and the mean of the right curve (μR), and the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly move the average frequency of asymmetric fission yields. This also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
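
    A toy-model-style sampling of the two-Gaussian yield curve, as we read the abstract; the parameter values below are our own, chosen only to resemble low-energy actinide fission.

      import numpy as np

      rng = np.random.default_rng(0)
      A, mu_L, mu_R, sigma_L, sigma_R = 236, 96, 140, 5.5, 5.5

      def sample_fragment_masses(n):
          left = rng.normal(mu_L, sigma_L, n // 2)
          right = rng.normal(mu_R, sigma_R, n - n // 2)
          return np.clip(np.concatenate([left, right]).round(), 1, A - 1)

      masses = sample_fragment_masses(200_000)
      hist, edges = np.histogram(masses, bins=np.arange(60, 181))
      yields = 200.0 * hist / masses.size      # yields conventionally sum to 200%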

  16. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model assumes the nucleus to be an elastic toy consisting of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments. These two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (Rc), the mean of the left curve (μL) and the mean of the right curve (μR), and the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly move the average frequency of asymmetric fission yields. This also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90

  17. Depletion and capture: revisiting "the source of water derived from wells".

    Science.gov (United States)

    Konikow, L F; Leake, S A

    2014-09-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  18. The depletion of aqueous nitrous acid in packed towers

    International Nuclear Information System (INIS)

    Counce, R.M.; Crawford, D.B.

    1987-01-01

    The depletion of aqueous nitrous acid was studied at 298 K and at slightly greater than atmospheric pressure. Solutions containing nitrous and nitric acids were contacted with nitrogen in towers packed with 6- and 13-mm Intalox saddles. The results indicate the existence of two depletion mechanisms for the conditions studied - liquid-phase decomposition and direct desorption of nitrous acid. Models based on mass-transfer and chemical-kinetic information are presented to explain the experimental results. 24 refs., 8 figs., 3 tabs

  19. Comparative evaluation of seven commercial products for human serum enrichment/depletion by shotgun proteomics.

    Science.gov (United States)

    Pisanu, Salvatore; Biosa, Grazia; Carcangiu, Laura; Uzzau, Sergio; Pagnozzi, Daniela

    2018-08-01

    Seven commercial products for human serum depletion/enrichment were tested and compared by shotgun proteomics. The methods are based on four different capturing agents: antibodies (Qproteome Albumin/IgG Depletion kit, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, Top 2 Abundant Protein Depletion Spin Columns, and Top 12 Abundant Protein Depletion Spin Columns), specific ligands (Albumin/IgG Removal), a mixture of antibodies and ligands (Albumin and IgG Depletion SpinTrap), and combinatorial peptide ligand libraries (ProteoMiner beads). All procedures, to a greater or lesser extent, allowed an increase in the number of identified proteins. ProteoMiner beads provided the highest number of proteins; Albumin and IgG Depletion SpinTrap and ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit were the most efficient at albumin removal; Top 2 and Top 12 Abundant Protein Depletion Spin Columns decreased the overall immunoglobulin levels more than the other procedures, whereas gamma immunoglobulins specifically were mostly removed by Albumin and IgG Depletion SpinTrap, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, and Top 2 Abundant Protein Depletion Spin Columns. Albumin/IgG Removal, a resin bound to a mixture of protein A and Cibacron Blue, behaved less efficiently than the other products. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Health and environmental impact of depleted uranium

    International Nuclear Information System (INIS)

    Furitsu, Katsumi

    2010-01-01

    Depleted Uranium (DU) is 'nuclear waste' produced from the enrichment process; it is mostly made up of 238U and is depleted in the fissionable isotope 235U compared to natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium. Depleted uranium and natural uranium are identical in terms of chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in such an aerosol and absorb them mainly through the lungs. Depleted uranium exhibits both radiological toxicity and chemical toxicity, and a possible synergistic effect of the two kinds of toxicity has also been pointed out. Animal and cellular studies have reported carcinogenic, neurotoxic, immunotoxic and other effects of depleted uranium, including damage to the reproductive system and the foetus. In addition, health effects of micro/nano-particles, similar in size to the depleted uranium aerosols produced by uranium weapons, have been reported. Aerosolized DU dust can easily spread over the battlefield and into civilian areas, sometimes even crossing international borders. Therefore, not only military personnel but also civilians can be exposed. The contamination continues after the cessation of hostilities. Taking these aspects into account, the DU weapon is illegal under international humanitarian law and is considered one of the inhumane weapons of 'indiscriminate destruction'. The international society is now discussing the prohibition of DU weapons based on the 'precautionary principle'. The 1991 Gulf War is reportedly the first

  1. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)

    2016-01-07

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  2. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901, USA; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Bhaskaran-Nair, Kiran [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Jarrell, Mark [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA

    2016-01-07

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  3. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    International Nuclear Information System (INIS)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-01

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem

  4. Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Diez, C.J.; Cabellos, O.; Martinez, J.S.

    2011-01-01

    Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, together with their uncertainties, must be known to handle spent fuel in present fuel cycles (e.g. the high-burnup fuel programme) and in new fuel cycle designs (e.g. fast breeder reactors and ADS). For this task, different error propagation techniques are available, both deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. The propagation of decay data, fission yield and cross-section uncertainties was also performed, but only isotopic composition was calculated as the response magnitude. Following the previous technique, the nuclear data uncertainties are here taken into account and propagated to the response magnitudes decay heat and radiotoxicity, assessed as functions of cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the technique, using decay data and fission yield uncertainties, and the results are compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data uncertainties (activation cross-sections, decay data and fission yields) on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have the highest uncertainties, an assessment of which uncertainties are most relevant is performed.
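
    A minimal sketch of this propagation loop follows, with a hypothetical `run_depletion` placeholder standing in for the actual depletion/decay solver and arbitrary illustrative data:

```python
# Skeleton of the Monte-Carlo propagation technique: sample the nuclear
# data from their covariance information, rerun the depletion calculation
# per sample, and take the spread of the response as its uncertainty.
# `run_depletion` is a hypothetical stand-in for the actual solver.
import numpy as np

rng = np.random.default_rng(42)

def run_depletion(xs, cooling_time):
    # Placeholder response model; a real solver would integrate the
    # Bateman equations with the perturbed data.
    return xs.sum() * np.exp(-0.01 * cooling_time)

mean_xs = np.array([1.0, 2.5, 0.3])        # nominal cross sections (arbitrary)
cov = np.diag((0.05 * mean_xs) ** 2)       # assumed 5% relative uncertainties

samples = [run_depletion(rng.multivariate_normal(mean_xs, cov), 100.0)
           for _ in range(1000)]
print(f"decay heat: {np.mean(samples):.4f} +/- {np.std(samples, ddof=1):.4f}")
```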

  5. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)

  6. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well-established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between 4 and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly because theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as -2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)
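
    The calibration step amounts to a least-squares fit of an analytical range formula to pooled range data. A hedged sketch follows; both the two-parameter model and the water-range data points are illustrative only, not the publication's theoretically based formula or measured ranges:

```python
# Hedged sketch: fit a two-parameter range model to pooled range data.
# Both the functional form and the data points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def practical_range(E, a, b):
    # linear term plus a small curvature term
    return a * E + b * E**2

E = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # MeV
Rp = np.array([4.98, 9.85, 14.6, 19.2, 23.7])    # cm of water (illustrative)

params, _ = curve_fit(practical_range, E, Rp)
rms = np.sqrt(np.mean((Rp - practical_range(E, *params)) ** 2))
print("a, b =", params, " rms residual =", rms, "cm")
```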

  7. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

    The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances over current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. Second, DPT was extended to the constrained equilibrium cycle, which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations, and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation.
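
    The efficiency claim rests on the standard adjoint identity: one forward and one adjoint solve yield the sensitivity of a response to every parameter. A pedagogical sketch for a two-nuclide decay chain (not the thesis' actual DPT formulation), checked against a finite difference:

```python
# Adjoint-sensitivity sketch for the chain N1 -> N2 -> (loss), with
# response R = N2(T). One forward and one adjoint solve give
# dR/d(lambda1); a central finite difference checks the result.
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2, T = 0.3, 0.1, 5.0

def A(l1):
    return np.array([[-l1, 0.0], [l1, -lam2]])

dA_dl1 = np.array([[-1.0, 0.0], [1.0, 0.0]])  # derivative of A w.r.t. lambda1
t = np.linspace(0.0, T, 401)

# forward: N' = A N, N(0) = (1, 0)
fwd = solve_ivp(lambda _, N: A(lam1) @ N, (0.0, T), [1.0, 0.0],
                t_eval=t, rtol=1e-10, atol=1e-12)

# adjoint, integrated backwards in time: a' = -A^T a, a(T) = dR/dN(T) = (0, 1)
adj = solve_ivp(lambda _, a: -A(lam1).T @ a, (T, 0.0), [0.0, 1.0],
                t_eval=t[::-1], rtol=1e-10, atol=1e-12)
a = adj.y[:, ::-1]                            # reorder to increasing time

# dR/d(lambda1) = integral over time of a^T (dA/dlambda1) N
g = np.einsum('it,ij,jt->t', a, dA_dl1, fwd.y)
dt = t[1] - t[0]
sens_adjoint = dt * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])

def response(l1):                             # brute-force check
    sol = solve_ivp(lambda _, N: A(l1) @ N, (0.0, T), [1.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

h = 1e-6
sens_fd = (response(lam1 + h) - response(lam1 - h)) / (2 * h)
print(f"adjoint: {sens_adjoint:.6f}   finite difference: {sens_fd:.6f}")
```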

  8. Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors

    International Nuclear Information System (INIS)

    Zamenhof, R.G.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Clement, S.D.; Harling, O.K.; Yanch, J.C.

    1990-01-01

    Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one- and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated.

  9. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed for building specialized applications and for providing new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies, motivated by the goal of reducing costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analyses are discussed, and the results of a dynamic test problem are given.

  10. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    Science.gov (United States)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, Monte Carlo based methods yield the highest accuracy. In this work, we have attempted to generate a lookup table from Monte Carlo simulation of fluorescence emission by protein and, furthermore, fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.
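
    A hedged sketch of the lookup-table-plus-empirical-fit idea, with a toy exponential attenuation standing in for the Monte Carlo-generated table and an assumed form for the empirical relation:

```python
# Toy version of the lookup-table idea: tabulate how turbidity attenuates
# fluorescence over a grid of absorption coefficients (an exponential toy
# model stands in for the Monte Carlo simulation), then fit an assumed
# empirical relation that can be inverted in real time.
import numpy as np
from scipy.optimize import curve_fit

mu_a = np.linspace(0.1, 5.0, 25)               # absorption coefficients (1/cm)
table = np.exp(-0.8 * mu_a)                    # measured/intrinsic ratio (toy)

def empirical(mu, a, b):
    return a * np.exp(-b * mu)                 # assumed empirical form

(a, b), _ = curve_fit(empirical, mu_a, table)

F_measured, mu = 0.12, 2.0                     # a measurement at known mu_a
F_intrinsic = F_measured / empirical(mu, a, b)
print(f"fit a={a:.3f}, b={b:.3f}; intrinsic fluorescence ~ {F_intrinsic:.3f}")
```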

  11. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  12. DOUBLE-SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    International Nuclear Information System (INIS)

    OGDEN DM; KIRCH NW

    2007-01-01

    This document develops a supernatant hydroxide ion depletion model for carbon dioxide absorption based on mechanistic principles. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.

  13. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  14. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    International Nuclear Information System (INIS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-01-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high-dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual-resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, containing a 1.2-cm-thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single-resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular

  15. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    Energy Technology Data Exchange (ETDEWEB)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    2017-07-01

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
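
    A sketch of the low-weight merging bookkeeping under a constant-number scheme follows. It assumes, purely as an illustration (the published scheme may differ in detail), that the two lowest-weight particles are merged so as to conserve total weight and mass, freeing a slot for the nucleated particle:

```python
# Bookkeeping sketch of low-weight merging for weighted Monte Carlo
# particles: merge the two lowest-weight particles, conserving total
# weight (number) and mass, and reuse the freed slot for a new nucleus.
import numpy as np

def merge_lowest_weight(weights, volumes):
    """Merge the two lowest-weight particles in place; return the freed slot."""
    i, j = np.argsort(weights)[:2]
    w_new = weights[i] + weights[j]
    # conserve mass: w_i v_i + w_j v_j = w_new v_new
    v_new = (weights[i] * volumes[i] + weights[j] * volumes[j]) / w_new
    weights[i], volumes[i] = w_new, v_new
    return j

rng = np.random.default_rng(1)
weights = rng.uniform(0.5, 2.0, size=8)   # statistical weights
volumes = rng.uniform(1.0, 3.0, size=8)   # particle volumes
mass_before = np.sum(weights * volumes)

free = merge_lowest_weight(weights, volumes)
weights[free], volumes[free] = 1.0, 0.1   # insert a fresh nucleated particle
mass_after = np.sum(weights * volumes)
print("slot", free, "reused; merge conserved mass:",
      np.isclose(mass_before + 1.0 * 0.1, mass_after))
```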

  16. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Starting from the working principle of the digital calibration instrument for binocular photoelectric instrument optical axis parallelism, and considering all components of the instrument, the various factors affecting system precision are analyzed and a precision analysis model is established. Based on the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circle-target image. The method can further guide the error budgeting, optimize control of the factors which have the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
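
    A minimal sketch of such a Monte Carlo precision analysis, with a deliberately simplified, hypothetical measurement model and assumed distributions for three component error sources:

```python
# Minimal Monte Carlo precision analysis: sample each error source from an
# assumed distribution, push it through a simplified (hypothetical)
# measurement model to the circle-target centre coordinate, and read the
# comprehensive error off the spread of the result.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

focal_len = 300.0                              # mm, assumed
tilt = rng.normal(0.0, 2e-5, N)                # component tilt (rad)
centroid_noise = rng.normal(0.0, 0.5e-3, N)    # image centroiding error (mm)
mount_play = rng.uniform(-1e-3, 1e-3, N)       # mechanical play (mm)

center_x = focal_len * tilt + centroid_noise + mount_play
print(f"comprehensive error (1 sigma): {center_x.std():.2e} mm")
```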

  17. Monte Carlo simulation based reliability evaluation in a multi-bilateral contracts market

    International Nuclear Information System (INIS)

    Goel, L.; Viswanath, P.A.; Wang, P.

    2004-01-01

    This paper presents a time-sequential Monte Carlo simulation technique to evaluate customer load point reliability in a multi-bilateral contracts market. The effects of bilateral transactions, reserve agreements, and the priority commitments of generating companies on customer load point reliability have been investigated. A generating company with bilateral contracts is modelled as an equivalent time-varying multi-state generation (ETMG) unit. A procedure to determine load point reliability based on the ETMG has been developed. The developed procedure is applied to a reliability test system to illustrate the technique. Representing each bilateral contract by an ETMG provides flexibility in determining the reliability at various customer load points. (authors)
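
    The time-sequential element can be illustrated by sampling a single generating unit through alternating up/down states with exponential durations; the rates and the one-unit model below are illustrative assumptions, not the paper's ETMG construction:

```python
# Illustrative time-sequential sampling of one generating unit: alternate
# up/down states with exponentially distributed durations over one year.
import numpy as np

rng = np.random.default_rng(3)

def sample_history(mttf_h=1000.0, mttr_h=50.0, horizon_h=8760.0):
    t, up, segments = 0.0, True, []
    while t < horizon_h:
        dur = rng.exponential(mttf_h if up else mttr_h)
        segments.append((t, min(t + dur, horizon_h), up))
        t, up = t + dur, not up
    return segments

hist = sample_history()
availability = sum(b - a for a, b, up in hist if up) / 8760.0
print(f"simulated one-year availability: {availability:.3f}")
```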

  18. Depletion and capture: revisiting “The source of water derived from wells"

    Science.gov (United States)

    Konikow, Leonard F.; Leake, Stanley A.

    2014-01-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems.

  19. Depletion of elements in shock-driven gas

    International Nuclear Information System (INIS)

    Gondhalekar, P.M.

    1985-01-01

    The depletion of elements in shocked gas in supernova remnants and in interstellar bubbles is examined. It is shown that elements are depleted in varying degrees in gas filaments shocked to velocities up to 200 km s⁻¹ and that large differences in depletions are observed in gas filaments shocked to similar velocities. In the shocked gas the depletion of an element appears to be correlated with the electron density (or the neutral gas density) in the filaments. This correlation, if confirmed, is similar to the correlation between depletion and mean density of gas in the clouds in interstellar space. (author)

  20. Standardization of formulations for the acute amino acid depletion and loading tests.

    Science.gov (United States)

    Badawy, Abdulla A-B; Dougherty, Donald M

    2015-04-01

    The acute tryptophan depletion and loading and the acute tyrosine plus phenylalanine depletion tests are powerful tools for studying the roles of cerebral monoamines in behaviour and symptoms related to various disorders. The tests use either amino acid mixtures or proteins. Current amino acid mixtures lack specificity in humans, but not in rodents, because of the faster disposal of branched-chain amino acids (BCAAs) by the latter. The high content of BCAAs (30-60%) is responsible for the poor specificity in humans and we recommend, in a 50 g dose, a control formulation with a lowered BCAA content (18%) as a common control for the above tests. With protein-based formulations, α-lactalbumin is specific for acute tryptophan loading, whereas gelatine is only partially effective for acute tryptophan depletion. We recommend the use of the whey protein fraction glycomacropeptide as an alternative protein. Its BCAA content is ideal for specificity, and the absence of tryptophan, tyrosine and phenylalanine renders it suitable as a template for seven formulations (separate and combined depletion or loading and a truly balanced control). We invite the research community to participate in standardization of the depletion and loading methodologies by using our recommended amino acid formulation and developing those based on glycomacropeptide. © The Author(s) 2015.

  1. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (ANN - artificial neural networks, CBR - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific and industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  2. A validation report for the KALIMER core design computing system by the Monte Carlo transport theory code

    International Nuclear Information System (INIS)

    Lee, Ki Bog; Kim, Yeong Il; Kim, Kang Seok; Kim, Sang Ji; Kim, Young Gyun; Song, Hoon; Lee, Dong Uk; Lee, Byoung Oon; Jang, Jin Wook; Lim, Hyun Jin; Kim, Hak Sung

    2004-05-01

    In this report, the results of the KALIMER (Korea Advanced LIquid MEtal Reactor) core design calculated by the K-CORE computing system are compared with and analyzed against those of an MCDEP calculation. The effective multiplication factor, flux distribution, fission power distribution and the number densities of the important nuclides affected by the depletion calculation are compared for the R-Z model and Hex-Z model of the KALIMER core. It is confirmed that the results of the K-CORE system agree with those of MCDEP, which is based on the Monte Carlo transport theory method, to within 700 pcm for the effective multiplication factor and to within 2% in the driver fuel region and 10% in the radial blanket region for the reaction rate and the fission power density. Thus, the K-CORE system, treating the lumped fission products and the most important nuclides, can be used as a core design tool for KALIMER while keeping the necessary accuracy.

  3. CERN honours Carlo Rubbia

    CERN Document Server

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. [Photo caption: Carlo Rubbia, 4th from right, together with the speakers at the symposium.] On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer, in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  4. Suppression of the initial transient in Monte Carlo criticality simulations

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can strongly bias the k-effective estimation, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to assess the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies in these tests are selected and used to improve industrial Monte Carlo criticality calculations. (author)
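
    A hedged sketch of the underlying idea: form a Brownian-bridge-like statistic from the cumulative deviations of the cycle k-effective sequence, and discard leading cycles until it looks compatible with stationarity. The 1.36 threshold below is a generic Kolmogorov-Smirnov-style value, not one of the calibrated tests developed in the thesis:

```python
# Scan discard points m; for each, form a bridge-like statistic of the
# cumulative deviations of the remaining k-effective values and accept
# the first m for which it is compatible with stationarity.
import numpy as np

def first_stationary_cycle(keff, threshold=1.36):
    keff = np.asarray(keff, dtype=float)
    for m in range(len(keff) - 20):
        x = keff[m:]
        bridge = np.cumsum(x - x.mean()) / (x.std(ddof=1) * np.sqrt(len(x)))
        if np.abs(bridge).max() < threshold:
            return m
    return None

rng = np.random.default_rng(5)
cycles = np.arange(300)
# synthetic sequence: decaying initial transient plus statistical noise
keff = 1.0 + 0.02 * np.exp(-cycles / 30.0) + rng.normal(0.0, 0.002, 300)
print("discard the first", first_stationary_cycle(keff), "cycles")
```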

  5. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  6. SELF-ABSORPTION CORRECTIONS BASED ON MONTE CARLO SIMULATIONS

    Directory of Open Access Journals (Sweden)

    Kamila Johnová

    2016-12-01

    The main aim of this article is to demonstrate how Monte Carlo simulations are implemented in our gamma spectrometry laboratory at the Department of Dosimetry and Application of Ionizing Radiation in order to calculate the self-absorption within samples. A model of a real HPGe detector created for MCNP simulations is presented in this paper. All of the parameters which may influence the self-absorption are first discussed theoretically and then described using the calculated results.

  7. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  8. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  9. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  10. Are relative depletions altered inside diffuse clouds?

    International Nuclear Information System (INIS)

    Joseph, C.L.

    1988-01-01

    The data of Jenkins, Savage, and Spitzer (1986) were used to analyze interstellar abundances and depletions of Fe, P, Mg, and Mn toward 37 stars, spanning nearly 1.0 (dex) in mean line-of-sight depletion. It was found that the depletions of these elements are linearly correlated and do not show evidence of differences in the rates of depletion or sputtering from one element to another. For a given level of overall depletion, the sightline-to-sightline rms variance in the depletion for each of these elements was less than 0.16 (dex), which is significantly smaller than is the element-to-element variance. The results suggest that, for most diffuse lines of sight, the relative abundances of these elements are set early in the lifetime of the grains and are not altered significantly thereafter. 53 references

  11. Interstellar depletion anomalies and ionization potentials

    International Nuclear Information System (INIS)

    Tabak, R.G.

    1979-01-01

    Satellite observations indicate that (1) most elements are depleted from the gas phase when compared to cosmic abundances, (2) some elements are several orders of magnitude more depleted than others, and (3) these depletions vary from cloud to cloud. Since the most likely possibility is that the 'missing' atoms are locked into grains, depletions occur either by accretion onto core particles in interstellar clouds or earlier, during the period of primary grain formation. If the latter mechanism is dominant, then the most important depletion parameter is the condensation temperature of the elements and their various compounds. However, this alone is not sufficient to explain all the observed anomalies. It is shown that electrostatic effects - under a wide variety of conditions- can enormously enhance the capture cross-section of the grain. It is suggested that this mechanism can also account for such anomalies as the apparent 'overabundance' of the alkali metals in the gas phase. (orig.)

  12. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have previously been evaluated, without success, for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. First, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
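
    The reconstruction itself is the standard MLEM iteration, with the Monte Carlo-calculated projection matrix taking the place of an analytical system matrix. A self-contained sketch with a toy random matrix standing in for the simulated one:

```python
# Standard MLEM iteration for y ~ Poisson(A x); a toy random matrix A
# stands in for the Monte Carlo-calculated projection matrix.
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)                  # A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)       # measured / expected counts
        x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(64, 16))         # toy projection matrix
x_true = rng.uniform(0.0, 10.0, size=16)
y = rng.poisson(A @ x_true).astype(float)

x_rec = mlem(A, y)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```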

  13. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    Energy Technology Data Exchange (ETDEWEB)

    Piao, J [PLA General Hospital, Beijing (China); PLA 302 Hospital, Beijing (China); Xu, S [PLA General Hospital, Beijing (China); Tsinghua University, Beijing (China); Wu, Z; Liu, Y [Tsinghua University, Beijing (China); Li, Y [Beihang University, Beijing (China); Qu, B [PLA General Hospital, Beijing (China); Duan, X [PLA 302 Hospital, Beijing (China)

    2016-06-15

    Purpose: This study uses Monte Carlo simulation of the CyberKnife system and aims to develop a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated with the Ray-tracing and Monte Carlo algorithms of the TPS (MultiPlan Ver. 4.0.2) and with the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). The γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM. A relatively ideal Monte Carlo beam model had thus been established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the passing rate of the γ criteria (51.263±38.964%) was not good. It is thus feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party tool, which plays an important role in the dose verification of patient plans. This work was supported in part by the grant
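
    A sketch of the γ analysis used for the comparison, here reduced to one dimension with global 3%/3 mm criteria (the clinical evaluation is performed on 3D dose grids):

```python
# One-dimensional gamma analysis with global 3%/3 mm criteria: for each
# reference point take the minimum generalized distance over all evaluated
# points; the passing rate is the fraction of points with gamma <= 1.
import numpy as np

def gamma_passing_rate(pos_mm, dose_ref, dose_eval, dta=3.0, dd=0.03):
    norm = dd * dose_ref.max()                  # global dose normalization
    gammas = np.empty(len(pos_mm))
    for i, (r, d) in enumerate(zip(pos_mm, dose_ref)):
        dist2 = ((pos_mm - r) / dta) ** 2
        dose2 = ((dose_eval - d) / norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return np.mean(gammas <= 1.0)

x = np.linspace(-50.0, 50.0, 201)               # mm
ref = np.exp(-x**2 / (2 * 15.0**2))             # toy reference profile
ev = 1.02 * np.exp(-(x - 1.0)**2 / (2 * 15.0**2))  # shifted, rescaled copy
print(f"gamma passing rate: {100 * gamma_passing_rate(x, ref, ev):.1f}%")
```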

  14. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    International Nuclear Information System (INIS)

    Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy

    2015-01-01

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves

  15. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Thilo; Jäger, Christof M. [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Jordan, Meredith J. T. [School of Chemistry, University of Sydney, Sydney, NSW 2006 (Australia); Clark, Timothy, E-mail: tim.clark@fau.de [Department of Chemistry and Pharmacy, Computer-Chemistry-Center and Interdisciplinary Center for Molecular Materials, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nägelsbachstrasse 25, 91052 Erlangen (Germany); Centre for Molecular Design, University of Portsmouth, Portsmouth PO1 2DY (United Kingdom)

    2015-07-28

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.

  16. A Monte Carlo based development of a cavity theory for solid state detectors irradiated in electron beams

    International Nuclear Information System (INIS)

    Mobit, P.

    2002-01-01

    Recent Monte Carlo simulations have shown that the assumption in the small cavity theory (and the extension of the small cavity theory by Spencer-Attix) that the cavity does not perturb the electron fluence is seriously flawed. For depths beyond d_max, not only is there a significant difference between the energy spectra in the medium and in the solid cavity materials, but there is also a significant difference in the number of low-energy electrons which cannot travel across the solid cavity and hence deposit their dose in it (i.e. stopper electrons whose residual range is less than the cavity thickness). The number of these low-energy electrons that are not able to travel across the solid state cavity increases with depth and with the effective thickness of the detector. This also invalidates the assumption in the small cavity theory that most of the dose deposited in a small cavity is delivered by crossers. Based on Monte Carlo simulations, a new cavity theory for solid state detectors irradiated in electron beams has been proposed as: D_med(p) = D_det(p) × s^{S-A}_{med,det} × γ_e(p) × S_T, where D_med(p) is the dose to the medium at point p, D_det(p) is the average detector dose at the same point, s^{S-A}_{med,det} is the Spencer-Attix mass collision stopping power ratio of the medium to the detector material, γ_e(p) is the electron fluence perturbation correction factor and S_T is a stopper-to-crosser correction factor that corrects for the dependence of the stopper-to-crosser ratio on depth and effective cavity size. Monte Carlo simulations have been computed for all the terms in this equation. The new cavity theory has been tested against the Spencer-Attix cavity equation as the small cavity limiting case and also against Monte Carlo simulations. The agreement between this new cavity theory and Monte Carlo simulations is within 0.3%. (author)
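
    Evaluating the proposed relation is then a pointwise product of the detector dose with the three factors; the numerical values below are placeholders, not data from the paper:

```python
# Pointwise evaluation of the proposed cavity relation; the factor values
# are placeholders, not data from the paper.
def dose_to_medium(D_det, s_med_det, gamma_e, S_T):
    """D_med(p) = D_det(p) * s^{S-A}_{med,det} * gamma_e(p) * S_T"""
    return D_det * s_med_det * gamma_e * S_T

D_det = 1.000  # Gy, average detector dose (placeholder)
print(dose_to_medium(D_det, s_med_det=1.032, gamma_e=0.994, S_T=1.006), "Gy")
```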

  17. The Chemistry and Toxicology of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Sidney A. Katz

    2014-03-01

    Natural uranium is comprised of three radioactive isotopes: 238U, 235U, and 234U. Depleted uranium (DU) is a byproduct of the processes for the enrichment of the naturally occurring 235U isotope. The worldwide stockpile contains some 1½ million tons of depleted uranium. Some of it has been used to dilute weapons-grade uranium (~90% 235U) down to reactor-grade uranium (~5% 235U), and some of it has been used for heavy tank armor and for the fabrication of armor-piercing bullets and missiles. Such weapons were used by the military in the Persian Gulf, the Balkans and elsewhere. The testing of depleted uranium weapons and their use in combat have resulted in environmental contamination and human exposure. Although the chemical and toxicological behaviors of depleted uranium are essentially the same as those of natural uranium, the respective chemical forms and isotopic compositions in which they usually occur are different. The chemical and radiological toxicity of depleted uranium can injure biological systems. Normal functioning of the kidney, liver, lung, and heart can be adversely affected by depleted uranium intoxication. The focus of this review is on the chemical and toxicological properties of depleted and natural uranium and some of the possible consequences of long-term, low-dose exposure to depleted uranium in the environment.

  18. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans

    2015-01-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
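
    The CSAF derivation step reduces to percentile ratios over the Monte Carlo sample of predicted metabolite formation; a sketch with a lognormal placeholder standing in for the PBK-model output distribution:

```python
# CSAF as a percentile ratio over the Monte Carlo sample of predicted
# 1'-sulfooxymethyleugenol formation; a lognormal is a placeholder for
# the PBK-model output distribution.
import numpy as np

rng = np.random.default_rng(11)
formation = rng.lognormal(mean=0.0, sigma=0.7, size=100_000)  # placeholder

p50, p90, p99 = np.percentile(formation, [50, 90, 99])
print(f"CSAF(90th) = {p90 / p50:.2f},  CSAF(99th) = {p99 / p50:.2f}")
```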

  19. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling

  20. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator, by invoking some results from the literature on nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for problems in which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.

  1. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  2. Time-on-task effects in children with and without ADHD: depletion of executive resources or depletion of motivation?

    Science.gov (United States)

    Dekkers, Tycho J; Agelink van Rentergem, Joost A; Koole, Alette; van den Wildenberg, Wery P M; Popma, Arne; Bexkens, Anika; Stoffelsen, Reino; Diekmann, Anouk; Huizenga, Hilde M

    2017-12-01

    Children with attention-deficit/hyperactivity disorder (ADHD) are characterized by deficits in their executive functioning and motivation. In addition, these children are characterized by a decline in performance as time-on-task increases (i.e., time-on-task effects). However, it is unknown whether these time-on-task effects should be attributed to deficits in executive functioning or to deficits in motivation. Some studies in typically developing (TD) adults indicated that time-on-task effects should be interpreted as depletion of executive resources, but other studies suggested that they represent depletion of motivation. We, therefore, investigated, in children with and without ADHD, whether there were time-on-task effects on executive functions, such as inhibition and (in)attention, and whether these were best explained by depletion of executive resources or depletion of motivation. The stop-signal task (SST), which generates both indices of inhibition (stop-signal reaction time) and attention (reaction time variability and errors), was administered in 96 children (42 ADHD, 54 TD controls; aged 9-13). To differentiate between depletion of resources and depletion of motivation, the SST was administered twice. Half of the participants was reinforced during second task performance, potentially counteracting depletion of motivation. Multilevel analyses indicated that children with ADHD were more affected by time-on-task than controls on two measures of inattention, but not on inhibition. In the ADHD group, reinforcement only improved performance on one index of attention (i.e., reaction time variability). The current findings suggest that time-on-task effects in children with ADHD occur specifically in the attentional domain, and seem to originate in both depletion of executive resources and depletion of motivation. Clinical implications for diagnostics, psycho-education, and intervention are discussed.

  3. Spectral history model in DYN3D: Verification against coupled Monte-Carlo thermal-hydraulic code BGCore

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Margulis, M.; Fridman, E.; Shwageraus, E.

    2015-01-01

    Highlights: • A Pu-239 based spectral history method was tested on a 3D BWR single assembly case. • Burnup of a BWR fuel assembly was performed with the nodal code DYN3D. • The reference solution was obtained by the coupled Monte-Carlo thermal-hydraulic code BGCore. • The proposed method accurately reproduces the moderator density history effect for the BWR test case. - Abstract: This research focuses on the verification of a recently developed methodology accounting for spectral history effects in 3D full core nodal simulations. The traditional deterministic core simulation procedure includes two stages: (1) generation of homogenized macroscopic cross section sets and (2) application of these sets to obtain a full 3D core solution with nodal codes. The standard approach adopts the branch methodology, in which the branches represent all expected combinations of operational conditions as a function of burnup (main branch). The main branch is produced for constant, usually averaged, operating conditions (e.g. coolant density). As a result, the spectral history effects associated with coolant density variation are not taken into account properly. A number of methods to solve this problem (such as micro-depletion and spectral indexes) have been developed and implemented in modern nodal codes. Recently, we proposed a new and robust method to account for history effects. The methodology was implemented in DYN3D and involves modification of the few-group cross section sets. The method utilizes the local Pu-239 concentration as an indicator of spectral history. The method was verified for PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. The purpose of the current work is to investigate the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and

  4. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
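
    A minimal sketch of such a power calculation, assuming a simple X → M → Y mediation model with OLS slope estimates and a percentile bootstrap (an illustration in Python, not the interface of the bmem R package):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(n, a, b):
        """One dataset from a simple mediation model X -> M -> Y, unit-normal errors."""
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        return x, m, y

    def indirect_effect(x, m, y):
        """Product-of-coefficients estimate a*b from OLS slopes (Y-model simplified)."""
        a_hat = np.polyfit(x, m, 1)[0]
        b_hat = np.polyfit(m, y, 1)[0]
        return a_hat * b_hat

    def mc_power(n, a, b, n_rep=200, n_boot=500, level=0.05):
        """Power estimate: share of replications whose bootstrap CI excludes zero."""
        hits = 0
        for _ in range(n_rep):
            x, m, y = simulate(n, a, b)
            boot = np.empty(n_boot)
            for k in range(n_boot):
                idx = rng.integers(0, n, n)      # resample cases with replacement
                boot[k] = indirect_effect(x[idx], m[idx], y[idx])
            lo, hi = np.percentile(boot, [100 * level / 2, 100 * (1 - level / 2)])
            hits += (lo > 0.0) or (hi < 0.0)
        return hits / n_rep

    print(mc_power(n=100, a=0.4, b=0.4))
    ```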

  5. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the medical physics

  6. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-d chord-based transport techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
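
    The chord-based half of the comparison can be sketched as inverse-CDF sampling from an empirical chord-length distribution; the table below is a made-up placeholder, since the real distributions come from the micro-CT images of trabecular spongiosa:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical empirical chord-length CDF (lengths in micrometres); bone and
    # marrow chords would each have their own table, sampled alternately.
    lengths = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
    cdf     = np.array([0.05, 0.20, 0.50, 0.80, 0.95, 1.00])

    def sample_chords(n):
        """Inverse-CDF sampling: map uniform deviates through the empirical CDF.

        np.interp clamps u < cdf[0] to the shortest tabulated length, which is
        acceptable for a sketch but would be refined in a real implementation.
        """
        u = rng.random(n)
        return np.interp(u, cdf, lengths)

    print(sample_chords(5))
    ```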

  7. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method to obtain a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve this linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
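
    The second stage can be illustrated with the classical von Neumann-Ulam random-walk estimator for a fixed-point system x = Hx + b (a sketch assuming the Neumann series converges; the Crank-Nicolson system would first be rearranged into this form):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_solve_component(H, b, i, n_walks=20000, kill=0.1):
        """Estimate component i of the solution of x = H x + b by random walks."""
        n = len(b)
        total = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            score = b[state]
            while rng.random() > kill:        # walk survives with probability 1-kill
                nxt = rng.integers(0, n)      # uniform transition, probability 1/n
                weight *= H[state, nxt] * n / (1.0 - kill)   # unbiasing weight
                state = nxt
                score += weight * b[state]
            total += score
        return total / n_walks

    # Tiny check against a direct solve.
    H = np.array([[0.10, 0.20], [0.05, 0.10]])
    b = np.array([1.0, 2.0])
    print(mc_solve_component(H, b, 0), np.linalg.solve(np.eye(2) - H, b)[0])
    ```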

  8. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and to decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several sets of medical and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose for Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a treatment planning system (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  9. Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.

    2006-01-01

    Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition, the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition, and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations
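
    For reference, a common one-dimensional statement of the Marshak interface condition (sign conventions vary by author): with the diffusive region occupying x > 0, the incoming partial current computed on the transport side is equated to its P1 (diffusion) expression,

    $$ J^{-} \;=\; \int_{0}^{1} \mu\,\psi(0,\mu)\,\mathrm{d}\mu \;=\; \frac{\phi(0)}{4} \;-\; \frac{D}{2}\left.\frac{\mathrm{d}\phi}{\mathrm{d}x}\right|_{x=0}. $$

    The asymptotic interface method replaces the half-range angular weighting implicit in this integral with a transport-derived weight function, which is consistent with the finding above that it remains accurate for anisotropic incident distributions.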

  10. Comparison of Depletion Strategies for the Enrichment of Low-Abundance Proteins in Urine.

    Science.gov (United States)

    Filip, Szymon; Vougas, Konstantinos; Zoidakis, Jerome; Latosinska, Agnieszka; Mullen, William; Spasovski, Goce; Mischak, Harald; Vlahou, Antonia; Jankowski, Joachim

    2015-01-01

    Proteome analysis of complex biological samples for biomarker identification remains challenging, among other reasons due to the extended range of protein concentrations. High-abundance proteins of plasma and urine, like albumin or IgG, may interfere with the detection of potential disease biomarkers. Currently, several options are available for the depletion of abundant proteins in plasma. However, the applicability of these methods to urine has not been thoroughly investigated. In this study, we compared different commercially available immunodepletion and ion-exchange based approaches on urine samples from both healthy subjects and CKD patients, for their reproducibility and efficiency in protein depletion. A starting urine volume of 500 μL was used to simulate conditions of a multi-institutional biomarker discovery study. All depletion approaches showed satisfactory reproducibility (n=5) in protein identification as well as protein abundance. Comparison of the depletion efficiency between the unfractionated and fractionated samples and the different depletion strategies showed efficient depletion in all cases, with the exception of the ion-exchange kit. The depletion efficiency was found to be slightly higher in normal than in CKD samples, and normal samples yielded more protein identifications than CKD samples when using both the initial and the corresponding depleted fractions. Along these lines, a decrease in the amount of albumin and other targets, as applicable, was observed following depletion. Nevertheless, these depletion strategies did not yield a higher number of identifications in urine from either healthy subjects or CKD patients. Collectively, when analyzing urine in the context of CKD biomarker identification, no added value of depletion strategies can be observed, and analysis of unfractionated starting urine appears preferable.

  11. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-01-01

    Full Text Available Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object, and does not require 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of the food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
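
    The core of the approach fits in a few lines; a Python sketch in which an analytic inside/outside predicate stands in for the test against the five binary camera images:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mc_volume(inside, lo, hi, n=200_000):
        """Monte Carlo volume: fraction of random points inside, times box volume."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        pts = lo + (hi - lo) * rng.random((n, 3))
        return inside(pts).mean() * np.prod(hi - lo)

    # Example: a unit sphere (true volume 4*pi/3 ~ 4.18879).
    print(mc_volume(lambda p: (p**2).sum(axis=1) < 1.0, [-1, -1, -1], [1, 1, 1]))
    ```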

  12. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    Science.gov (United States)

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object, and does not require 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of the food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.

  13. Erythrocyte depletion from bone marrow: performance evaluation after 50 clinical-scale depletions with Spectra Optia BMC.

    Science.gov (United States)

    Kim-Wanner, Soo-Zin; Bug, Gesine; Steinmann, Juliane; Ajib, Salem; Sorg, Nadine; Poppe, Carolin; Bunos, Milica; Wingenfeld, Eva; Hümmer, Christiane; Luxembourg, Beate; Seifried, Erhard; Bonig, Halvard

    2017-08-11

    Red blood cell (RBC) depletion is a standard graft manipulation technique for ABO-incompatible bone marrow (BM) transplants. The BM processing module for Spectra Optia, "BMC", was previously introduced. We here report the largest series to date of routine quality data, from 50 clinical-scale RBC-depletions. Fifty successive RBC-depletions from autologous (n = 5) and allogeneic (n = 45) BM transplants were performed with the Spectra Optia BMC apheresis suite. Product quality was assessed before and after processing for volume, RBC and leukocyte content; RBC-depletion and stem cell (CD34+ cell) recovery were calculated therefrom. Clinical engraftment data were collected from 26/45 allogeneic recipients. Median RBC removal was 98.2% (range 90.8-99.1%), median CD34+ cell recovery was 93.6% (minimum 72%), and total product volume was reduced to 7.5% (range 4.7-23.0%). Products engrafted with expected probability and kinetics. Performance indicators were stable over time. Spectra Optia BMC is a robust and efficient technology for RBC-depletion and volume reduction of BM, providing near-complete RBC removal and excellent CD34+ cell recovery.

  14. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles determine their respective contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for some MEMS fabrications.

  15. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface-crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes

  16. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, the improved quasi-static (IQS) method can improve the calculation speed of reactor dynamics simulations. A Monte Carlo IQS method was proposed in this paper, combining the advantages of the IQS method and the MC method; it is thus well suited to solving space-time dynamics problems of new-concept reactors. Based on IQS theory, Monte Carlo algorithms for calculating the adjoint neutron flux, reactor kinetic parameters and the shape function were designed and implemented. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed and used for verification of the Monte Carlo IQS method. (authors)
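
    For context, the flux factorization underlying the IQS method (standard notation, not specific to this paper) is

    $$ \phi(\mathbf{r},E,\mathbf{\Omega},t) \;=\; n(t)\,\psi(\mathbf{r},E,\mathbf{\Omega},t), $$

    where the amplitude n(t) is advanced on a fine time scale with point-kinetics equations whose parameters are here computed by Monte Carlo (using the adjoint flux as weighting function), and the shape function ψ is recomputed only on coarse time steps, with a normalization constraint on ψ making the factorization unique.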

  17. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, the definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the absorption should exclude the effect of neutron-multiplying reactions such as (n, 2n) and (n, 3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
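
    One way to write the second definition (our summary of the abstract, with L the net leakage rate):

    $$ k_{\mathrm{eff}} \;=\; \frac{P}{A + L}, $$

    where P is the neutron generation (production) rate and A is the absorption rate, with reactions such as (n, 2n) and (n, 3n) credited as neutron production rather than absorption, as described above.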

  18. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
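
    A toy version of an FET tally of a sampled density, assuming a Legendre basis on [-1, 1] (our sketch, not the project's implementation); each history scores the basis functions exactly as an extra multiplier on a conventional tally:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(3)

    def fet_coefficients(samples, weights, order):
        """Estimate Legendre FET coefficients of a tallied density on [-1, 1].

        Coefficient n is the weighted sample mean of P_n times the
        normalization (2n + 1)/2.
        """
        coeffs = np.empty(order + 1)
        for n in range(order + 1):
            basis = np.zeros(order + 1)
            basis[n] = 1.0
            pn = legendre.legval(samples, basis)     # P_n at each sample point
            coeffs[n] = 0.5 * (2 * n + 1) * np.average(pn, weights=weights)
        return coeffs

    # Toy check with samples from the density f(x) = (1 + x)/2 on [-1, 1],
    # drawn by inverting its CDF; expect coefficients ~[0.5, 0.5, 0, 0].
    x = 2.0 * np.sqrt(rng.random(100_000)) - 1.0
    print(fet_coefficients(x, np.ones_like(x), 3))
    ```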

  19. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Full Text Available Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME systematic safety field that studies faults in the steering system itself, the driver’s human reliability (HR and various environmental conditions. This paper proposed a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME system.

  20. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  1. Tryptophan depletion affects compulsive behaviour in rats

    DEFF Research Database (Denmark)

    Merchán, A; Navarro, S V; Klein, A B

    2017-01-01

    investigated whether 5-HT manipulation, through a tryptophan (TRP) depletion by diet in Wistar and Lister Hooded rats, modulates compulsive drinking in schedule-induced polydipsia (SIP) and locomotor activity in the open-field test. The levels of dopamine, noradrenaline, serotonin and its metabolite were......-depleted HD Wistar rats, while the LD Wistar and the Lister Hooded rats did not exhibit differences in SIP. In contrast, the TRP-depleted Lister Hooded rats increased locomotor activity compared to the non-depleted rats, while no differences were found in the Wistar rats. Serotonin 2A receptor binding...

  2. Revisiting Antarctic Ozone Depletion

    Science.gov (United States)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, there are still some details that need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion; especially the sensitivity with respect to the particle type has been investigated. If temperatures are artificially forced to only allow cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box-model simulations, which indicates the importance of mixing processes.

  3. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
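
    For reference, detailed balance requires the transition kernel P and the target distribution π to satisfy, for every pair of states a and b,

    $$ \pi(a)\,P(a \to b) \;=\; \pi(b)\,P(b \to a), $$

    whereas stationarity only demands the weaker global-balance condition \( \sum_{a} \pi(a)\,P(a \to b) = \pi(b) \); algorithms of the kind discussed here satisfy the latter without the former.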

  4. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically, and what it would take to make it widely available to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic

  5. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
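
    As a small worked example of one listed technique, inversion sampling draws from a target distribution by feeding uniform deviates through the inverse CDF; a sketch for the exponential distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def sample_exponential(lam, n):
        """Inversion sampling: solve F(x) = u with F(x) = 1 - exp(-lam*x)."""
        u = rng.random(n)
        return -np.log(1.0 - u) / lam

    x = sample_exponential(2.0, 100_000)
    print(x.mean())  # should be close to 1/lam = 0.5
    ```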

  6. ARCHER, a new Monte Carlo software tool for emerging heterogeneous computing environments

    International Nuclear Information System (INIS)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2015-01-01

    Highlights: • A fast Monte Carlo based radiation transport code, ARCHER, was developed. • ARCHER supports different hardware including CPU, GPU and Intel Xeon Phi coprocessors. • The code is benchmarked against MCNP for medical applications. • A typical CT scan dose simulation takes only 6.8 s on an NVIDIA M2090 GPU. • GPU and coprocessor-based codes are 5–8 times faster than the CPU-based codes. - Abstract: The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs and Xeon Phi coprocessors. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software package, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented

  7. ANATOMY OF DEPLETED INTERPLANETARY CORONAL MASS EJECTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kocher, M.; Lepri, S. T.; Landi, E.; Zhao, L.; Manchester, W. B. IV, E-mail: mkocher@umich.edu [Department of Climate and Space Sciences and Engineering, University of Michigan, 2455 Hayward Street, Ann Arbor, MI 48109-2143 (United States)

    2017-01-10

    We report a subset of interplanetary coronal mass ejections (ICMEs) containing distinct periods of anomalous heavy-ion charge state composition and peculiar ion thermal properties measured by ACE/SWICS from 1998 to 2011. We label them “depleted ICMEs,” identified by the presence of intervals where C⁶⁺/C⁵⁺ and O⁷⁺/O⁶⁺ depart from the direct correlation expected after their freeze-in heights. These anomalous intervals within the depleted ICMEs are referred to as “Depletion Regions.” We find that a depleted ICME would be indistinguishable from all other ICMEs in the absence of the Depletion Region, which has the defining property of significantly low abundances of fully charged species of helium, carbon, oxygen, and nitrogen. Similar anomalies in the slow solar wind were discussed by Zhao et al. We explore two possibilities for the source of the Depletion Region associated with magnetic reconnection in the tail of a CME, using CME simulations of the evolution of two Earth-bound CMEs described by Manchester et al.

  8. Development, implementation, and verification of multicycle depletion perturbation theory for reactor burnup analysis

    Energy Technology Data Exchange (ETDEWEB)

    White, J.R.

    1980-08-01

    A generalized depletion perturbation formulation based on the quasi-static method for solving realistic multicycle reactor depletion problems is developed and implemented within the VENTURE/BURNER modular code system. The present development extends the original formulation derived by M.L. Williams to include nuclide discontinuities such as fuel shuffling and discharge. This theory is first described in detail with particular emphasis given to the similarity of the forward and adjoint quasi-static burnup equations. The specific algorithm and computational methods utilized to solve the adjoint problem within the newly developed DEPTH (Depletion Perturbation Theory) module are then briefly discussed. Finally, the main features and computational accuracy of this new method are illustrated through its application to several representative reactor depletion problems.

  9. Efficient characterization of fuel depletion in boiling water reactor

    International Nuclear Information System (INIS)

    Kim, S.H.

    1980-01-01

    An efficient fuel depletion method for boiling water reactor (BWR) fuel assemblies has been developed for fuel cycle analysis. A computer program HISTORY based on this method was designed to carry out accurate and rapid fuel burnup calculations for the fuel assembly. It has been usefully employed to study the depletion characteristics of fuel assemblies for the preparation of nodal code input data and for fuel management studies. The adequacy and effectiveness of this method as used in HISTORY were demonstrated by comparing HISTORY results with more detailed CASMO results. The computing cost of HISTORY has typically been less than one dollar for fuel assembly-level depletion calculations over the full life of the assembly, in contrast to more than $1000 for CASMO. By combining CASMO and HISTORY, a large number of expensive CASMO calculations can be replaced by inexpensive HISTORY calculations. For depletion calculations via CASMO/HISTORY, CASMO calculations are required only for the reference conditions and, just at the beginning of life, for other cases such as changes in void fraction, control rod condition and temperature. The simple and inexpensive HISTORY is sufficiently accurate and fast to be used in conjunction with CASMO for fuel cycle analysis and some BWR design calculations

  10. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  11. MC21 Monte Carlo analysis of the Hoogenboom-Martin full-core PWR benchmark problem - 301

    International Nuclear Information System (INIS)

    Kelly, D.J.; Sutton, Th.M.; Trumbull, T.H.; Dobreff, P.S.

    2010-01-01

    At the 2009 American Nuclear Society Mathematics and Computation conference, Hoogenboom and Martin proposed a full-core PWR model to monitor the improvement of Monte Carlo codes to compute detailed power density distributions. This paper describes the application of the MC21 Monte Carlo code to the analysis of this benchmark model. With the MC21 code, we obtained detailed power distributions over the entire core. The model consisted of 214 assemblies, each made up of a 17x17 array of pins. Each pin was subdivided into 100 axial nodes, thus resulting in over seven million tally regions. Various cases were run to assess the statistical convergence of the model. This included runs of 10 billion and 40 billion neutron histories, as well as ten independent runs of 4 billion neutron histories each. The 40 billion neutron-history calculation resulted in 43% of all regions having a 95% confidence level of 2% or less implying a relative standard deviation of 1%. Furthermore, 99.7% of regions having a relative power density of 1.0 or greater have a similar confidence level. We present timing results that assess the MC21 performance relative to the number of tallies requested. Source convergence was monitored by analyzing plots of the Shannon entropy and eigenvalue versus active cycle. We also obtained an estimate of the dominance ratio. Additionally, we performed an analysis of the error in an attempt to ascertain the validity of the confidence intervals predicted by MC21. Finally, we look forward to the prospect of full core 3-D Monte Carlo depletion by scoping out the required problem size. This study provides an initial data point for the Hoogenboom-Martin benchmark model using a state-of-the-art Monte Carlo code. (authors)

  12. The specific bias in dynamic Monte Carlo simulations of nuclear reactors

    International Nuclear Information System (INIS)

    Yamamoto, T.; Endo, H.; Ishizu, T.; Tatewaki, I.

    2013-01-01

    During the development of a Monte-Carlo-based dynamic code system, we have encountered two major Monte-Carlo-specific problems. One is the breakdown due to 'false super-criticality', which is caused by an accidentally large eigenvalue due to statistical error in spite of the fact that the reactor is actually not critical. The other problem, which is the main topic in this paper, is that the statistical error in the power level computed using the reactivity calculated with a Monte Carlo code is not symmetric about its mean but always positively biased. This signifies that the bias accumulates as the calculation proceeds and consequently results in an over-estimation of the final power level. It should be noted that the bias will not be eliminated by refining the time step as long as the variance is not zero. A preliminary investigation of this matter using the one-group-precursor point kinetics equations was made, and it was concluded that the bias in power level is approximately proportional to the product of the variance of the Monte Carlo calculation and the elapsed time. This conclusion was verified with some numerical experiments. This outcome is important in quantifying the required precision of Monte-Carlo-based reactivity calculations. (authors)
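
    A plausible way to see the stated proportionality (our gloss, not the paper's derivation): if each time step of length Δt multiplies the power by exp(ρ̂Δt/Λ), where ρ̂ carries an independent, unbiased error of variance σ_ρ², then Jensen's inequality makes each step inflate the expected power, and over an elapsed time t

    $$ \frac{\mathbb{E}[P(t)]}{P_{\mathrm{true}}(t)} \;\approx\; \exp\!\left(\frac{\sigma_\rho^{2}\,\Delta t\,t}{2\Lambda^{2}}\right), $$

    a positive bias whose exponent grows with the product of the variance and the elapsed time; under a fixed computational budget, halving Δt roughly doubles σ_ρ², leaving the product unchanged, which is consistent with the remark that refining the time step alone does not remove the bias.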

  13. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method

    International Nuclear Information System (INIS)

    Han Jingru; Chen Yixue; Yuan Longjun

    2013-01-01

    The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method is able to treat the geometry exactly, but is time-consuming in dealing with deep penetration problems. The discrete ordinates method has great computational efficiency, but it is quite costly in computer memory and suffers from ray effects. Either the discrete ordinates method or the Monte Carlo method alone has limitations in shielding calculations for large, complex nuclear facilities. In order to solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method was developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated by the coupling method. The results were compared with MCNP and TORT, and satisfactory agreement was obtained. The correctness of the program is thus demonstrated. (authors)

  14. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectroscopy measurements

  15. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used are based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed, and simulation algorithms based on pseudo-marginal sequential Monte Carlo algorithms are described and their efficiency is analysed.

  16. CALCULATION OF STOCK PORTFOLIO VaR USING HISTORICAL DATA AND MONTE CARLO SIMULATION DATA

    Directory of Open Access Journals (Sweden)

    WAYAN ARTHINI

    2012-09-01

    Full Text Available Value at Risk (VaR) is the maximum potential loss on a portfolio, at a given probability, over a certain time horizon. In this research, portfolio VaR values are calculated from historical data and from Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to find the proportion of each stock in the portfolio as well as the portfolio risk and return. The data were then simulated by Monte Carlo simulation: Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. The Exact Monte Carlo Simulation has the same returns and standard deviations as the historical data, while the Expected Monte Carlo Simulation yields statistics similar to those of the historical data. The result of this research is the portfolio VaR for time horizons T=1, T=10 and T=22 at the 95% confidence level, obtained from historical data and from Monte Carlo simulation data with the exact and expected methods. The VaR values from both Monte Carlo simulations are greater than the VaR from historical data.
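
    A generic sketch of the Monte Carlo side of such a calculation, with made-up portfolio parameters (the paper's "exact" and "expected" variants differ in how the simulated moments are matched to the historical ones):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_var(mu, cov, weights, horizon, n_sims=100_000, conf=0.95):
        """Portfolio VaR: simulate multivariate-normal daily returns, aggregate
        over the horizon, and take the loss quantile of the portfolio return."""
        r = rng.multivariate_normal(mu, cov, size=(n_sims, horizon)).sum(axis=1)
        portfolio = r @ weights
        return -np.percentile(portfolio, 100 * (1 - conf))

    mu = np.array([0.0005, 0.0003])                    # illustrative daily means
    cov = np.array([[4.0e-4, 1.0e-4],
                    [1.0e-4, 2.25e-4]])                # illustrative covariance
    w = np.array([0.6, 0.4])                           # portfolio proportions
    for T in (1, 10, 22):
        print(T, mc_var(mu, cov, w, T))
    ```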

  17. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.

  18. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)

  19. PELE:  Protein Energy Landscape Exploration. A Novel Monte Carlo Based Technique.

    Science.gov (United States)

    Borrelli, Kenneth W; Vitalis, Andreas; Alcantara, Raul; Guallar, Victor

    2005-11-01

    Combining protein structure prediction algorithms and Metropolis Monte Carlo techniques, we provide a novel method to explore all-atom energy landscapes. The core of the technique is based on a steered localized perturbation followed by side-chain sampling as well as minimization cycles. The algorithm and its application to ligand diffusion are presented here. Ligand exit pathways are successfully modeled for different systems containing ligands of various sizes:  carbon monoxide in myoglobin, camphor in cytochrome P450cam, and palmitic acid in the intestinal fatty-acid-binding protein. These initial applications reveal the potential of this new technique in mapping millisecond-time-scale processes. The computational cost associated with the exploration is significantly less than that of conventional MD simulations.

  20. Improvements in EBR-2 core depletion calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Hill, R.N.; Sakamoto, S.

    1991-01-01

    The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs

  1. Depletion sensitivity predicts unhealthy snack purchases.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; Fennis, Bob M; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose weight on snack purchase behavior were explored. Participants included in the study were instructed to report every snack they bought over the course of one week. The dependent variables were the number of healthy and unhealthy snacks purchased. The results of the present study demonstrate that depletion sensitivity predicts the amount of unhealthy (but not healthy) snacks bought. The more sensitive people are to depletion, the more unhealthy snacks they buy. Moreover, there was some tentative evidence that this relation is more pronounced for people with a weak as opposed to a strong goal to lose weight, suggesting that a strong goal to lose weight may function as a motivational buffer against self-control failures. All in all, these findings provide evidence for the external validity of depletion sensitivity and the relevance of this construct in the domain of eating behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem

  3. Monte Carlo simulation of continuous-space crystal growth

    International Nuclear Information System (INIS)

    Dodson, B.W.; Taylor, P.A.

    1986-01-01

    We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems

  4. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
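
    A one-dimensional caricature of the two-step idea (the paper uses sparse grids in several dimensions and a proper quasi-Monte Carlo sequence; here a Chebyshev interpolant and a van der Corput sequence stand in):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev

    def expensive_model(theta):                # stand-in for the forward model
        return np.exp(-theta**2) * np.sin(3.0 * theta)

    # Step 1: surrogate from a handful of model runs at Chebyshev nodes.
    nodes = np.cos(np.pi * (np.arange(17) + 0.5) / 17.0)
    coef = chebyshev.chebfit(nodes, expensive_model(nodes), 16)
    surrogate = lambda t: chebyshev.chebval(t, coef)

    # Step 2: sample the parameter space with a low-discrepancy sequence and
    # evaluate only the cheap surrogate.
    def van_der_corput(n, base=2):
        seq = np.empty(n)
        for i in range(n):
            f, k, x = 1.0, i + 1, 0.0
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            seq[i] = x
        return seq

    theta = 2.0 * van_der_corput(4096) - 1.0   # map [0, 1) points to [-1, 1]
    print(np.mean(surrogate(theta)), np.mean(expensive_model(theta)))
    ```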

  5. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. Differences remain between the results of the analog Monte Carlo simulation and the experiment; the reason for these differences is the gaps between the uranium layers. The influence of the gaps decreases as the sub-criticality deepens, with the relative difference between simulation and experiment falling from 19% to 0.19%. (authors)

  6. Study on shielding design method of radiation streaming in a tokamak-type DT fusion reactor based on Monte Carlo calculation

    International Nuclear Information System (INIS)

    Sato, Satoshi

    2003-09-01

    In a tokamak-type DT nuclear fusion reactor, there are various types of slits and ducts in the blanket and the vacuum vessel. Helium production at the rewelding locations of the blanket and the vacuum vessel, and the nuclear properties of the super-conductive TF coil, e.g. the nuclear heating rate in the coil winding pack, are enhanced by the radiation streaming through these slits and ducts, and they are critical concerns in the shielding design. The decay gamma-ray dose rate around a duct penetrating the blanket and the vacuum vessel is also enhanced by the radiation streaming through the duct, a further critical concern from the viewpoint of human access to the cryostat during maintenance. Evaluating these nuclear properties with good accuracy requires three-dimensional Monte Carlo calculation, which in turn requires long calculation times. Therefore, the development of an effective, simple design evaluation method for radiation streaming is substantially important. This study aims to establish a systematic evaluation method for the nuclear properties of the blanket, the vacuum vessel and the Toroidal Field (TF) coil, taking into account the radiation streaming through various types of slits and ducts, based on three-dimensional Monte Carlo calculation using the MCNP code, and for the decay gamma-ray dose rates around the penetrating ducts. The present thesis describes three topics in five chapters, as follows: 1) In Chapter 2, the results calculated by the Monte Carlo code, MCNP, are compared with those of the Sn code, DOT3.5, for radiation streaming in a tokamak-type fusion reactor, to validate the Sn calculation. From this comparison, the uncertainties of the Sn results arising from the ray effect and from the approximation of the geometry are investigated, to determine whether a two-dimensional Sn calculation can be applied instead of the Monte Carlo calculation. Through the study, it can be concluded that the

  7. Simplified Monte Carlo simulation for Beijing spectrometer

    International Nuclear Information System (INIS)

    Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei

    1986-01-01

    A Monte Carlo method, based on functionalizing the performance of the detectors and transforming the values of kinematical variables into ''measured'' ones by means of smearing, has been used to program a Monte Carlo simulation of the performance of the Beijing Spectrometer (BES); the FORTRAN program is named BESMC. It can be used to investigate the multiplicity, the particle types, and the four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. It thus provides a means to examine whether the overall design of the BES is reasonable and to select the physics topics for the BES

  8. Depleted depletion drives polymer swelling in poor solvent mixtures.

    Science.gov (United States)

    Mukherji, Debashish; Marques, Carlos M; Stuehn, Torsten; Kremer, Kurt

    2017-11-09

    Establishing a link between macromolecular conformation and microscopic interaction is key to understanding the properties of polymer solutions and to designing technologically relevant "smart" polymers. Polymer solvation in solvent mixtures can appear paradoxical. For example, when adding polymers to a solvent such that all particle interactions are repulsive, polymer chains can collapse due to increased monomer-solvent repulsion. This depletion-induced monomer-monomer attraction is well known from colloidal stability. A typical example is poly(methyl methacrylate) (PMMA) in water or small alcohols. While polymer collapse in a single poor solvent is well understood, the observed polymer swelling in mixtures of two repulsive solvents is surprising. By combining simulations and theoretical concepts known from polymer physics and colloidal science, we unveil the microscopic, generic origin of this collapse-swelling-collapse behavior. We show that this phenomenon naturally emerges at constant pressure when an appropriate balance of entropically driven depletion interactions is achieved.

  9. Is Ego Depletion Real? An Analysis of Arguments.

    Science.gov (United States)

    Friese, Malte; Loschelder, David D; Gieseler, Karolin; Frankenbach, Julius; Inzlicht, Michael

    2018-03-01

    An influential line of research suggests that initial bouts of self-control increase the susceptibility to self-control failure (the ego depletion effect). Despite seemingly abundant evidence, some researchers have suggested that evidence for ego depletion was the sole result of publication bias and p-hacking, with the true effect being indistinguishable from zero. Here, we examine (a) whether the evidence brought forward against ego depletion will convince a proponent that ego depletion does not exist and (b) whether arguments that could be brought forward in defense of ego depletion will convince a skeptic that ego depletion does exist. We conclude that despite several hundred published studies, the available evidence is inconclusive. Additional empirical and theoretical work is needed to make a compelling case for either side of the debate. We discuss necessary steps for future work toward this aim.

  10. Depletion-induced biaxial nematic states of boardlike particles

    International Nuclear Information System (INIS)

    Belli, S; Van Roij, R; Dijkstra, M

    2012-01-01

    With the aim of investigating the stability conditions of biaxial nematic liquid crystals, we study the effect of adding a non-adsorbing ideal depletant on the phase behavior of colloidal hard boardlike particles. We take into account the presence of the depletant by introducing an effective depletion attraction between a pair of boardlike particles. At fixed depletant fugacity, the stable liquid-crystal phase is determined through a mean-field theory with restricted orientations. Interestingly, we predict that for slightly elongated boardlike particles a critical depletant density exists, where the system undergoes a direct transition from an isotropic liquid to a biaxial nematic phase. As a consequence, by tuning the depletant density, an easy experimental control parameter, one can stabilize states of high biaxial nematic order even when these states are unstable for pure systems of boardlike particles. (paper)

  11. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various methods of generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of the Monte Carlo method in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that good agreement has been obtained for the models considered
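
    As a concrete illustration of the Metropolis step mentioned above, this hedged Python sketch samples x with probability proportional to a weight function w(x) = exp(-x^2/2); the proposal width and burn-in length are arbitrary choices, not prescribed by the review.

      import numpy as np

      rng = np.random.default_rng(42)

      def weight(x):
          return np.exp(-0.5 * x * x)

      x, samples = 0.0, []
      for _ in range(100_000):
          x_new = x + rng.uniform(-1.0, 1.0)          # symmetric proposal
          if rng.random() < weight(x_new) / weight(x):
              x = x_new                               # accept; otherwise keep old x
          samples.append(x)

      # The sample mean of x^2 should approach 1, the variance of the
      # standard normal implied by the weight function.
      print(np.mean(np.square(samples[10_000:])))     # discard burn-in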

  12. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using the raw projection data; (2) rigid registration of the planning CT to the FDK result; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to the other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC noise in the simulated scatter images caused by the low photon numbers. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error in a region of interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed that accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
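
    Step (4) is what makes the sparse-angle strategy pay off; a rough Python sketch of that step alone is given below, with the scan geometry, the 31 sparse angles and the stand-in scatter estimates all hypothetical rather than taken from the paper.

      import numpy as np

      n_views, n_pix = 364, 128                        # hypothetical scan geometry
      sparse_idx = np.linspace(0, n_views, 31, endpoint=False).astype(int)
      scatter_sparse = np.random.rand(len(sparse_idx), n_pix)  # stand-in MC estimates

      angles = np.arange(n_views) * 2 * np.pi / n_views
      sparse_ang = angles[sparse_idx]

      scatter_full = np.empty((n_views, n_pix))
      for p in range(n_pix):
          # period=2*pi makes the interpolation wrap around the gantry rotation
          scatter_full[:, p] = np.interp(angles, sparse_ang,
                                         scatter_sparse[:, p], period=2 * np.pi)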

  13. Observations and Simulations of Formation of Broad Plasma Depletions Through Merging Process

    Science.gov (United States)

    Huang, Chao-Song; Retterer, J. M.; Beaujardiere, O. De La; Roddy, P. A.; Hunton, D.E.; Ballenthin, J. O.; Pfaff, Robert F.

    2012-01-01

    Broad plasma depletions in the equatorial ionosphere near dawn are regions in which the plasma density is reduced by 1-3 orders of magnitude over thousands of kilometers in longitude. This phenomenon was observed repeatedly by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite during the deep solar minimum. The plasma flow inside the depletion region can be strongly upward. A possible causal mechanism for the formation of broad plasma depletions is that they result from the merging of multiple equatorial plasma bubbles. The purpose of this study is to demonstrate the feasibility of the merging mechanism with new observations and simulations. We present C/NOFS observations for two cases. A series of plasma bubbles is first detected by C/NOFS over a longitudinal range of 3300-3800 km around midnight. Each of the individual bubbles has a typical width of approx 100 km in longitude, and the upward ion drift velocity inside the bubbles is 200-400 m/s. The plasma bubbles rotate with the Earth to the dawn sector and become broad plasma depletions. The observations clearly show the evolution from multiple plasma bubbles to broad depletions. Large upward plasma flow occurs inside the depletion region over 3800 km in longitude and persists for approx 5 h. We also present numerical simulations of bubble merging with a physics-based low-latitude ionospheric model. It is found that two separate plasma bubbles join together and form a single, wider bubble. The simulations show that the merging process of plasma bubbles can indeed occur in incompressible ionospheric plasma. The simulation results support the merging mechanism for the formation of broad plasma depletions.

  14. Partitioning ratio of depleted uranium during a melt decontamination by arc melting

    International Nuclear Information System (INIS)

    Min, Byeong Yeon; Choi, Wang Kyu; Oh, Won Zin; Jung, Chong Hun

    2008-01-01

    In a study of the optimum operational conditions for a melting decontamination, the effects of the basicity, slag type and slag composition on the distribution of depleted uranium were investigated for radioactively contaminated metallic wastes of iron-based metals such as stainless steel (SUS 304L) in a direct-current graphite arc furnace. Most of the depleted uranium was easily moved into the slag from the radioactive metal waste. The partitioning ratio of the depleted uranium was influenced by the amount of added slag former and the slag basicity. The slag former used to capture contaminants such as depleted uranium during the melt decontamination process generally consists of silica (SiO2), calcium oxide (CaO) and aluminum oxide (Al2O3). Furthermore, calcium fluoride (CaF2), magnesium oxide (MgO), and ferric oxide (Fe2O3) were added to increase the slag fluidity and oxidative potential. The partitioning ratio of the depleted uranium increased as the amount of slag former was increased. Up to 97% of the depleted uranium was captured between the ingot phase and the slag phase. The partitioning ratio of the uranium was considerably dependent on the basicity and composition of the slag. The optimum condition for the removal of the depleted uranium was a basicity level of about 1.5, at which the partitioning ratio of uranium was high, exceeding 5.5×10^3. Slag formers containing calcium fluoride (CaF2) and a high amount of silica proved to be more effective for the melt decontamination of stainless steel wastes contaminated with depleted uranium

  15. Unbiased Selective Isolation of Protein N-Terminal Peptides from Complex Proteome Samples Using Phospho Tagging (PTAG) and TiO2-based Depletion

    NARCIS (Netherlands)

    Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.

    2012-01-01

    A positional proteomics strategy for global N-proteome analysis is presented based on phospho tagging (PTAG) of internal peptides followed by depletion by titanium dioxide (TiO2) affinity chromatography. Therefore, N-terminal and lysine amino groups are initially completely dimethylated with

  16. Acceleration of Monte Carlo solution by conjugate gradient method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihisa

    2005-01-01

    The conjugate gradient (CG) method was applied to accelerate Monte Carlo solutions of fixed-source problems. An equilibrium-model-based formulation makes it possible to use the CG scheme, as well as an initial guess, to maximize computational performance. The method is applicable to arbitrary geometries provided that the neutron source distribution in each subregion can be regarded as flat. Even when this is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual is estimated by Monte Carlo sampling and therefore carries a statistical error. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D heterogeneous slab test problem. One of them, the Sparse-M option, showed excellent convergence performance; the performance per unit cost was improved by a factor of four in the test problem. Although a direct estimate of the efficiency of the method is impossible, mainly because of the strong problem dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)
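
    For reference, a plain deterministic conjugate-gradient kernel is sketched below in Python; in the Monte Carlo variant described above, the residual r would itself be a sampled, noisy estimate, so the stopping tolerance must sit above the statistical error rather than near machine precision.

      import numpy as np

      def cg(A, b, x0, tol=1e-8, max_iter=200):
          x = x0.copy()
          r = b - A @ x            # residual (a noisy estimate in the MC setting)
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])           # symmetric positive definite
      print(cg(A, np.array([1.0, 2.0]), np.zeros(2)))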

  17. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  18. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables

  19. Podocyte Depletion in Thin GBM and Alport Syndrome.

    Science.gov (United States)

    Wickman, Larysa; Hodgin, Jeffrey B; Wang, Su Q; Afshinnia, Farsad; Kershaw, David; Wiggins, Roger C

    2016-01-01

    The proximate genetic cause of both Thin GBM and Alport Syndrome (AS) is abnormal α3, 4 and 5 collagen IV chains resulting in abnormal glomerular basement membrane (GBM) structure/function. We previously reported that the podocyte detachment rate measured in urine is increased in AS, suggesting that podocyte depletion could play a role in causing progressive loss of kidney function. To test this hypothesis, podometric parameters were measured in 26 kidney biopsies from 21 patients aged 2-17 years with a clinico-pathologic diagnosis of either classic Alport Syndrome, with thin and thick GBM segments and lamellated lamina densa [n = 15], or Thin GBM [n = 6]. Protocol biopsies from deceased-donor kidneys were used as age-matched controls. Podocyte depletion was present in AS biopsies prior to detectable histologic abnormalities. No abnormality was detected by light microscopy at 70% podocyte depletion. Low-level proteinuria was an early event at about 25% podocyte depletion and increased in proportion to podocyte depletion. These quantitative data parallel those from model systems where podocyte depletion is the causative event. This result supports the hypothesis that in AS podocyte adherence to the GBM is defective, resulting in accelerated podocyte detachment and progressive podocyte depletion, leading to FSGS-like pathologic changes and eventual End Stage Kidney Disease. Early intervention to reduce podocyte depletion is projected to prolong kidney survival in AS.

  20. Monte Carlo tests of the ELIPGRID-PC algorithm

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error

  1. Improved Monte Carlo Method for PSA Uncertainty Analysis

    International Nuclear Information System (INIS)

    Choi, Jongsoo

    2016-01-01

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties exist from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including CCFs) using efficient random number generators, and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculations are needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard
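
    A toy Python sketch of why the SOKC matters for mean risk metrics: two identical components whose failure probability comes from the same data source must reuse the same random draw, which raises the mean of their product above the value obtained with (incorrect) independent sampling. The lognormal parameters are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 100_000

      # One lognormal uncertainty distribution shared by two like components.
      p_shared = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=N)

      # With SOKC: both basic events use the identical sample, so the top-event
      # mean is E[p^2], which exceeds (E[p])^2.
      top_sokc = p_shared * p_shared

      # Without SOKC (incorrect for identical components): independent draws.
      p_other = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=N)
      top_indep = p_shared * p_other

      print(top_sokc.mean(), top_indep.mean())         # SOKC mean is visibly larger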

  2. Two proposed convergence criteria for Monte Carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginal methods to assess the fulfillment of the second requirement, such as checking that the statistical error decreases proportionally to 1/√N, together with error magnitude guidelines. Two methods are proposed in this paper to assist in deciding whether N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history-score probability density function (pdf)
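
    A hedged Python sketch of the first proposed check, using the common moment-based VOV estimator (the form used in MCNP-style tallies); the two test distributions are arbitrary stand-ins for well-behaved and heavy-tailed history scores.

      import numpy as np

      def vov(scores):
          # VOV = sum((x - m)^4) / (sum((x - m)^2))^2 - 1/N
          m = scores.mean()
          d = scores - m
          return np.sum(d ** 4) / np.sum(d ** 2) ** 2 - 1.0 / len(scores)

      rng = np.random.default_rng(0)
      well_behaved = rng.exponential(1.0, 100_000)
      heavy_tailed = rng.pareto(2.5, 100_000)          # infinite fourth moment

      # A small, stable VOV suggests the CLT-based confidence interval is usable.
      print(vov(well_behaved), vov(heavy_tailed))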

  3. Improved Monte Carlo Method for PSA Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jongsoo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties exist from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including CCFs) using efficient random number generators, and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculations are needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.

  4. A Monte Carlo-based treatment-planning tool for ion beam therapy

    CERN Document Server

    Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairan, A

    2013-01-01

    Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...

  5. Long-term groundwater depletion in the United States

    Science.gov (United States)

    Konikow, Leonard F.

    2015-01-01

    The volume of groundwater stored in the subsurface in the United States decreased by almost 1000 km3 during 1900–2008. The aquifer systems with the three largest volumes of storage depletion include the High Plains aquifer, the Mississippi Embayment section of the Gulf Coastal Plain aquifer system, and the Central Valley of California. Depletion rates accelerated during 1945–1960, averaging 13.6 km3/year during the last half of the century, and after 2000 increased again to about 24 km3/year. Depletion intensity is a new parameter, introduced here, to provide a more consistent basis for comparing storage depletion problems among various aquifers by factoring in time and areal extent of the aquifer. During 2001–2008, the Central Valley of California had the largest depletion intensity. Groundwater depletion in the United States can explain 1.4% of observed sea-level rise during the 108-year study period and 2.1% during 2001–2008. Groundwater depletion must be confronted on local and regional scales to help reduce demand (primarily in irrigated agriculture) and/or increase supply.
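
    The abstract does not spell out the formula for depletion intensity; one plausible reading of "factoring in time and areal extent" is depleted volume per unit aquifer area per unit time, as in this purely illustrative Python sketch (the numbers are hypothetical, not the study's):

      def depletion_intensity(volume_km3, area_km2, years):
          # km^3 / km^2 = km of water-equivalent thickness; convert to mm/yr.
          return volume_km3 / area_km2 / years * 1e6

      # Hypothetical aquifer: 80 km^3 lost over 52,000 km^2 in 8 years.
      print(depletion_intensity(volume_km3=80.0, area_km2=52_000.0, years=8))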

  6. Transport methods: general. 1. The Analytical Monte Carlo Method for Radiation Transport Calculations

    International Nuclear Information System (INIS)

    Martin, William R.; Brown, Forrest B.

    2001-01-01

    We present an alternative Monte Carlo method for solving the coupled equations of radiation transport and material energy. This method is based on incorporating the analytical solution to the material energy equation directly into the Monte Carlo simulation for the radiation intensity. This method, which we call the Analytical Monte Carlo (AMC) method, differs from the well known Implicit Monte Carlo (IMC) method of Fleck and Cummings because there is no discretization of the material energy equation since it is solved as a by-product of the Monte Carlo simulation of the transport equation. Our method also differs from the method recently proposed by Ahrens and Larsen since they use Monte Carlo to solve both equations, while we are solving only the radiation transport equation with Monte Carlo, albeit with effective sources and cross sections to represent the emission sources. Our method bears some similarity to a method developed and implemented by Carter and Forest nearly three decades ago, but there are substantive differences. We have implemented our method in a simple zero-dimensional Monte Carlo code to test the feasibility of the method, and the preliminary results are very promising, justifying further extension to more realistic geometries. (authors)

  7. An NPT Monte Carlo Molecular Simulation-Based Approach to Investigate Solid-Vapor Equilibrium: Application to Elemental Sulfur-H2S System

    KAUST Repository

    Kadoura, Ahmad Salim; Salama, Amgad; Sun, Shuyu; Sherik, Abdelmounam

    2013-01-01

    In this work, a method to estimate the solubility of solid elemental sulfur in pure gases and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. This method is based on the Isobaric-Isothermal (NPT) ensemble and the Widom insertion technique
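
    A minimal Python sketch of the Widom insertion estimate of the excess chemical potential, mu_ex = -kT ln<exp(-dU/kT)>, shown for a single frozen toy 2D Lennard-Jones configuration; in the NPT ensemble used here the average additionally carries a volume weight, <V exp(-dU/kT)>/<V>, and one would of course average over many configurations of the simulation chain.

      import numpy as np

      rng = np.random.default_rng(0)
      L, n, kT = 10.0, 50, 1.0
      pos = rng.uniform(0, L, size=(n, 2))             # one frozen toy configuration

      def insertion_energy(r_new, pos):
          d = pos - r_new
          d -= L * np.round(d / L)                     # minimum-image convention
          r2 = np.maximum((d * d).sum(axis=1), 1e-12)
          inv6 = 1.0 / r2 ** 3
          return np.sum(4.0 * (inv6 * inv6 - inv6))    # Lennard-Jones, sigma = eps = 1

      boltz = [np.exp(-insertion_energy(rng.uniform(0, L, 2), pos) / kT)
               for _ in range(20_000)]
      mu_ex = -kT * np.log(np.mean(boltz))
      print(mu_ex)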

  8. Depletion sensitivity predicts unhealthy snack purchases

    NARCIS (Netherlands)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; Fennis, Bob M.; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose

  9. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    Fuel depletion is described by more than one hundred fuel isotopes in advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error when high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones assumed at the stage of the lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in diffusion codes in order to predict the real depletion effects in the core adequately. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is then smaller (Authors)

  10. Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

    KAUST Repository

    Haji Ali, Abdul Lateef; Tempone, Raul

    2017-01-01

    of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting

  11. LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events

    CERN Document Server

    Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V

    2008-01-01

    In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase intended mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.

  12. Exponentially-convergent Monte Carlo via finite-element trial spaces

    International Nuclear Information System (INIS)

    Morel, Jim E.; Tooley, Jared P.; Blamer, Brandon J.

    2011-01-01

    Exponentially-Convergent Monte Carlo (ECMC) methods, also known as adaptive Monte Carlo and residual Monte Carlo methods, were the subject of intense research over a decade ago, but they never became practical for solving realistic problems. We believe that the failure of previous efforts may be related to the choice of trial spaces, which were global and thus highly oscillatory. As an alternative, we consider finite-element trial spaces, which have the ability to treat fully realistic problems. As a first step towards more general methods, we apply piecewise-linear trial spaces to the spatially continuous two-stream transport equation. Using this approach, we achieve exponential convergence and computationally demonstrate several fundamental properties of finite-element-based ECMC methods. Our results indicate that the finite-element approach clearly deserves further investigation. (author)

  13. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/SN method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor SN is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  14. Both cladribine and alemtuzumab may effect MS via B-cell depletion.

    Science.gov (United States)

    Baker, David; Herrod, Samuel S; Alvarez-Gonzalez, Cesar; Zalewski, Lukasz; Albor, Christo; Schmierer, Klaus

    2017-07-01

    To understand the efficacy of cladribine (CLAD) treatment in MS through analysis of lymphocyte subsets collected, but not reported, in the pivotal phase III trials of cladribine and alemtuzumab induction therapies. The regulatory submissions of the CLAD Tablets Treating Multiple Sclerosis Orally (CLARITY) (NCT00213135) cladribine and Comparison of Alemtuzumab and Rebif Efficacy in Multiple Sclerosis, study one (CARE-MS I) (NCT00530348) alemtuzumab trials were obtained from the European Medicine Agency through Freedom of Information requests. Data were extracted and statistically analyzed. Either dose of cladribine (3.5 mg/kg; 5.25 mg/kg) tested in CLARITY reduced the annualized relapse rate to 0.16-0.18 over 96 weeks, and both doses were similarly effective in reducing the risk of MRI lesions and disability. Surprisingly, however, T-cell depletion was rather modest. Cladribine 3.5 mg/kg depleted CD4+ cells by 40%-45% and CD8+ cells by 15%-30%, whereas alemtuzumab suppressed CD4+ cells by 70%-95% and CD8+ cells by 47%-55%. However, either dose of cladribine induced 70%-90% CD19+ B-cell depletion, similar to alemtuzumab (90%). CD19+ cells slowly repopulated to 15%-25% of baseline before cladribine redosing. However, alemtuzumab induced hyperrepopulation of CD19+ B cells 6-12 months after infusion, which probably forms the substrate for the B-cell autoimmunities associated with alemtuzumab. Cladribine induced only modest depletion of T cells, which may not be consistent with a marked influence on MS, based on previous CD4+ T-cell depletion studies. The therapeutic drug-response relationship with cladribine is more consistent with lasting B-cell depletion and, coupled with the success seen with monoclonal CD20+ depletion, suggests that B-cell suppression could be the major direct mechanism of action.

  15. Optimization of FIBMOS Through 2D Silvaco ATLAS and 2D Monte Carlo Particle-based Device Simulations

    OpenAIRE

    Kang, J.; He, X.; Vasileska, D.; Schroder, D. K.

    2001-01-01

    Focused Ion Beam MOSFETs (FIBMOS) demonstrate large enhancements in core device performance areas such as output resistance, hot electron reliability and voltage stability upon channel length or drain voltage variation. In this work, we describe an optimization technique for FIBMOS threshold voltage characterization using the 2D Silvaco ATLAS simulator. Both ATLAS and 2D Monte Carlo particle-based simulations were used to show that FIBMOS devices exhibit enhanced current drive ...

  16. Systematic uncertainties on Monte Carlo simulation of lead based ADS

    International Nuclear Information System (INIS)

    Embid, M.; Fernandez, R.; Garcia-Sanz, J.M.; Gonzalez, E.

    1999-01-01

    Computer simulations of the neutronic behaviour of the ADS systems foreseen for actinide and fission product transmutation are affected by many sources of systematic uncertainty, arising both from the nuclear data and from the methodology selected when applying the codes. Several actual ADS Monte Carlo simulations are presented, comparing different options for both the data and the methodology and evaluating the relevance of the different uncertainties. (author)

  17. Gas generation matrix depletion quality assurance project plan

    International Nuclear Information System (INIS)

    1998-01-01

    The Los Alamos National Laboratory (LANL) is to provide the necessary expertise, experience, equipment and instrumentation, and management structure to: Conduct the matrix depletion experiments using simulated waste for quantifying matrix depletion effects; and Conduct experiments on 60 cylinders containing simulated TRU waste to determine the effects of matrix depletion on gas generation for transportation. All work for the Gas Generation Matrix Depletion (GGMD) experiment is performed according to the quality objectives established in the test plan and under this Quality Assurance Project Plan (QAPjP)

  18. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification.
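
    A bare-bones Python sketch of the Monte Carlo randomization test idea for a two-arm comparison: re-generate allocations, recompute the test statistic, and read the p-value off the randomization distribution. Here the reference set fixes the group sizes, which is only one of the designs (complete randomization, permuted blocks, biased coins) the paper covers, and the data are toy numbers.

      import numpy as np

      rng = np.random.default_rng(7)
      y = np.array([3.1, 4.0, 2.7, 5.2, 4.8, 3.9, 5.5, 2.9])  # toy outcomes
      t = np.array([0, 1, 0, 1, 1, 0, 1, 0])                  # observed assignment

      def stat(y, t):
          return y[t == 1].mean() - y[t == 0].mean()

      obs = stat(y, t)
      null = [stat(y, rng.permutation(t)) for _ in range(10_000)]
      p_value = np.mean(np.abs(null) >= abs(obs))
      print(p_value)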

  19. Suppression of the initial transient in Monte Carlo criticality simulations

    International Nuclear Information System (INIS)

    Richet, Y.

    2006-12-01

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can strongly bias the k-effective estimate, defined as the mean of the k-effectives computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on the characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimate of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Eventually, the best methodologies observed in these tests are selected and shown to improve industrial Monte Carlo criticality calculations. (author)
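
    The Brownian-bridge tests of the thesis are not reproduced here; as a stand-in for the same task, the following Python sketch applies a simple MSER-style truncation heuristic to a cycle k-effective sequence, discarding the initial cycles so as to minimize the marginal standard error of the remainder.

      import numpy as np

      def truncation_point(k_cycles):
          n = len(k_cycles)
          best_d, best_mser = 0, np.inf
          for d in range(n // 2):                      # search the first half only
              tail = k_cycles[d:]
              mser = tail.var() / len(tail)            # MSER criterion
              if mser < best_mser:
                  best_d, best_mser = d, mser
          return best_d

      # Synthetic sequence: decaying initial transient plus stationary noise.
      rng = np.random.default_rng(3)
      k = 1.0 + 0.05 * np.exp(-np.arange(500) / 50) + rng.normal(0, 0.005, 500)
      d = truncation_point(k)
      print(d, k[d:].mean())                           # k-effective after truncation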

  20. Hsp90 depletion goes wild

    OpenAIRE

    Siegal, Mark L; Masel, Joanna

    2012-01-01

    Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...

  1. A residual Monte Carlo method for discrete thermal radiative diffusion

    International Nuclear Information System (INIS)

    Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.

    2003-01-01

    Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems

  2. Regret causes ego-depletion and finding benefits in the regrettable events alleviates ego-depletion.

    Science.gov (United States)

    Gao, Hongmei; Zhang, Yan; Wang, Fang; Xu, Yan; Hong, Ying-Yi; Jiang, Jiang

    2014-01-01

    This study tested the hypotheses that experiencing regret would result in ego-depletion, while finding benefits (i.e., "silver linings") in the regret-eliciting events counteracted the ego-depletion effect. Using a modified gambling paradigm (Experiments 1, 2, and 4) and a retrospective method (Experiments 3 and 5), five experiments were conducted to induce regret. Results revealed that experiencing regret undermined performance on subsequent tasks, including a paper-and-pencil calculation task (Experiment 1), a Stroop task (Experiment 2), and a mental arithmetic task (Experiment 3). Furthermore, finding benefits in the regret-eliciting events improved subsequent performance (Experiments 4 and 5), and this improvement was mediated by participants' perceived vitality (Experiment 4). This study extended the depletion model of self-regulation by considering emotions with self-conscious components (in our case, regret). Moreover, it provided a comprehensive understanding of how people felt and performed after experiencing regret and after finding benefits in the events that caused the regret.

  3. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  4. Is gas in the Orion nebula depleted?

    International Nuclear Information System (INIS)

    Aiello, S.; Guidi, I.

    1978-01-01

    Depletion of heavy elements has been recognized to be important in the understanding of the chemical composition of the interstellar medium. This problem is also relevant to the study of H II regions. In this paper the gaseous depletion in the physical conditions of the Orion nebula is investigated. The authors reach the conclusion that very probably no depletion of heavy elements, due to sticking on dust grains, took place during the lifetime of the Orion nebula. (Auth.)

  5. Mechanism of potassium depletion during chronic metabolic acidosis in the rat

    International Nuclear Information System (INIS)

    Scandling, J.D.; Ornt, D.B.

    1987-01-01

    Pair-fed rats on a normal K diet were given either 1.5% NH4Cl or water for 4 days. The acid-fed animals developed metabolic acidosis, negative K balance, and K depletion. Urinary Na excretion and urinary flow were not different between the groups beyond the first day. After the 4 days, isolated kidneys from animals in each of these groups were perfused at normal pH and bicarbonate concentrations. Urinary K excretion was similar between the groups despite the potassium depletion in the acid-fed animals. In contrast, isolated kidneys from animals with comparable K depletion induced by dietary K restriction readily conserved K. Sodium excretion and urinary flow were similar among the three groups of isolated kidneys. Plasma aldosterone concentrations were greater in the acid-fed rats after the 4 days of NH4Cl ingestion than in the control animals. Adrenalectomized rats were treated with either normal (4 μg/day) or high (22 μg/day) aldosterone replacement while ingesting NH4Cl for 4 days. Only in the presence of high aldosterone replacement did the acid-fed adrenalectomized animals develop K depletion. The authors conclude that chronic metabolic acidosis stimulates aldosterone secretion, and that aldosterone maintains the inappropriately high urinary potassium excretion and K depletion seen in this acid-base disorder

  6. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  7. Nature gives us strength: exposure to nature counteracts ego-depletion.

    Science.gov (United States)

    Chow, Jason T; Lau, Shun

    2015-01-01

    Previous research has rarely investigated the role of the physical environment in counteracting ego-depletion. In the present research, we hypothesized that exposure to natural environments counteracts ego-depletion. Three experiments were conducted to test this hypothesis. In Experiment 1, initially depleted participants who viewed pictures of nature scenes showed greater persistence on a subsequent anagram task than those who were given a rest period. Experiment 2 expanded upon this finding by showing that a natural environment enhanced logical reasoning performance after an ego-depleting task. Experiment 3 adopted a two (depletion vs. no depletion) by two (nature exposure vs. urban exposure) factorial design. We found that nature exposure moderated the effect of depletion on anagram task performance. Taken together, the present studies offer a viable and novel strategy to mitigate the negative impacts of ego-depletion.

  8. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach to the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. The variations of both the skin tissue structure and the major chromophores are taken into account for the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. The results of the imitation of human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  9. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    Science.gov (United States)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

    In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete shielding and, where appropriate, the lead shielding of the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical estimates with the Monte Carlo ones shows that the results obtained using the albedo coefficient from the ICRP document more closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.

  10. Monte Carlo application based on GEANT4 toolkit to simulate a laser–plasma electron beam line for radiobiological studies

    Energy Technology Data Exchange (ETDEWEB)

    Lamia, D., E-mail: debora.lamia@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Russo, G., E-mail: giorgio.russo@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Casarino, C.; Gagliano, L.; Candiano, G.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Labate, L. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Baffigi, F.; Fulgentini, L.; Giulietti, A.; Koester, P.; Palla, D. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); Gizzi, L.A. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Gilardi, M.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR, Segrate (Italy); University of Milano-Bicocca, Milano (Italy)

    2015-06-21

    We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage dose depth curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of Geant4 application comparing experimental data with the simulated ones. • Dosimetric characterization of the acceleration system.

  11. Lithium depletion and rotation in main-sequence stars

    International Nuclear Information System (INIS)

    Balachandran, S.

    1990-01-01

    Lithium abundances were measured in nearly 200 old disk-population F stars to examine the effects of rotational braking on the depletion of Li. The sample was selected to be slightly evolved off the main sequence so that the stars have completed all the Li depletion they will undergo on the main sequence. A large scatter in Li abundances in the late F stars is found, indicating that the Li depletion is not related to age and spectral type alone. Conventional depletion mechanisms like convective overshoot and microscopic diffusion are unable to explain Li depletion in F stars with thin convective envelopes and are doubly taxed to explain such a scatter. No correlation is found between Li abundance and the present projected rotational velocity and some of the most rapid rotators are undepleted, ruling out meridional circulation as the cause of Li depletion. There is a somewhat larger spread in Li abundances in the spun-down late F stars compared to the early F stars which should remain rotationally unaltered on the main sequence. 85 refs

  12. Algorithm simulating the atom displacement processes induced by the gamma rays on the base of Monte Carlo method

    International Nuclear Information System (INIS)

    Cruz, C. M.; Pinera, I; Abreu, Y.; Leyva, A.

    2007-01-01

    The present work concerns the implementation of a Monte Carlo based calculation algorithm describing the occurrence of atom displacements induced by gamma radiation interactions in a given target material. The atom displacement processes were considered only on the basis of single elastic scattering interactions between fast secondary electrons and matrix atoms, which are ejected from their crystalline sites at recoil energies higher than a given threshold energy. The secondary electron transport was described using typical approaches in this field, in which consecutive small-angle scattering and very-low-energy-transfer events are treated as continuous, quasi-classical changes of the electron state along a given path length delimited by two discrete large-angle, large-energy-loss events occurring at random. A limiting scattering angle was introduced and calculated according to the Moliere-Bethe-Goudsmit-Saunderson theory of electron multiple scattering, which allows single scattering processes of the secondary electrons to be separated from multiple ones, whereby a modified McKinley-Feshbach electron elastic scattering cross section arises. This distribution was statistically sampled and simulated in the framework of the Monte Carlo method to generate discrete single electron scattering events, particularly those leading to atom displacements. The possibility of adding this algorithm to existing open Monte Carlo code systems is analyzed, in order to improve their capabilities. (Author)

  13. Application of Monte Carlo method to solving boundary value problem of differential equations

    International Nuclear Information System (INIS)

    Zuo Yinghong; Wang Jianguo

    2012-01-01

    This paper introduces the foundations of the Monte Carlo method and how to generate random numbers. Based on the basic idea of the Monte Carlo method and the finite difference method, a stochastic model for solving boundary value problems of differential equations is built. To investigate the application of the Monte Carlo method to solving boundary value problems of differential equations, the model is used to solve Laplace's equation with a boundary condition of the first kind and the unsteady heat transfer equation with initial values and boundary conditions. The results show that boundary value problems of differential equations can be effectively solved with the Monte Carlo method, and that differential equations with initial conditions can also be calculated by using a stochastic probability model based on the time-domain finite difference equations. Both the simulation results and theoretical analyses show that the errors of the numerical results decrease as the number of simulation particles is increased. (authors)
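
    The record above describes the classic random-walk solution of elliptic boundary value problems. As a minimal illustration of the idea (not the authors' code; grid size, walk count and boundary data are arbitrary choices), the sketch below estimates the solution of Laplace's equation at one interior point of a square grid with Dirichlet data by averaging the boundary values reached by simple random walks:

```python
import random

def laplace_walk(ix, iy, n, boundary, walks=5000):
    """Estimate u(ix, iy) for Laplace's equation on an (n+1) x (n+1) grid
    with Dirichlet data boundary(i, j), by averaging the boundary values
    reached by simple random walks started at the interior point."""
    total = 0.0
    for _ in range(walks):
        i, j = ix, iy
        while 0 < i < n and 0 < j < n:             # walk until the boundary is hit
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j)                    # tally the boundary value
    return total / walks

# u = 1 on the top edge, 0 elsewhere; by symmetry u is about 0.25 at the centre.
bc = lambda i, j: 1.0 if j == 40 else 0.0
print(laplace_walk(20, 20, 40, bc))
```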

  14. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r² singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering

  15. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of the collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  16. Department of Energy depleted uranium recycle

    International Nuclear Information System (INIS)

    Kosinski, F.E.; Butturini, W.G.; Kurtz, J.J.

    1994-01-01

    With its strategic supply of depleted uranium, the Department of Energy is studying reuse of the material in nuclear radiation shields, military hardware, and commercial applications. The study is expected to warrant a more detailed uranium recycle plan which would include consideration of a demonstration program and a program implementation decision. Such a program, if implemented, would become the largest nuclear material recycle program in the history of the Department of Energy. The bulk of the current inventory of depleted uranium is stored in 14-ton cylinders in the form of solid uranium hexafluoride (UF₆). The radioactive ²³⁵U content has been reduced to a concentration of 0.2% to 0.4%. Present estimates indicate there are about 55,000 UF₆-filled cylinders in inventory and planned operations will provide another 2,500 cylinders of depleted uranium each year. The United States government, under the auspices of the Department of Energy, considers the depleted uranium a highly-refined strategic resource of significant value. A possible utilization of a large portion of the depleted uranium inventory is as radiation shielding for spent reactor fuels and high-level radioactive waste. To this end, the Department of Energy study to-date has included a preliminary technical review to ascertain DOE chemical forms useful for commercial products. The presentation summarized the information including preliminary cost estimates. The status of commercial uranium processing is discussed. With a shrinking market, the number of chemical conversion and fabrication plants is reduced; however, the commercial capability does exist for chemical conversion of the UF₆ to the metal form and for the fabrication of uranium radiation shields and other uranium products. Department of Energy facilities no longer possess a capability for depleted uranium chemical conversion.

  17. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile ²³⁵U may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  18. Depleted UF6 programmatic environmental impact statement

    International Nuclear Information System (INIS)

    1997-01-01

    The US Department of Energy has developed a program for long-term management and use of depleted uranium hexafluoride, a product of the uranium enrichment process. As part of this effort, DOE is preparing a Programmatic Environmental Impact Statement (PEIS) for the depleted UF₆ management program. This report duplicates the information available at the web site (http://www.ead.anl.gov/web/newduf6) set up as a repository for the PEIS. Options for the web site include: reviewing recent additions or changes to the web site; learning more about depleted UF₆ and the PEIS; browsing the PEIS and related documents, or submitting official comments on the PEIS; downloading all or part of the PEIS documents; and adding or deleting one's name from the depleted UF₆ mailing list

  19. Radiosensitization of CHO cells by the combination of glutathione depletion and low concentrations of oxygen: The effect of different levels of GSH depletion

    International Nuclear Information System (INIS)

    Clark, E.P.; Epp, E.R.; Zachgo, E.A.; Biaglow, J.E.

    1984-01-01

    Recently, the authors have examined the effect of GSH depletion by BSO on CHO cells equilibrated with oxygen at various concentrations (0.05-4.0%) and irradiated with 50 kVp x-rays. This is of interest because of the uncertain radiosensitizing effect GSH depletion may have on cells equilibrated with low oxygen concentrations. GSH depletion (0.1 mM BSO for 24 hrs reduced [GSH] to ≅ 10% of control) enhanced the radiosensitizing action of moderate (0.4-4.0%) concentrations of oxygen, i.e., GSH depletion reduced the [O₂] necessary to achieve an equivalent ER by ≅ 2-3 fold. However, GSH depletion was much more effective as a radiosensitizer when cells were equilibrated with low (<0.4%) concentrations of oxygen, i.e., GSH depletion reduced the [O₂] necessary to achieve an equivalent ER by 8-10 fold. Furthermore, while the addition of exogenous 5 mM GSH restored the ER to that observed when GSH was not depleted, the intracellular [GSH] was not increased. The results of these studies carried out at different levels of GSH depletion are presented.

  20. Experiment on neutron transmission through depleted uranium layers and analysis with DOT 3.5 and MCNP

    International Nuclear Information System (INIS)

    Oka, Y.; Kodama, T.; Akiyama, M.; Hashikura, H.; Kondo, S.

    1987-01-01

    The reaction rates in multi-layers containing depleted uranium were measured by activation foils and micro-fission chambers. The analysis of the experiment was carried out using the multi-group transport calculation code DOT 3.5 and the continuous energy Monte Carlo code MCNP. The multi-group calculation overpredicted the low energy reaction rates in the DU layers, while the continuous energy calculation agreed well. The multi-group and continuous energy calculations were also compared for one-dimensional transmission through iron spheres. The results revealed overprediction by the multi-group calculation near the fast neutron source. The averaging of the resonance shapes in generating the multi-group cross sections made the minima of the resonance valleys higher than those of the pointwise cross sections. This increased the scattering of neutrons inside the spheres and caused the overprediction of the multi-group calculation.

  1. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in analysis of PK/PD data. Unobservable state variables are usually estimated with extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by maximum likelihood estimator. However, EKF is inadequate for nonlinear PK/PD models, and MLE is known to be biased downwards. A density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters in this paper, where a genetic algorithm is designed to search the optimal values of pharmacokinetic parameters. The performances of EKF and DMF are compared through simulations for discrete time and continuous time systems respectively, and it is found that the results based on DMF are more accurate than those given by EKF with respect to mean absolute error. Copyright © 2012 Elsevier Ltd. All rights reserved.
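
    The density-based Monte Carlo filter (DMF) of this record is a particle-filter variant; the details are in the paper. For orientation, the sketch below implements the generic bootstrap particle filter on a toy scalar state-space model (the model, parameter values and function names are illustrative, not the authors' DMF):

```python
import numpy as np

def bootstrap_filter(y, f, g_loglik, x0_sampler, n_particles=1000, rng=None):
    """Generic bootstrap particle filter: propagate particles through the
    state transition f, weight them with the observation log-likelihood
    g_loglik, estimate the filtered mean, and resample at every step."""
    rng = rng or np.random.default_rng()
    x = x0_sampler(n_particles, rng)
    means = []
    for yt in y:
        x = f(x, rng)                                     # propagate particles
        logw = g_loglik(yt, x)                            # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                       # filtered state estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return np.array(means)

# Toy model: x_t = 0.9 x_{t-1} + N(0, 0.5^2), y_t = x_t + N(0, 1).
f = lambda x, rng: 0.9 * x + rng.normal(0.0, 0.5, x.size)
g = lambda yt, x: -0.5 * (yt - x) ** 2      # Gaussian log-likelihood (up to a constant)
x0 = lambda n, rng: rng.normal(0.0, 1.0, n)
print(bootstrap_filter(np.array([0.3, 1.1, 0.8]), f, g, x0))
```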

  2. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  3. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    International Nuclear Information System (INIS)

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-01-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm

  4. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the

  5. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    International Nuclear Information System (INIS)

    Chow, James C L; Lam, Phil; Jaffray, David A

    2012-01-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc was made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  6. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    Science.gov (United States)

    Chow, James C. L.; Lam, Phil; Jaffray, David A.

    2012-02-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc was made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  7. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James C L; Lam, Phil; Jaffray, David A, E-mail: james.chow@rmp.uhn.on.ca [Department of Radiation Oncology, University of Toronto and Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada)

    2012-02-09

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc was made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  8. Development of a fuel depletion sensitivity calculation module for multi-cell problems in a deterministic reactor physics code system CBZ

    International Nuclear Information System (INIS)

    Chiba, Go; Kawamoto, Yosuke; Narabayashi, Tadashi

    2016-01-01

    Highlights: • A new functionality of fuel depletion sensitivity calculations is developed in a code system CBZ. • This is based on the generalized perturbation theory for fuel depletion problems. • The theory with a multi-layer depletion step division scheme is described. • Numerical techniques employed in actual implementation are also provided. - Abstract: A new functionality of fuel depletion sensitivity calculations is developed as one module in a deterministic reactor physics code system CBZ. This is based on the generalized perturbation theory for fuel depletion problems. The theory for fuel depletion problems with a multi-layer depletion step division scheme is described in detail. Numerical techniques employed in actual implementation are also provided. Verification calculations are carried out for a 3 × 3 multi-cell problem consisting of two different types of fuel pins. It is shown that the sensitivities of nuclide number densities after fuel depletion with respect to the nuclear data calculated by the new module agree well with reference sensitivities calculated by direct numerical differentiation. To demonstrate the usefulness of the new module, fuel depletion sensitivities in different multi-cell arrangements are compared and non-negligible differences are observed. Nuclear data-induced uncertainties of nuclide number densities obtained with the calculated sensitivities are also compared.
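
    The reference check mentioned at the end of the abstract, direct numerical differentiation, is easy to sketch. The toy below (not the CBZ module; the chain, rates and step length are invented) depletes a three-nuclide chain with a matrix exponential and differentiates the end-of-step number densities with respect to a capture cross section by central differences:

```python
import numpy as np
from scipy.linalg import expm

def end_of_step_densities(sigma_c, n0, phi=1e14, t=86400.0):
    """Toy depletion step: A -> B by capture (sigma_c in barns, flux phi in
    1/(cm^2 s)), B -> C by decay. Returns densities after t seconds."""
    lam_b = 1e-6                        # decay constant of B [1/s], illustrative
    r = sigma_c * 1e-24 * phi           # capture rate of A [1/s]
    A = np.array([[-r,    0.0,   0.0],
                  [ r,  -lam_b,  0.0],
                  [0.0,  lam_b,  0.0]])
    return expm(A * t) @ n0

def sensitivity(sigma_c, n0, rel=1e-4):
    """Relative sensitivity (dN/N)/(dsigma/sigma) by central differences,
    i.e. the 'direct numerical differentiation' reference of the abstract."""
    d = rel * sigma_c
    n_plus = end_of_step_densities(sigma_c + d, n0)
    n_minus = end_of_step_densities(sigma_c - d, n0)
    n = end_of_step_densities(sigma_c, n0)
    return (n_plus - n_minus) / (2 * d) * (sigma_c / n.clip(1e-30))

print(sensitivity(2.7, np.array([1.0, 0.0, 0.0])))
```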

  9. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  10. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Yakimov, Eugene B.

    2016-01-01

    An approach for the prediction of ⁶³Ni-based betavoltaic battery output parameters is described. It consists of a multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. This approach allows the betavoltaic battery parameters to be predicted and the converter design to be optimized for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. - Highlights: • A new procedure for predicting betavoltaic battery output parameters is described. • The depth dependence of beta particle energy deposition for Si and SiC is calculated. • Electron trajectories are assumed to start isotropically and uniformly in the simulation.
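
    The current-calculation step of the described chain, combining a Monte Carlo depth profile of the generation rate G(z) with a collection probability CP(z) derived from EBIC measurements, reduces to a single integral, J = q ∫ G(z) CP(z) dz. A numerical sketch with invented profiles (in the described approach G(z) comes from the simulation and CP(z) from the measurements):

```python
import numpy as np

# Illustrative depth grid and profiles (all values assumed, not from the paper).
z = np.linspace(0.0, 5e-4, 200)           # depth into the converter [cm]
g = 1e18 * np.exp(-z / 1e-4)              # e-h generation rate [1/(cm^3 s)], assumed
cp = 1.0 / (1.0 + (z / 2e-4) ** 2)        # collection probability, assumed

q = 1.602e-19                             # elementary charge [C]
integrand = g * cp
j_beta = q * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))  # trapezoid rule
print(f"beta-induced current density: {j_beta:.3e} A/cm^2")
```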

  11. Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional

    Directory of Open Access Journals (Sweden)

    Seshaditya A.

    2017-06-01

    Full Text Available We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT) the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (a concave function) but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

  12. Effects of drop testing on scale model shipping containers shielded with depleted uranium

    International Nuclear Information System (INIS)

    Butler, T.A.

    1980-02-01

    Three scale model shipping containers shielded with depleted uranium were dropped onto an essentially unyielding surface from various heights to determine their margins to failure. This report presents the results of a thorough posttest examination of the models to check for basic structural integrity, shielding integrity, and deformations. Because of unexpected behavior exhibited by the depleted uranium shielding, several tests were performed to further characterize its mechanical properties. Based on results of the investigations, recommendations are made for improved container design and for applying the results to full-scale containers. Even though the specimens incorporated specific design features, the results of this study are generally applicable to any container design using depleted uranium

  13. Monte Carlo determination of the spin-dependent potentials

    International Nuclear Information System (INIS)

    Campostrini, M.; Moriarty, K.J.M.; Rebbi, C.

    1987-05-01

    Calculation of the bound states of heavy quark systems by a Hamiltonian formulation based on an expansion of the interaction into inverse powers of the quark mass is discussed. The potentials for the spin-orbit and spin-spin coupling between quark and antiquark, which are responsible for the fine and hyperfine splittings in heavy quark spectroscopy, are expressed as expectation values of Wilson loop factors with suitable insertions of chromomagnetic or chromoelectric fields. A Monte Carlo simulation has been used to evaluate the expectation values and, from them, the spin-dependent potentials. The Monte Carlo calculation is reported to show a long-range, non-perturbative component in the interaction

  14. EGS-Ray, a program for the visualization of Monte-Carlo calculations in the radiation physics

    International Nuclear Information System (INIS)

    Kleinschmidt, C.

    2001-01-01

    A Windows program is introduced which allows relatively easy and interactive access to Monte Carlo techniques in clinical radiation physics. It also serves as a visualization tool for the methodology and the results of Monte Carlo simulations. The program requires only little effort to formulate and calculate a Monte Carlo problem. The Monte Carlo module of the program is based on the well-known EGS4/PRESTA code. The didactic features of the program are presented using several examples common in the routine of the clinical radiation physicist. (orig.)

  15. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures; CARMEN: Un sistema de planificacion Monte Carlo basado en programacion lineal a partir de aberturas directas

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-07-01

    The use of Monte Carlo (MC) has been shown to improve the accuracy of the dose calculation compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full MC simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)

  16. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    Science.gov (United States)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.

  17. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows random processes to be simulated by using series of pseudo-random numbers. It has become an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions into data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously, and soon Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
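
    Among the sampling methods alluded to above, the workhorse is inverse-transform sampling. A minimal example, drawing photon free-path lengths from the exponential attenuation law (the attenuation coefficient is an illustrative value):

```python
import math
import random

def free_path(mu):
    """Inverse-transform sampling of a photon free path: the CDF is
    F(s) = 1 - exp(-mu*s), so s = -ln(1 - u)/mu with u uniform on (0, 1)."""
    u = random.random()
    return -math.log(1.0 - u) / mu

mu = 0.07  # linear attenuation coefficient [1/cm], illustrative value
samples = [free_path(mu) for _ in range(100_000)]
print(sum(samples) / len(samples), "vs mean free path", 1.0 / mu)
```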

  18. Plasmonic Nanoprobes for Stimulated Emission Depletion Nanoscopy.

    Science.gov (United States)

    Cortés, Emiliano; Huidobro, Paloma A; Sinclair, Hugo G; Guldbrand, Stina; Peveler, William J; Davies, Timothy; Parrinello, Simona; Görlitz, Frederik; Dunsby, Chris; Neil, Mark A A; Sivan, Yonatan; Parkin, Ivan P; French, Paul M W; Maier, Stefan A

    2016-11-22

    Plasmonic nanoparticles influence the absorption and emission processes of nearby emitters due to local enhancements of the illuminating radiation and the photonic density of states. Here, we use the plasmon resonance of metal nanoparticles in order to enhance the stimulated depletion of excited molecules for super-resolved nanoscopy. We demonstrate stimulated emission depletion (STED) nanoscopy with gold nanorods with a long axis of only 26 nm and a width of 8 nm. These particles provide an enhancement of up to 50% of the resolution compared to fluorescent-only probes without plasmonic components irradiated with the same depletion power. The nanoparticle-assisted STED probes reported here represent a ∼2 × 10³ reduction in probe volume compared to previously used nanoparticles. Finally, we demonstrate their application toward plasmon-assisted STED cellular imaging at low-depletion powers, and we also discuss their current limitations.

  19. Ecological considerations of natural and depleted uranium

    International Nuclear Information System (INIS)

    Hanson, W.C.

    1980-01-01

    Depleted ²³⁸U is a major by-product of the nuclear fuel cycle for which increasing use is being made in counterweights, radiation shielding, and ordnance applications. This paper (1) summarizes the pertinent literature on natural and depleted uranium in the environment, (2) integrates results of a series of ecological studies conducted at Los Alamos Scientific Laboratory (LASL) in New Mexico where 70,000 kg of depleted and natural uranium has been expended to the environment over the past 34 years, and (3) synthesizes the information into an assessment of the ecological consequences of natural and depleted uranium released to the environment by various means. Results of studies of soil, plant, and animal communities exposed to this radiation and chemical environment over a third of a century provide a means of evaluating the behavior and effects of uranium in many contexts

  20. Adapting to an initial self-regulatory task cancels the ego depletion effect.

    Science.gov (United States)

    Dang, Junhua; Dewitte, Siegfried; Mao, Lihua; Xiao, Shanshan; Shi, Yucai

    2013-09-01

    The resource-based model of self-regulation provides a pessimistic view of self-regulation that people are destined to lose their self-control after having engaged in any act of self-regulation because these acts deplete the limited resource that people need for successful self-regulation. The cognitive control theory, however, offers an alternative explanation and suggests that the depletion effect reflects switch costs between different cognitive control processes recruited to deal with demanding tasks. This account implies that the depletion effect will not occur once people have had the opportunity to adapt to the self-regulatory task initially engaged in. Consistent with this idea, the present study showed that engaging in a demanding task led to performance deficits on a subsequent self-regulatory task (i.e. the depletion effect) only when the initial demanding task was relatively short but not when it was long enough for participants to adapt. Our results were unrelated to self-efficacy, mood, and motivation. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Optimization of mass of plastic scintillator film for flow-cell based tritium monitoring: a Monte Carlo study

    International Nuclear Information System (INIS)

    Roy, Arup Singha; Palani Selvam, T.; Raman, Anand; Raja, V.; Chaudhury, Probal

    2014-01-01

    Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chamber, proportional counter and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined to achieve maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose.

  2. Mantle depletion and metasomatism recorded in orthopyroxene in highly depleted peridotites

    DEFF Research Database (Denmark)

    Scott, James; Liu, Jingao; Pearson, D. Graham

    2016-01-01

    Although trace element concentrations in clinopyroxene serve as a useful tool for assessing the depletion and enrichment history of mantle peridotites, this is not applicable for peridotites in which the clinopyroxene component has been consumed (~ 25% partial melting). Orthopyroxene persists in ...

  3. PACER: a Monte Carlo time-dependent spectrum program for generating few-group diffusion-theory cross sections

    International Nuclear Information System (INIS)

    Candelore, N.R.; Kerrick, W.E.; Johnson, E.G.; Gast, R.C.; Dei, D.E.; Fields, D.L.

    1982-09-01

    The PACER Monte Carlo program for the CDC-7600 performs fixed source or eigenvalue calculations of spatially dependent neutron spectra in rod-lattice geometries. The neutron flux solution is used to produce few group, flux-weighted cross sections spatially averaged over edit regions. In general, PACER provides environmentally dependent flux-weighted few group microscopic cross sections which can be made time (depletion) dependent. These cross sections can be written in a standard POX output file format. To minimize computer storage requirements, PACER allows separate spectrum and edit options. PACER also calculates an explicit (n, 2n) cross section. The PACER geometry allows multiple rod arrays with axial detail. This report provides details of the neutron kinematics and the input required
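
    The flux-weighted collapse that PACER-type codes perform can be stated compactly: for each broad group g, σ_g = Σ_{i∈g} σ_i φ_i / Σ_{i∈g} φ_i, summed over the fine groups i belonging to g. A sketch with invented eight-group data (this is the generic collapse formula, not PACER's actual edit-region machinery):

```python
import numpy as np

def collapse(sigma_fine, flux_fine, group_map, n_broad):
    """Flux-weighted few-group collapse: for each broad group g,
    sigma_g = sum_i sigma_i * phi_i / sum_i phi_i over fine groups i in g."""
    sigma_broad = np.zeros(n_broad)
    for g in range(n_broad):
        sel = (group_map == g)
        sigma_broad[g] = np.sum(sigma_fine[sel] * flux_fine[sel]) / np.sum(flux_fine[sel])
    return sigma_broad

# Illustrative: collapse 8 fine groups to 2 broad groups (all values assumed).
sigma = np.array([1.2, 1.5, 2.0, 2.6, 5.0, 9.0, 20.0, 45.0])  # barns
phi   = np.array([0.3, 0.5, 0.8, 1.0, 0.9, 0.6, 0.4, 0.2])    # relative flux
gmap  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(collapse(sigma, phi, gmap, 2))
```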

  4. The study of necessity of verification-methods for Depleted Uranium

    International Nuclear Information System (INIS)

    Park, J. B.; Ahn, S. H.; Ahn, G. H.; Chung, S. T.; Shin, J. S.

    2006-01-01

    ROK has tried to establish a management system for depleted uranium since 2004, and has achieved some results in this field, including management software, management skills, and a list of companies using the nuclear material. However, studies of depleted uranium other than those of KAERI are insufficient. In terms of the SSAC, more study is needed on whether depleted uranium is really a dangerous material and on how depleted uranium could be diverted to a nuclear weapon. Depleted uranium has been controlled by item counting in the national system for small quantities of nuclear material, and there are no unique technical methods to verify depleted uranium by on-the-spot inspection rather than at laboratory scale. Therefore, I would like to suggest the necessity of verification methods for depleted uranium, and to present the methods used for verification of depleted uranium in the national system up to now

  5. Development of a consistent Monte Carlo-deterministic transport methodology based on the method of characteristics and MCNP5

    International Nuclear Information System (INIS)

    Karriem, Z.; Ivanov, K.; Zamonsky, O.

    2011-01-01

    This paper presents work that has been performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray tracing tool for the MOC, the verification of the ray tracing indexing scheme that was developed to represent the MCNP geometry in the MOC, and the verification of the prototype 2-D MOC flux solver. (author)

  6. Evaluation of nuclides with closely spaced values of depletion constants in transmutation chains

    International Nuclear Information System (INIS)

    Vukadin, Z.S.

    1977-01-01

    A new method of calculating nuclide concentrations in a transmutation chain is developed in this thesis. The method is based on originally derived recurrence formulas for the expansion series of the depletion functions and on originally obtained, non-singular Bateman coefficients. An explicit expression for the nuclide concentrations in a transmutation chain is obtained. This expression can be used as it stands for arbitrary values of the nuclide depletion constants. By computing hypothetical transmutation chains and the neptunium series, the method is compared with the Bateman analytical solution, with approximate solutions, and with the matrix exponential method. It turns out that the method presented in this thesis is suitable for calculating very long depletion chains even in the case of closely spaced and/or equal values of the nuclide depletion constants. The presented method is therefore of great practical applicability in a number of nuclear physics problems dealing with nuclide transmutations, from studies of stellar evolution up to the design of nuclear reactors (author)
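
    The matrix exponential method that the thesis compares against is straightforward to sketch, and it shares the property emphasized above: it stays well behaved when depletion constants are closely spaced or equal, exactly where the classical Bateman coefficients 1/(λ_j − λ_i) blow up. A minimal version for a linear chain (decay constants and initial densities are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def chain_densities(lams, n0, t):
    """Nuclide densities in a linear chain N1 -> N2 -> ... at time t via the
    matrix exponential; unlike the classical Bateman formula, this remains
    finite when two depletion constants are closely spaced or equal."""
    k = len(lams)
    A = np.zeros((k, k))
    for i, lam in enumerate(lams):
        A[i, i] = -lam                 # loss of nuclide i
        if i + 1 < k:
            A[i + 1, i] = lam          # feed of nuclide i+1
    return expm(A * t) @ n0

# Two nearly equal constants: the naive Bateman coefficients 1/(lam_j - lam_i)
# blow up here, while the matrix exponential is well behaved.
print(chain_densities([1.0e-3, 1.0000001e-3, 5.0e-4], [1.0, 0.0, 0.0], 2000.0))
```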

  7. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy

    International Nuclear Information System (INIS)

    Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn

    2008-01-01

    The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo code reports dose-to-tissue, as compared to the dose-to-water reported by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc

  8. Fully depleted back-illuminated p-channel CCD development

    Energy Technology Data Exchange (ETDEWEB)

    Bebek, Chris J.; Bercovitz, John H.; Groom, Donald E.; Holland, Stephen E.; Kadel, Richard W.; Karcher, Armin; Kolbe, William F.; Oluseyi, Hakeem M.; Palaio, Nicholas P.; Prasad, Val; Turko, Bojan T.; Wang, Guobin

    2003-07-08

    An overview of CCD development efforts at Lawrence Berkeley National Laboratory is presented. Operation of fully-depleted, back-illuminated CCD's fabricated on high resistivity silicon is described, along with results on the use of such CCD's at ground-based observatories. Radiation damage and point-spread function measurements are described, as well as discussion of CCD fabrication technologies.

  9. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    Science.gov (United States)

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.

  10. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and joints rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on the rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, its corresponding costs and the overall economics of open pit mines and rock quarries.
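
    At the core of a Kuz-Ram-based Monte Carlo fragmentation simulator is sampling fragment sizes from a Rosin-Rammler (Weibull) distribution whose mean size and uniformity index come from the Kuznetsov and Cunningham equations. A sketch of the sampling step only, with invented input values:

```python
from math import gamma

import numpy as np

def sample_fragments(x_mean, n_uniformity, size, rng=None):
    """Sample fragment sizes from the Rosin-Rammler distribution used by the
    Kuz-Ram model: R(x) = exp(-(x/x_c)^n), with the characteristic size x_c
    tied to the mean size by x_c = x_mean / gamma(1 + 1/n).
    Inverse-transform: x = x_c * (-ln u)^(1/n), u ~ U(0, 1)."""
    rng = rng or np.random.default_rng()
    x_c = x_mean / gamma(1.0 + 1.0 / n_uniformity)
    u = rng.random(size)
    return x_c * (-np.log(u)) ** (1.0 / n_uniformity)

# Illustrative: mean fragment size 30 cm, uniformity index 1.4 (assumed values;
# in Kuz-Ram these would come from the Kuznetsov and Cunningham equations).
frags = sample_fragments(30.0, 1.4, 100_000, np.random.default_rng(1))
print("P(x > 50 cm) ≈", np.mean(frags > 50.0))
```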

  11. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
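
    The shape of such a scheme can be sketched in a few lines: the history count s_i grows each iteration while the relaxation factor α_i = s_i / Σ_{j≤i} s_j shrinks correspondingly, so the accumulated power distribution progressively averages out the early, noisy Monte Carlo solutions. In the sketch below, mc_power and th_update are stand-ins for the actual transport and thermal-hydraulics solvers, and the growth law for s_i is an assumption, not the paper's exact prescription:

```python
import numpy as np

def coupled_iteration(mc_power, th_update, phi0, n_iter=10, s1=1000):
    """Stochastic-approximation coupling sketch: relax the power distribution
    with alpha_i = s_i / sum_{j<=i} s_j while the history count s_i grows."""
    phi = np.asarray(phi0, dtype=float)
    s_total = 0
    for i in range(1, n_iter + 1):
        s_i = s1 * i                          # growing number of histories
        s_total += s_i
        alpha = s_i / s_total                 # decreasing relaxation factor
        th_state = th_update(phi)             # T/H feedback from current power
        phi_new = mc_power(th_state, s_i)     # MC solve with s_i histories
        phi = phi + alpha * (phi_new - phi)   # relaxed update
    return phi

# Toy stand-ins: a fixed "true" power shape plus 1/sqrt(s) statistical noise.
rng = np.random.default_rng(0)
th = lambda p: p
mc = lambda t, s: np.array([1.1, 0.9]) + rng.normal(0.0, 1.0 / np.sqrt(s), 2)
print(coupled_iteration(mc, th, [1.0, 1.0]))
```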

  12. Optimized iteration in coupled Monte-Carlo - Thermal-hydraulics calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Dufek, J.

    2013-01-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal. (authors)

  13. Profit Forecast Model Using Monte Carlo Simulation in Excel

    Directory of Open Access Journals (Sweden)

    Petru BALOGH

    2014-01-01

    Full Text Available Profit forecasting is very important for any company. The purpose of this study is to provide a method to estimate the profit and the probability of obtaining the expected profit. Monte Carlo methods are stochastic techniques, meaning they are based on the use of random numbers and probability statistics to investigate problems. Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities with which they will occur for any choice of action. Our example of Monte Carlo simulation in Excel will be a simplified profit forecast model. Each step of the analysis is described in detail. The input data for the case presented, namely the number of leads per month, the percentage of leads that result in sales, the cost of a single lead, the profit per sale and the fixed cost, allow the profit and the associated probabilities of achieving it to be obtained.
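
    A Python analogue of the simplified Excel model is easy to write; the distributions and input values below (leads per month, conversion rate, cost per lead, profit per sale, fixed cost) are invented for illustration, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 100_000

# Illustrative assumptions mirroring the article's five inputs (values invented):
leads = rng.poisson(1200, n_runs)              # leads per month
conversion = rng.uniform(0.02, 0.05, n_runs)   # share of leads that result in sales
cost_per_lead, profit_per_sale, fixed_cost = 2.0, 150.0, 4000.0

sales = rng.binomial(leads, conversion)        # sales realised in each trial
profit = sales * profit_per_sale - leads * cost_per_lead - fixed_cost

print(f"expected profit: {profit.mean():,.0f}")
print(f"P(profit > 0) = {np.mean(profit > 0):.2%}")
```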

  14. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink matrix.
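
    The flavour of such an estimator is easy to sketch. The version below is a generic end-point variant (not necessarily the exact algorithm analyzed in the record, and with a simplified treatment of dangling pages): each short walk follows a random out-link with probability c and stops otherwise, and the fraction of walks ending at a page estimates its PageRank.

```python
import random
from collections import Counter

def mc_pagerank(links, c=0.85, runs_per_page=200, rng=random):
    """Monte Carlo end-point PageRank: start several short walks from every
    page; each step continues along a random out-link with probability c and
    stops otherwise (a page without out-links also stops the walk). The
    fraction of walks ending at a page estimates its PageRank."""
    ends = Counter()
    total = 0
    for start in links:
        for _ in range(runs_per_page):
            node = start
            while rng.random() < c and links[node]:
                node = rng.choice(links[node])
            ends[node] += 1
            total += 1
    return {p: ends[p] / total for p in links}

# Tiny example graph (adjacency lists of outgoing hyperlinks).
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(mc_pagerank(web))
```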

  15. Ego Depletion Does Not Interfere With Working Memory Performance.

    Science.gov (United States)

    Singh, Ranjit K; Göritz, Anja S

    2018-01-01

    Ego depletion happens if exerting self-control reduces a person's capacity to subsequently control themselves. Previous research has suggested that ego depletion not only interferes with subsequent self-control but also with working memory. However, recent meta-analytical evidence casts doubt onto this. The present study tackles the question if ego depletion does interfere with working memory performance. We induced ego depletion in two ways: using an e-crossing task and using a Stroop task. We then measured working memory performance using the letter-number sequencing task. There was no evidence of ego depletion interfering with working memory performance. Several aspects of our study render this null finding highly robust. We had a large and heterogeneous sample of N = 1,385, which provided sufficient power. We deployed established depletion tasks from two task families (e-crossing task and Stroop), thus making it less likely that the null finding is due to a specific depletion paradigm. We derived several performance scores from the working memory task and ran different analyses to maximize the chances of finding an effect. Lastly, we controlled for two potential moderators, the implicit theories about willpower and dispositional self-control capacity, to ensure that a possible effect on working memory is not obscured by an interaction effect. In sum, this experiment strengthens the position that ego depletion works but does not affect working memory performance.

  16. The 2003 Update of the ASPO Oil and Gas Depletion Model

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Colin; Sivertsson, Anders [Uppsala Univ. (Sweden). Hydrocarbon Depletion Study Group

    2003-07-01

    What we can term the ASPO Oil and Gas Depletion Model has developed over many years, based on an evolving knowledge of the resource base, culled from many sources, and evolving ideas about how to model depletion. It is certain that the estimates and forecasts are incorrect. The question is: by how much? The model recognises so-called Regular Oil, which excludes the following categories: Oil from coal and shale; Bitumen and synthetics derived therefrom; Extra Heavy Oil (<10 deg API); Heavy Oil (10-17 deg API); Deepwater Oil (>500 m); Polar Oil; Liquids from gas fields and gas plants. It has provided most oil to date and will dominate all supply far into the future. Its depletion therefore determines the date of peak. The evidence suggests that about 896 Gb (billion barrels) had been produced to end 2002; about 871 Gb remain to produce from known fields and about 113 Gb is expected to be produced from new fields. It is convenient to set a cut-off of, say, 2075 for such production, to avoid having to worry about the tail end that can drag on for a long time. A simple depletion model assumes that production declines at the current Depletion Rate (annual production as a percentage of future production) or at the Midpoint Rate in countries that have not yet reached Midpoint (namely half the total), as sketched below. The five main Middle East producers, which hold about half of what remains, are assumed to exercise a swing role, making up the difference between world demand and what the other countries can supply. The base case scenario assumes that consumption will be on average flat until 2010 because of recession, and that the Middle East swing role will end then, as in practice those countries will no longer have the capacity to discharge it. Whether the Iraq war results in extending or shortening the swing role remains to be seen. Adding the contributions of the other categories of oil and gas liquids gives an overall peak in 2010. Gas depletes differently, being more influenced by
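
    The simple depletion model quoted above, annual production equal to a constant depletion rate times the oil remaining to produce, yields a geometric decline and takes only a few lines to sketch (the 871 Gb figure is taken from the text; the 2.5% rate is an assumed value):

```python
def depletion_profile(reserves, rate, years):
    """Simple depletion model from the text: each year's production is a fixed
    fraction (the 'depletion rate') of the oil remaining to produce, so output
    declines geometrically."""
    out = []
    remaining = reserves
    for _ in range(years):
        q = rate * remaining        # annual production [Gb]
        remaining -= q
        out.append(q)
    return out

# Illustrative: 871 Gb remaining in known fields at an assumed 2.5% depletion rate.
profile = depletion_profile(871.0, 0.025, 5)
print([round(q, 1) for q in profile])
```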

  18. 26 CFR 1.611-1 - Allowance of deduction for depletion.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Allowance of deduction for depletion. 1.611-1... depletion. (a) Depletion of mines, oil and gas wells, other natural deposits, and timber—(1) In general... mines, oil and gas wells, other natural deposits, and timber, a reasonable allowance for depletion. In...

  19. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering]; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.]

    2017-04-15

    This paper presents the radiation shielding model of a typical PWR (CNPP-II) at Chashma, Pakistan. The model was developed using the Monte Carlo N-Particle code [2], equipped with ENDF/B-VI continuous energy cross section libraries. It was applied to calculate the neutron and gamma flux and dose rates in the radial direction at the core mid-plane. The simulated results were compared with the reference results of the Shanghai Nuclear Engineering Research and Design Institute (SNERDI).

  20. Temperature dependent simulation of diamond depleted Schottky PIN diodes

    International Nuclear Information System (INIS)

    Hathwar, Raghuraj; Dutta, Maitreya; Chowdhury, Srabanti; Goodnick, Stephen M.; Koeck, Franz A. M.; Nemanich, Robert J.

    2016-01-01

    Diamond is considered as an ideal material for high field and high power devices due to its high breakdown field, high lightly doped carrier mobility, and high thermal conductivity. The modeling and simulation of diamond devices are therefore important to predict the performances of diamond based devices. In this context, we use Silvaco ® Atlas, a drift-diffusion based commercial software, to model diamond based power devices. The models used in Atlas were modified to account for both variable range and nearest neighbor hopping transport in the impurity bands associated with high activation energies for boron doped and phosphorus doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high power rectifying applications due to the low turn-on voltage enabling high forward current densities. Three dimensional simulations of the depleted Schottky PIN diamond devices were performed and the results are verified with experimental data at different operating temperatures

  1. Temperature dependent simulation of diamond depleted Schottky PIN diodes

    Science.gov (United States)

    Hathwar, Raghuraj; Dutta, Maitreya; Koeck, Franz A. M.; Nemanich, Robert J.; Chowdhury, Srabanti; Goodnick, Stephen M.

    2016-06-01

    Diamond is considered as an ideal material for high field and high power devices due to its high breakdown field, high lightly doped carrier mobility, and high thermal conductivity. The modeling and simulation of diamond devices are therefore important to predict the performances of diamond based devices. In this context, we use Silvaco® Atlas, a drift-diffusion based commercial software, to model diamond based power devices. The models used in Atlas were modified to account for both variable range and nearest neighbor hopping transport in the impurity bands associated with high activation energies for boron doped and phosphorus doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high power rectifying applications due to the low turn-on voltage enabling high forward current densities. Three dimensional simulations of the depleted Schottky PIN diamond devices were performed and the results are verified with experimental data at different operating temperatures

  2. Temperature dependent simulation of diamond depleted Schottky PIN diodes

    Energy Technology Data Exchange (ETDEWEB)

    Hathwar, Raghuraj; Dutta, Maitreya; Chowdhury, Srabanti; Goodnick, Stephen M. [Department of Electrical Engineering, Arizona State University, Tempe, Arizona 85287-8806 (United States)]; Koeck, Franz A. M.; Nemanich, Robert J. [Department of Physics, Arizona State University, Tempe, Arizona 85287-8806 (United States)]

    2016-06-14

    Diamond is considered as an ideal material for high field and high power devices due to its high breakdown field, high lightly doped carrier mobility, and high thermal conductivity. The modeling and simulation of diamond devices are therefore important to predict the performances of diamond based devices. In this context, we use Silvaco® Atlas, a drift-diffusion based commercial software, to model diamond based power devices. The models used in Atlas were modified to account for both variable range and nearest neighbor hopping transport in the impurity bands associated with high activation energies for boron doped and phosphorus doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high power rectifying applications due to the low turn-on voltage enabling high forward current densities. Three dimensional simulations of the depleted Schottky PIN diamond devices were performed and the results are verified with experimental data at different operating temperatures.
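
    The fitting step described here, matching a hopping-conduction model to resistivity measured over a range of temperatures, can be sketched with a standard least-squares fit. The following is a minimal illustration, not the authors' Silvaco model: the two-channel conductivity form (activated band conduction plus Mott variable-range hopping), the parameter values, and the synthetic data are all assumptions for demonstration.

        import numpy as np
        from scipy.optimize import curve_fit

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def conductivity(T, sigma_band, E_a, sigma_vrh, T0):
            """Two-channel toy model: activated band conduction plus Mott
            variable-range hopping, sigma = sigma_band*exp(-E_a/(k_B*T))
            + sigma_vrh*exp(-(T0/T)**(1/4))."""
            return (sigma_band * np.exp(-E_a / (K_B * T))
                    + sigma_vrh * np.exp(-(T0 / T) ** 0.25))

        # Synthetic "measured" resistivity vs. temperature (illustrative only)
        T = np.linspace(300.0, 700.0, 40)
        rho = 1.0 / conductivity(T, 5e3, 0.37, 5.0, 1e5)

        popt, _ = curve_fit(lambda T, *p: 1.0 / conductivity(T, *p), T, rho,
                            p0=[1e3, 0.3, 1.0, 5e4], maxfev=20000)
        print("fitted activation energy (eV):", popt[1])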

  3. The Abiotic Depletion Potential: Background, Updates, and Future

    Directory of Open Access Journals (Sweden)

    Lauran van Oers

    2016-03-01

    Full Text Available Depletion of abiotic resources is a much disputed impact category in life cycle assessment (LCA. The reason is that the problem can be defined in different ways. Furthermore, within a specified problem definition, many choices can still be made regarding which parameters to include in the characterization model and which data to use. This article gives an overview of the problem definition and the choices that have been made when defining the abiotic depletion potentials (ADPs for a characterization model for abiotic resource depletion in LCA. Updates of the ADPs since 2002 are also briefly discussed. Finally, some possible new developments of the impact category of abiotic resource depletion are suggested, such as redefining the depletion problem as a dilution problem. This means taking the reserves in the environment and the economy into account in the reserve parameter and using leakage from the economy, instead of extraction rate, as a dilution parameter.
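
    For reference, the characterization factor discussed in this abstract is conventionally defined as a resource's extraction rate divided by the square of its ultimate reserve, normalized to antimony as the reference resource. A minimal sketch of that ratio, with placeholder reserve and extraction figures rather than values from the article:

        def adp(extraction_rate, reserve, extraction_rate_sb, reserve_sb):
            """Abiotic depletion potential of resource i relative to antimony:
            ADP_i = (DR_i / R_i**2) / (DR_Sb / R_Sb**2), in kg Sb-eq per kg."""
            return (extraction_rate / reserve**2) / (extraction_rate_sb / reserve_sb**2)

        # Placeholder figures for illustration only (kg/yr and kg):
        print(adp(extraction_rate=2.0e9, reserve=5.0e13,
                  extraction_rate_sb=1.3e8, reserve_sb=4.6e12))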

  4. Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Moucka, Filip [Department of Chemistry, Virginia Commonwealth University, Richmond, Virginia 23221 (United States); Faculty of Science, J. E. Purkinje University, 400 96 Ústí nad Labem (Czech Republic)]; Bratko, Dusan, E-mail: dbratko@vcu.edu; Luzar, Alenka, E-mail: aluzar@vcu.edu [Department of Chemistry, Virginia Commonwealth University, Richmond, Virginia 23221 (United States)]

    2015-03-28

    Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.

  5. Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation

    International Nuclear Information System (INIS)

    Moucka, Filip; Bratko, Dusan; Luzar, Alenka

    2015-01-01

    Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores
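
    The grand canonical moves underlying simulations of this kind follow a standard Metropolis acceptance rule for particle insertion and deletion. A minimal single-component sketch of the insertion test (this is the textbook rule, not the authors' fractional-exchange algorithm, and all parameter values are illustrative):

        import math, random

        def accept_insertion(N, V, mu, dU, beta, Lambda):
            """Textbook GCMC insertion criterion:
            P_acc = min(1, V / (Lambda**3 * (N + 1)) * exp(beta * (mu - dU)))."""
            arg = V / (Lambda**3 * (N + 1)) * math.exp(beta * (mu - dU))
            return random.random() < min(1.0, arg)

        # Illustrative call: 100 particles in a box of volume 1000 sigma^3
        print(accept_insertion(N=100, V=1000.0, mu=-2.0, dU=-0.5, beta=1.0, Lambda=1.0))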

  6. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
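
    The event-based reorganization described above replaces per-history scalar loops with array operations over all active photons, which is exactly what a vector unit (or, today, NumPy) exploits. A minimal sketch of one free-flight step over 100,000 photons at once, where the geometry and attenuation coefficient are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        mu = 0.15            # linear attenuation coefficient (1/cm), illustrative
        n = 100_000          # one array slot per photon history (event-based layout)

        pos = np.zeros((n, 3))
        dirs = rng.normal(size=(n, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # isotropic directions

        # Sample every free-path length in one vector operation instead of
        # looping over histories one at a time
        step = -np.log(rng.random(n)) / mu
        pos += step[:, None] * dirs
        print("mean free path ~", step.mean(), "cm; expected", 1 / mu)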

  7. Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    2009-01-01

    This document reports the progress and accomplishments achieved in 2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. The project was a theoretical study exploring a novel approach to a persistent problem in Monte Carlo treatments of quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. The project was funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. Its primary objective was to test the feasibility of implementing this approach to solving the generic quantum many-body problem, which is one of the most important problems being addressed in theoretical physics today. Instead of traditional methods based on matrix diagonalization, the proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem'. The sign problem, which will be discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem, and is the principal reason that they have not been completely successful in the past. Here, we outline our research in the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method

  8. Hsp90 depletion goes wild

    Directory of Open Access Journals (Sweden)

    Siegal Mark L

    2012-02-01

    Full Text Available Abstract Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to revealing cryptic genetic variation. See research article http://www.biomedcentral.com/1471-2148/12/25

  9. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density, and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  10. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  11. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
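
    The telescoping identity at the heart of MLMC, E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], translates directly into code. A minimal sketch for a toy quantity; the coupled level sampler below is a stand-in, not the Markov chain or sequential Monte Carlo couplings the article considers:

        import random

        def sample_level(l):
            """Toy coupled sampler: returns (P_l, P_{l-1}) for one draw of the
            underlying randomness, so the correction P_l - P_{l-1} has low variance."""
            x = random.gauss(0.0, 1.0)
            fine = x + 2.0 ** (-l)                    # level-l biased approximation
            coarse = (x + 2.0 ** (-(l - 1))) if l > 0 else 0.0
            return fine, coarse

        def mlmc(L, n_per_level):
            """Telescoping estimator: E[P_L] ~ sum over levels of mean corrections."""
            total = 0.0
            for l in range(L + 1):
                n = n_per_level[l]
                total += sum(f - c for f, c in (sample_level(l) for _ in range(n))) / n
            return total

        print(mlmc(L=4, n_per_level=[4000, 2000, 1000, 500, 250]))  # ~2**-4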

  12. Experiences with the parallelisation of Monte Carlo problems

    International Nuclear Information System (INIS)

    Schmidt, F.; Dax, W.; Luger, M.

    1990-01-01

    Monte Carlo problems can be parallelized in a natural way. Parallelisation of production codes can therefore be performed quite easily, provided the codes are written in FORTRAN, can be transferred to the parallel machine, and the machine has a pseudo-random number generator available. The MORSE code is such a code. We have transferred it to the CRAY-2 and to the 32-processor version of the TX2, a binary-tree-structured parallel machine based on INTEL 80286 processors. We are able to reach efficiencies of up to 95% for realistic problems; thus the same throughput as on one processor of the CRAY-2 could be reached. First experiments on the INTEL i860-based TX3 indicate an additional gain of a factor of 100. This will permit the reconsideration of the Monte Carlo method both in nuclear engineering and as a general numerical tool. (author)

  13. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron history one by one, it requires very long computing times when enough neutrons are used for high calculation precision. Accordingly, methods that reduce the computing time are required. Monte Carlo codes are well suited to parallel calculation, since each neutron is simulated independently and parallel computation is therefore natural. The parallelization of Monte Carlo codes, however, has traditionally been done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA™, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is investigated
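
    Because each history is independent, a Monte Carlo run parallelizes by splitting histories across workers, whether those are CPUs or GPU threads. A minimal CPU sketch using Python's standard library; the dart-throwing kernel that estimates pi is just a stand-in for a particle history, and the batch sizes and worker count are arbitrary:

        import random
        from multiprocessing import Pool

        def run_histories(args):
            """Run a batch of independent 'histories'; here each history is a
            dart throw for estimating pi, standing in for a neutron history."""
            n, seed = args
            rng = random.Random(seed)
            hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
            return hits

        if __name__ == "__main__":
            batches = [(250_000, seed) for seed in range(8)]   # one batch per worker
            with Pool(8) as pool:
                hits = sum(pool.map(run_histories, batches))
            total = sum(n for n, _ in batches)
            print("pi ~", 4.0 * hits / total)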

  14. Accelerated SDS depletion from proteins by transmembrane electrophoresis: Impacts of Joule heating.

    Science.gov (United States)

    Unterlander, Nicole; Doucette, Alan Austin

    2018-02-08

    SDS plays a key role in proteomics workflows, including protein extraction, solubilization and mass-based separations (e.g. SDS-PAGE, GELFrEE). However, SDS interferes with mass spectrometry and so it must be removed prior to analysis. We recently introduced an electrophoretic platform, termed transmembrane electrophoresis (TME), enabling extensive depletion of SDS from proteins in solution with exceptional protein yields. However, our prior TME runs required 1 h to complete, being limited by Joule heating which causes protein aggregation at higher operating currents. Here, we demonstrate effective strategies to maintain lower TME sample temperatures, permitting accelerated SDS depletion. Among these strategies, the use of a magnetic stir bar to continuously agitate a model protein system (BSA) allows SDS to be depleted below 100 ppm (>98% removal) within 10 min of TME operations, while maintaining exceptional protein recovery (>95%). Moreover, these modifications allow TME to operate without any user intervention, improving throughput and robustness of the approach. Through fits of our time-course SDS depletion curves to an exponential model, we calculate SDS depletion half-lives as low as 1.2 min. This promising electrophoretic platform should provide proteomics researchers with an effective purification strategy to enable MS characterization of SDS-containing proteins.
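
    The exponential model mentioned here gives the depletion half-life directly as t_1/2 = ln 2 / k once the rate constant k is fitted. A minimal sketch with invented concentration data, chosen so the fit returns roughly the 1.2 min half-life quoted above; these are not the paper's measurements:

        import math
        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, c0, k):
            """First-order depletion: C(t) = C0 * exp(-k * t)."""
            return c0 * np.exp(-k * t)

        # Hypothetical SDS concentrations (ppm) over minutes of TME operation
        t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
        c = np.array([5000.0, 1600.0, 520.0, 170.0, 55.0, 18.0])

        (c0, k), _ = curve_fit(decay, t, c, p0=[5000.0, 0.5])
        print("half-life (min):", math.log(2) / k)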

  15. G4-STORK: A Geant4-based Monte Carlo reactor kinetics simulation code

    International Nuclear Information System (INIS)

    Russell, Liam; Buijs, Adriaan; Jonkmans, Guy

    2014-01-01

    Highlights: • G4-STORK is a new, time-dependent, Monte Carlo code for reactor physics applications. • G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. • G4-STORK was designed to simulate short-term fluctuations in reactor cores. • G4-STORK is well suited for simulating sub- and supercritical assemblies. • G4-STORK was verified through comparisons with DRAGON and MCNP. - Abstract: In this paper we introduce G4-STORK (Geant4 STOchastic Reactor Kinetics), a new, time-dependent, Monte Carlo particle tracking code for reactor physics applications. G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. The toolkit provides the fundamental physics models and particle tracking algorithms that track each particle in space and time. It is a framework for further development (e.g. for projects such as G4-STORK). G4-STORK derives reactor physics parameters (e.g. k_eff) from the continuous evolution of a population of neutrons in space and time in the given simulation geometry. In this paper we detail the major additions to the Geant4 toolkit that were necessary to create G4-STORK. These include a renormalization process that maintains a manageable number of neutrons in the simulation even in very sub- or supercritical systems, scoring processes (e.g. recording fission locations, total neutrons produced and lost, etc.) that allow G4-STORK to calculate the reactor physics parameters, and dynamic simulation geometries that can change over the course of the simulation to elicit reactor kinetics responses (e.g. fuel temperature reactivity feedback). The additions are verified through simple simulations and code-to-code comparisons with established reactor physics codes such as DRAGON and MCNP. Additionally, G4-STORK was developed to run a single simulation in parallel over many processors using MPI (Message Passing Interface) pipes
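
    The renormalization step mentioned above has to cap the simulated population without biasing the mean. One generic way to do that is to replace each neutron with a random number of copies whose expectation equals the desired/current population ratio; the sketch below is a standard splitting/roulette scheme, not necessarily G4-STORK's exact algorithm:

        import random

        def renormalize(neutrons, target):
            """Resample a neutron list to ~target size while preserving the
            expected population: each neutron yields floor(r) copies plus one
            more with probability frac(r), where r = target / len(neutrons)."""
            r = target / len(neutrons)
            survivors = []
            for n in neutrons:
                copies = int(r) + (1 if random.random() < r - int(r) else 0)
                survivors.extend([n] * copies)
            return survivors

        pop = list(range(250_000))              # supercritical growth, illustrative
        pop = renormalize(pop, target=100_000)
        print(len(pop))                         # ~100,000 on average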

  16. Depletion of selected polychlorodibenzodioxins and polychlorodibenzofurans in farmed trout exposed to contaminated feeds

    Energy Technology Data Exchange (ETDEWEB)

    Brambilla, G.; Dellatte, E.; Fochi, I.; Iacovella, N.; Domenico, A. di [Dept. Food Safety and Animal Health, Ist. Superiore di Sanita, Rome (Italy); Dept. Environment and Primary Health Care, Ist. Superiore di Sanita, Rome (Italy)]

    2004-09-15

    Farmed fish can bioaccumulate persistent toxic substances when fed on animal-based fat feeds. This fact has recently prompted a re-evaluation of the overall toxicological risk associated with contamination levels recorded in farmed vs. wild salmon. The bioaccumulation of polychlorinated dibenzodioxins (PCDDs) and dibenzofurans (PCDFs) in farmed trout has recently been described; nevertheless, little information is available about their depletion under controlled conditions. In this paper, the results of a 90-day depletion study in groups of trout exposed to three different levels of feed contamination for 30 days are reported. As a follow-up to a PCB depletion study, the present paper aims to give indications for risk management in fish farming practices, to prevent an unacceptable contamination of the produce intended for human consumption.

  17. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    Science.gov (United States)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
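
    In a grey two-moment scheme the evolved quantities are the energy density and momentum density, and the closure supplies the second moment (the radiation pressure tensor). Estimating that moment from a Monte Carlo sample reduces to weighted averages over packet directions; a minimal sketch in which the packet weights and directions are random placeholders:

        import numpy as np

        rng = np.random.default_rng(1)
        w = rng.random(10_000)              # packet energies (weights), placeholder
        d = rng.normal(size=(10_000, 3))    # packet propagation directions
        d /= np.linalg.norm(d, axis=1, keepdims=True)

        # Normalized second moment of the radiation field from the MC sample:
        # P_ij = sum_k w_k d_ki d_kj / sum_k w_k
        P = np.einsum('k,ki,kj->ij', w, d, d) / w.sum()
        print(np.trace(P))  # ~1 by construction; isotropy gives P_ij = delta_ij / 3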

  18. Effect of ultra high temperature ceramics as fuel cladding materials on the nuclear reactor performance by SERPENT Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Korkut, Turgay; Kara, Ayhan; Korkut, Hatun [Sinop Univ. (Turkey). Dept. of Nuclear Energy Engineering]

    2016-12-15

    Ultra High Temperature Ceramics (UHTCs) have low density and high melting points, which makes them useful materials in the nuclear industry, especially in reactor core design. Three UHTCs (silicon carbide, vanadium carbide, and zirconium carbide) were evaluated as nuclear fuel cladding materials. The SERPENT Monte Carlo code was used to model CANDU, PWR, and VVER type reactor cores and to calculate burnup parameters. Changes were observed in burnup and neutronic parameters (k_eff, neutron flux, absorption rate, fission rate, and the depletion of U-235, U-238, Xe-135, and Sm-149) with the use of these UHTCs. Results were compared to the conventional cladding material Zircaloy.

  19. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Y.; Song, J.; Zheng, H.; Sun, G.; Hao, L.; Long, P.; Hu, L.

    2013-01-01

    SuperMC is a Computer-Aided Design (CAD) based Monte Carlo (MC) program for the integrated simulation of nuclear systems, developed by the FDS Team (China), making use of a hybrid MC-deterministic method and advanced computer technologies. The design aim, architecture, and main methodology of SuperMC are presented in this paper. The handling of multi-physics processes and the use of advanced computer technologies such as automatic geometry modeling, intelligent data analysis and visualization, high-performance parallel computing, and cloud computing contribute to the efficiency of the code. SuperMC2.1, the latest version of the code for neutron, photon, and coupled neutron-photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model

  20. Glutathione depletion in tissues after administration of buthionine sulphoximine

    International Nuclear Information System (INIS)

    Minchinton, A.I.; Rojas, A.; Smith, A.; Soranson, J.A.; Shrieve, D.C.; Jones, N.R.; Bremner, J.C.

    1984-01-01

    Buthionine sulphoximine (BSO) an inhibitor of glutathione (GSH) biosynthesis, was administered to mice in single and repeated doses. The resultant pattern of GSH depletion was studied in liver, kidney, skeletal muscle and three types of murine tumor. Liver and kidney exhibited a rapid depletion of GSH. Muscle was depleted to a similar level, but at a slower rate after a single dose. All three tumors required repeated administration of BSO over several days to obtain a similar degree of depletion to that shown in the other tissues

  1. Improving B-cell depletion in systemic lupus erythematosus and rheumatoid arthritis.

    Science.gov (United States)

    Mota, Pedro; Reddy, Venkat; Isenberg, David

    2017-07-01

    Rituximab-based B-cell depletion (BCD) therapy is effective in refractory rheumatoid arthritis (RA) and although used to treat patients with refractory systemic lupus erythematosus (SLE) in routine clinical practice, rituximab failed to meet the primary endpoints in two large randomised controlled trials (RCTs) of non-renal (EXPLORER) and renal (LUNAR) SLE. Areas covered: We review how BCD could be improved to achieve better clinical responses in RA and SLE. Insights into the variability in clinical response to BCD in RA and SLE may help develop new therapeutic strategies. To this end, a literature search was performed using the following terms: rheumatoid arthritis, systemic erythematosus lupus, rituximab and B-cell depletion. Expert commentary: Poor trial design may have, at least partly, contributed to the apparent lack of response to BCD in the two RCTs of patients with SLE. Enhanced B-cell depletion and/or sequential therapy with belimumab may improve clinical response at least in some patients with SLE.

  2. Development of a Fully-Automated Monte Carlo Burnup Code Monteburns

    International Nuclear Information System (INIS)

    Poston, D.I.; Trellue, H.R.

    1999-01-01

    Several computer codes have been developed to perform nuclear burnup calculations over the past few decades. In addition, because of advances in computer technology, it recently has become more desirable to use Monte Carlo techniques for such problems. Monte Carlo techniques generally offer two distinct advantages over discrete ordinates methods: (1) the use of continuous energy cross sections and (2) the ability to model detailed, complex, three-dimensional (3-D) geometries. These advantages allow more accurate burnup results to be obtained, provided that the user possesses the required computing power (which is required for discrete ordinates methods as well). Several linkage codes have been written that combine a Monte Carlo N-particle transport code (such as MCNP™) with a radioactive decay and burnup code. This paper describes one such code that was written at Los Alamos National Laboratory: monteburns. Monteburns links MCNP with the isotope generation and depletion code ORIGEN2. The basis for the development of monteburns was the need for a fully automated code that could perform accurate burnup (and other) calculations for any 3-D system (accelerator-driven or a full reactor core). Before the initial development of monteburns, a list of desired attributes was made: the code should be fully automated (that is, after the input is set up, no further user interaction is required); it should allow for the irradiation of several materials concurrently (each material is evaluated collectively in MCNP and burned separately in ORIGEN2); it should allow the transfer of materials (shuffling) between regions in MCNP; it should allow any materials to be added or removed before, during, or after each step in an automated fashion; it should not require the user to provide input for ORIGEN2 and should have minimal MCNP input file requirements (other than a working MCNP deck); and it should be relatively easy to use
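
    The coupling that such linkage codes automate is, at its core, an alternation between a transport solve and a depletion solve. A schematic, runnable toy of that loop for a single nuclide; the flux model and all constants are invented, and real linkage codes call MCNP and ORIGEN2 where the stubs sit:

        import math

        def burnup_driver(n_u235, flux_per_density, sigma_f, steps, dt):
            """Toy transport/depletion coupling loop: alternate a (stubbed)
            flux calculation with an analytic depletion solve, in the spirit
            of an MCNP/ORIGEN2 linkage. Illustrative only."""
            history = [n_u235]
            for _ in range(steps):
                # 1) "Transport solve": stub in which flux scales with density
                flux = flux_per_density * n_u235          # stands in for an MCNP run
                # 2) Depletion solve: dN/dt = -sigma_f * flux * N over the step
                n_u235 *= math.exp(-sigma_f * flux * dt)  # stands in for ORIGEN2
                history.append(n_u235)
            return history

        # Hypothetical values: atoms/cm3, flux scaling, 580 b in cm2, 1-day steps
        print(burnup_driver(1.0e21, 1.0e-8, 5.8e-22, steps=5, dt=8.64e4))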

  3. Forest calcium depletion and biotic retention along a soil nitrogen gradient

    Science.gov (United States)

    Perakis, Steven S.; Sinkhorn, Emily R.; Catricala, Christina; Bullen, Thomas D.; Fitzpatrick, John A.; Hynicka, Justin D.; Cromack, Kermit

    2013-01-01

    High nitrogen (N) accumulation in terrestrial ecosystems can shift patterns of nutrient limitation and deficiency beyond N toward other nutrients, most notably phosphorus (P) and base cations (calcium [Ca], magnesium [Mg], and potassium [K]). We examined how naturally high N accumulation from a legacy of symbiotic N fixation shaped P and base cation cycling across a gradient of nine temperate conifer forests in the Oregon Coast Range. We were particularly interested in whether long-term legacies of symbiotic N fixation promoted coupled N and organic P accumulation in soils, and whether biotic demands by non-fixing vegetation could conserve ecosystem base cations as N accumulated. Total soil N (0–100 cm) pools increased nearly threefold across the N gradient, leading to increased nitrate leaching, declines in soil pH from 5.8 to 4.2, 10-fold declines in soil exchangeable Ca, Mg, and K, and increased mobilization of aluminum. These results suggest that long-term N enrichment had acidified soils and depleted much of the readily weatherable base cation pool. Soil organic P increased with both soil N and C across the gradient, but soil inorganic P, biomass P, and P leaching loss did not vary with N, implying that historic symbiotic N fixation promoted soil organic P accumulation and P sufficiency for non-fixers. Even though soil pools of Ca, Mg, and K all declined as soil N increased, only Ca declined in biomass pools, suggesting the emergence of Ca deficiency at high N. Biotic conservation and tight recycling of Ca increased in response to whole-ecosystem Ca depletion, as indicated by preferential accumulation of Ca in biomass and surface soil. Our findings support a hierarchical model of coupled N–Ca cycling under long-term soil N enrichment, whereby ecosystem-level N saturation and nitrate leaching deplete readily available soil Ca, stimulating biotic Ca conservation as overall supply diminishes. We conclude that a legacy of biological N fixation can increase N

  4. Ego depletion decreases trust in economic decision making

    Science.gov (United States)

    Ainsworth, Sarah E.; Baumeister, Roy F.; Vohs, Kathleen D.; Ariely, Dan

    2014-01-01

    Three experiments tested the effects of ego depletion on economic decision making. Participants completed a task either requiring self-control or not. Then participants learned about the trust game, in which senders are given an initial allocation of $10 to split between themselves and another person, the receiver. The receiver receives triple the amount given and can send any, all, or none of the tripled money back to the sender. Participants were assigned the role of the sender and decided how to split the initial allocation. Giving less money, and therefore not trusting the receiver, is the safe, less risky response. Participants who had exerted self-control and were depleted gave the receiver less money than those in the non-depletion condition (Experiment 1). This effect was replicated and moderated in two additional experiments. Depletion again led to lower amounts given (less trust), but primarily among participants who were told they would never meet the receiver (Experiment 2) or who were given no information about how similar they were to the receiver (Experiment 3). Amounts given did not differ for depleted and non-depleted participants who either expected to meet the receiver (Experiment 2) or were led to believe that they were very similar to the receiver (Experiment 3). Decreased trust among depleted participants was strongest among neurotics. These results imply that self-control facilitates behavioral trust, especially when no other cues signal decreased social risk in trusting, such as if an actual or possible relationship with the receiver were suggested. PMID:25013237

  5. Ly6G-mediated depletion of neutrophils is dependent on macrophages.

    Science.gov (United States)

    Bruhn, Kevin W; Dekitani, Ken; Nielsen, Travis B; Pantapalangkoor, Paul; Spellberg, Brad

    2016-01-01

    Antibody-mediated depletion of neutrophils is commonly used to study neutropenia. However, the mechanisms by which antibodies deplete neutrophils have not been well defined. We noticed that mice deficient in complement and macrophages had blunted neutrophil depletion in response to anti-Ly6G monoclonal antibody (MAb) treatment. In vitro, exposure of murine neutrophils to anti-Ly6G MAb in the presence of plasma did not result in significant depletion of cells, either in the presence or absence of complement. In vivo, anti-Ly6G-mediated neutrophil depletion was abrogated following macrophage depletion, but not complement depletion, indicating a requirement for macrophages to induce neutropenia by this method. These results inform the use and limitations of anti-Ly6G antibody as an experimental tool for depleting neutrophils in various immunological settings.

  6. Direct aperture optimization for IMRT using Monte Carlo generated beamlets

    International Nuclear Information System (INIS)

    Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl

    2006-01-01

    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air, etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 × 5.0 mm² beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the locations of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is ∼33% compared to fluence-based optimization methods
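
    Once the beamlet dose matrix is precomputed by Monte Carlo, evaluating a candidate aperture during optimization is just a matrix-vector product, dose = D·w. A minimal sketch in which the matrix contents and the aperture are random placeholders, not EGSnrc output:

        import numpy as np

        rng = np.random.default_rng(2)
        n_voxels, n_beamlets = 5000, 200

        # Precomputed beamlet dose matrix: D[v, b] is the dose to voxel v per
        # unit weight of beamlet b (random numbers as placeholders)
        D = rng.random((n_voxels, n_beamlets))

        # An aperture exposes a subset of beamlets; its dose is D @ weights
        weights = np.zeros(n_beamlets)
        weights[40:80] = 1.0              # beamlets open under the MLC aperture
        dose = D @ weights
        print(dose.shape, dose.mean())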

  7. Monte Carlo based electron treatment planning and cutout output factor calculations

    Science.gov (United States)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles, and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could provide a clinical time saving of up to 1 hour per patient.

  8. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  9. Issues in Stratospheric Ozone Depletion.

    Science.gov (United States)

    Lloyd, Steven Andrew

    Following the announcement of the discovery of the Antarctic ozone hole in 1985 there have arisen a multitude of questions pertaining to the nature and consequences of polar ozone depletion. This thesis addresses several of these specific questions, using both computer models of chemical kinetics and the Earth's radiation field as well as laboratory kinetic experiments. A coupled chemical kinetic-radiative numerical model was developed to assist in the analysis of in situ field measurements of several radical and neutral species in the polar and mid-latitude lower stratosphere. Modeling was used in the analysis of enhanced polar ClO, mid-latitude diurnal variation of ClO, and simultaneous measurements of OH, HO₂, H₂O and O₃. Most importantly, such modeling was instrumental in establishing the link between the observed ClO and BrO concentrations in the Antarctic polar vortex and the observed rate of ozone depletion. The principal medical concern of stratospheric ozone depletion is that ozone loss will lead to the enhancement of ground-level UV-B radiation. Global ozone climatology (40°S to 50°N latitude) was incorporated into a radiation field model to calculate the biologically accumulated dosage (BAD) of UV-B radiation, integrated over days, months, and years. The slope of the annual BAD as a function of latitude was found to correspond to epidemiological data for non-melanoma skin cancers for 30°-50°N. Various ozone loss scenarios were investigated. It was found that a small ozone loss in the tropics can provide as much additional biologically effective UV-B as a much larger ozone loss at higher latitudes. Also, for ozone depletions of > 5%, the BAD of UV-B increases exponentially with decreasing ozone levels. An important key player in determining whether polar ozone depletion can propagate into the populated mid-latitudes is chlorine nitrate, ClONO₂. As yet this molecule is only indirectly accounted for in computer models and field

  10. Direct Monte Carlo simulation of nanoscale mixed gas bearings

    Directory of Open Access Journals (Sweden)

    Kyaw Sett Myo

    2015-06-01

    Full Text Available The concept of sealed hard drives filled with a helium gas mixture has recently been suggested over current hard drives for achieving higher reliability and less position error. It is therefore important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo method. Based on direct simulation Monte Carlo simulations, the physical properties of these gas mixtures, such as mean free path and dynamic viscosity, are obtained and compared with those from theoretical models. It is observed that both results are comparable. Using these gas mixture properties, the bearing pressure distributions are calculated under different fractions of helium with conventional molecular gas lubrication models. The outcomes reveal that the molecular gas lubrication results agree relatively well with those of direct simulation Monte Carlo simulations, especially for pure air, helium, or argon gas cases. For gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from direct simulation Monte Carlo simulation.
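
    The theoretical values such comparisons use are typically the hard-sphere kinetic-theory expressions; for a single species, the mean free path is lambda = k_B*T / (sqrt(2)*pi*d^2*p). A quick check for pure helium, where the molecular diameter below is an approximate hard-sphere value:

        import math

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def mean_free_path(T, p, d):
            """Hard-sphere mean free path: lambda = k_B T / (sqrt(2) pi d^2 p)."""
            return K_B * T / (math.sqrt(2) * math.pi * d**2 * p)

        # Helium at ambient conditions; d ~ 2.2e-10 m is a rough hard-sphere value
        print(mean_free_path(T=298.0, p=101_325.0, d=2.2e-10))  # ~1.9e-7 m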

  11. Importance of energetic solar protons in ozone depletion

    Energy Technology Data Exchange (ETDEWEB)

    Stephenson, J.A.E.; Scourfield, M.W.J. [Natal Univ., Durban (South Africa). Space Physics Research Inst.]

    1991-07-11

    Chlorine-catalysed depletion of the stratospheric ozone layer has commanded considerable attention since 1985, when Farman et al. observed a decrease of 50% in the total column ozone over Antarctica in the austral spring. Here we examine the depletion of stratospheric ozone caused by the reaction of ozone with nitric oxide generated by energetic solar protons, associated with solar flares. During large solar flares in March 1989, satellite observations indicated that total column ozone was depleted by ≈9% over ≈20% of the total area between the South Pole and latitude 70°S. Chlorine-catalysed ozone depletion takes place over a much larger area, but our results indicate that the influence of solar protons on atmospheric ozone concentrations should not be ignored. (author).

  12. Importance of energetic solar protons in ozone depletion

    International Nuclear Information System (INIS)

    Stephenson, J.A.E.; Scourfield, M.W.J.

    1991-01-01

    Chlorine-catalysed depletion of the stratospheric ozone layer has commanded considerable attention since 1985, when Farman et al. observed a decrease of 50% in the total column ozone over Antarctica in the austral spring. Here we examine the depletion of stratospheric ozone caused by the reaction of ozone with nitric oxide generated by energetic solar protons, associated with solar flares. During large solar flares in March 1989, satellite observations indicated that total column ozone was depleted by ∼9% over ∼20% of the total area between the South Pole and latitude 70°S. Chlorine-catalysed ozone depletion takes place over a much larger area, but our results indicate that the influence of solar protons on atmospheric ozone concentrations should not be ignored. (author)

  13. 26 CFR 1.642(e)-1 - Depreciation and depletion.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Depreciation and depletion. 1.642(e)-1 Section 1... (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(e)-1 Depreciation and depletion. An estate or trust is allowed the deductions for depreciation and depletion, but only to the extent the...

  14. The toxic effects, GSH depletion and radiosensitivity by BSO on retinoblastoma

    International Nuclear Information System (INIS)

    Yi Xianjin; Ni Chuo; Wang Wengi; Li Ding; Jin Yizun

    1993-01-01

    Retinoblastoma is the most common intraocular malignant tumor in children. Previous investigations have reported that buthionine sulfoximine (BSO) can deplete intracellular glutathione (GSH) by specific inhibition of its biosynthesis and thereby increase cellular radiosensitivity. The toxic effects, GSH depletion, and radiosensitization by BSO on retinoblastoma are reported here. The GSH content of the retinoblastoma cell lines Y-79 and So-Rb50 and of a retinoblastoma xenograft is (2.7 ± 1.3) × 10⁻¹² mmol/cell, (1.4 ± 0.2) × 10⁻¹² mmol/cell, and 2.8 ± 1.2 μmol/g, respectively. The ID50 of BSO on Y-79 and So-Rb50 in air for a 3 h exposure is 2.5 mM and 0.2 mM, respectively. GSH depletion by 0.1 mM BSO for 24 h on Y-79 cells and by 0.01 mM BSO for 24 h on So-Rb50 cells is 16.35% and 4.7% of control, respectively. GSH depletion in tumor and other organ tissues in retinoblastoma-bearing nude mice after BSO administration is differential. GSH depletion after BSO exposure in Y-79 cells in vitro decreases the D₀ value of retinoblastoma cells. The SER of 0.01 mM and 0.05 mM BSO for 24 h under hypoxic conditions is 1.21 and 1.36, respectively. Based on these observations, the authors conclude that BSO toxicity on retinoblastoma cells depends on the characteristics of the cell line and that BSO can increase the radiosensitivity of hypoxic retinoblastoma cells in vitro. Further study of BSO radiosensitization on retinoblastoma in vivo using a nude mouse xenograft is needed

  15. Monte Carlo simulation of ordering transformations in Ni-Mo-based alloys

    International Nuclear Information System (INIS)

    Kulkarni, U.D.

    2004-01-01

    The quenched-in state of short range order (SRO) in binary Ni-Mo alloys is characterized by intensity maxima at {1 (1/2) 0} and equivalent positions in the reciprocal space. Ternary addition of a small amount of Al to the binary alloy, on the other hand, leads to a state of SRO that gives rise to intensity maxima at {1 0 0} and equivalent, in addition to {1 (1/2) 0} and equivalent, positions in the selected area electron diffraction patterns. Different geometric patterns of streaks of diffuse intensity, joining the SRO maxima with the superlattice positions of the emerging long range ordered (LRO) structures or in some cases between the superlattice positions of different LRO structures, are observed during the SRO-to-LRO transitions in the Ni-Mo-based and other {1 (1/2) 0} alloys. Monte Carlo simulations have been carried out here in order to shed some light on the atomic structures of the SRO and the SRO-to-LRO transition states in these alloys
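
    Lattice simulations of this kind typically advance by Metropolis-accepted exchanges of unlike atoms, which conserve composition. A minimal sketch on a toy 2D binary lattice with a single unlike-pair interaction; this is illustrative only, not the Ni-Mo-specific Hamiltonian used in the study:

        import math, random

        L, beta, V = 32, 2.0, -0.1   # lattice size, inverse temperature, unlike-pair energy
        lat = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]

        def site_energy(x, y):
            """Energy of site (x, y) with its four neighbours: V per unlike pair."""
            s = lat[x][y]
            nbrs = [lat[(x+1) % L][y], lat[(x-1) % L][y],
                    lat[x][(y+1) % L], lat[x][(y-1) % L]]
            return V * sum(1 for n in nbrs if n != s)

        for _ in range(100_000):  # Metropolis exchange moves conserve composition
            x1, y1 = random.randrange(L), random.randrange(L)
            x2, y2 = random.randrange(L), random.randrange(L)
            if lat[x1][y1] == lat[x2][y2]:
                continue
            e_old = site_energy(x1, y1) + site_energy(x2, y2)
            lat[x1][y1], lat[x2][y2] = lat[x2][y2], lat[x1][y1]
            dE = site_energy(x1, y1) + site_energy(x2, y2) - e_old
            if dE > 0 and random.random() >= math.exp(-beta * dE):
                lat[x1][y1], lat[x2][y2] = lat[x2][y2], lat[x1][y1]  # reject: swap back

        print(sum(site_energy(x, y) for x in range(L) for y in range(L)) / 2)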

  16. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol

  17. Estimativa da produtividade em soldagem pelo Método de Monte Carlo; Productivity estimation in welding by Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    José Luiz Ferreira Martins

    2011-09-01

    Full Text Available The aim of this article is to analyze the feasibility of using the Monte Carlo method to estimate the productivity of carbon steel industrial pipe welding on the basis of small samples. The study was carried out through an analysis of a reference sample containing productivity data for 160 joints welded by the SMAW process at REDUC (the Duque de Caxias refinery), using the ControlTub 5.3 software. From these data, samples with 10, 15 and 20 elements, respectively, were drawn at random, and simulations were executed by the Monte Carlo method. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained using the Monte Carlo method to estimate welding productivity. In the Brazilian construction industry, on the other hand, the mean productivity value is normally used as a productivity indicator; it is based on historical data from other projects, collected and evaluated only after project completion, which is a limitation. This article presents a tool for evaluating execution in real time, allowing adjustments to the estimates and productivity monitoring during the project. Likewise, in bidding, budgeting and schedule estimation, this technique allows the adoption of estimates other than the commonly used mean productivity; as an alternative, three criteria are suggested: optimistic, mean and pessimistic productivity.
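
    The small-sample Monte Carlo estimate described here amounts to resampling the observed productivities and reading optimistic, mean, and pessimistic values off the simulated distribution. A minimal sketch with invented productivity figures, not the REDUC data:

        import random
        import statistics

        # Hypothetical productivity sample (e.g. welded joints per hour) from a
        # small set of observed joints; the numbers are invented for illustration
        sample = [1.8, 2.1, 1.6, 2.4, 1.9, 2.0, 1.7, 2.2, 2.3, 1.5]

        sims = [statistics.mean(random.choices(sample, k=len(sample)))
                for _ in range(10_000)]             # bootstrap-style MC runs
        sims.sort()
        print("pessimistic (P10):", sims[1000])
        print("mean:", statistics.mean(sims))
        print("optimistic (P90):", sims[9000])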

  18. Juan Carlos Onetti encerrado con un solo juguete: un libro; Juan Carlos Onetti shut away with a single toy: a book

    OpenAIRE

    Becerra, Eduardo

    2009-01-01

    This profile of Juan Carlos Onetti aims to recover a guiding thread of his life based on certain attitudes and episodes that go back to his childhood and extend to his final years. In them one can see a relationship between solitude, confinement, and imagination that explains both certain traits of his personality and fundamental characteristics of his literature

  19. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
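
    The OSEM update for each subset S is multiplicative: x <- x * (A_S^T (y_S / (A_S x))) / (A_S^T 1). A minimal NumPy sketch with a random system matrix standing in for the Monte Carlo projector; all sizes and data are placeholders:

        import numpy as np

        rng = np.random.default_rng(3)
        n_pix, n_det = 64, 96
        A = rng.random((n_det, n_pix)) * 0.1       # system matrix (MC projector stand-in)
        x_true = rng.random(n_pix)
        y = rng.poisson(A @ x_true * 100) / 100.0  # noisy projection data

        x = np.ones(n_pix)                         # uniform initial estimate
        subsets = np.array_split(np.arange(n_det), 4)
        for _ in range(6):                         # ~6 iterations, as quoted above
            for s in subsets:
                ratio = y[s] / np.maximum(A[s] @ x, 1e-12)
                x *= (A[s].T @ ratio) / np.maximum(A[s].T @ np.ones(len(s)), 1e-12)
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))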

  20. A hybrid transport-diffusion Monte Carlo method for frequency-dependent radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2012-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.

  1. Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization

    KAUST Repository

    Shao, Jing

    2015-10-27

    Based on Monte Carlo simulation technology, we proposed a hybrid routine which combines the reaction mechanism with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we showed the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinds of kinetic information, such as instantaneous monomer conversion, molecular weight, and polydispersity, are readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without the “steady-state” hypothesis. We explored the mechanisms behind the variations of polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.
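
    A minimal Gillespie-type kinetic Monte Carlo sketch of batch free-radical polymerization conveys the flavor of such simulations; the rate constants and molecule counts below are invented, and the authors' coarse-grained scheme additionally tracks chain length and spatial information:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical rate constants (arbitrary units) and initial counts.
        k_i, k_p, k_t = 1e-4, 1e2, 1e3      # initiation, propagation, termination
        I, M, R = 1000, 100_000, 0          # initiator, monomer, radical counts
        t, M0 = 0.0, M

        while M > 0.5 * M0:                 # run to 50% monomer conversion
            # Propensities of the three reaction channels (Gillespie algorithm).
            a = np.array([k_i * I, k_p * R * M, k_t * R * (R - 1) / 2])
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)  # waiting time to the next reaction
            r = rng.choice(3, p=a / a0)     # which reaction fires
            if r == 0:   I -= 1; R += 2     # initiation: I -> 2 R*
            elif r == 1: M -= 1             # propagation: R* + M -> R*
            else:        R -= 2             # termination: R* + R* -> dead chain

        print(f"t = {t:.3g}, conversion = {1 - M / M0:.2f}, radicals = {R}")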

  2. The depletion potential in one, two and three dimensions

    Indian Academy of Sciences (India)

    Abstract. We study the behavior of the depletion potential in binary mixtures of hard particles in one, two, and three dimensions within the framework of a general theory for depletion potential using density functional theory. By doing so we extend earlier studies of the depletion potential in three dimensions to the cases of d ...

  3. Depleted uranium hexafluoride: The source material for advanced shielding systems

    Energy Technology Data Exchange (ETDEWEB)

    Quapp, W.J.; Lessing, P.A. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Cooley, C.R. [Department of Technology, Germantown, MD (United States)

    1997-02-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability problem in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF₆) that are stored at the gaseous diffusion plants. DOE is evaluating several options for the disposition of this UF₆, including continued storage, disposal, and recycle into a product. Based on studies conducted to date, the most feasible recycle option for the depleted uranium is shielding in low-level waste, spent nuclear fuel, or vitrified high-level waste containers. Estimates for the cost of disposal, using existing technologies, range between $3.8 and $11.3 billion depending on factors such as the disposal site and the applicability of the Resource Conservation and Recovery Act (RCRA). Advanced technologies can reduce these costs, but UF₆ disposal still represents large future costs. This paper describes an application for depleted uranium in which depleted uranium hexafluoride is converted into an oxide and then into a heavy aggregate. The heavy uranium aggregate is combined with conventional concrete materials to form an ultra high density concrete, DUCRETE, weighing more than 400 lb/ft³. DUCRETE can be used as shielding in spent nuclear fuel/high-level waste casks at a cost comparable to the lower of the disposal cost estimates. Consequently, the case can be made that DUCRETE shielded casks are an alternative to disposal. In this case, a beneficial long term solution is attained for much less than the combined cost of independently providing shielded casks and disposing of the depleted uranium. Furthermore, if disposal is avoided, the political problems associated with selection of a disposal location are also avoided. Other studies have also shown cost benefits for low level waste shielded disposal containers.

  4. NKT cell depletion in humans during early HIV infection.

    Science.gov (United States)

    Fernandez, Caroline S; Kelleher, Anthony D; Finlayson, Robert; Godfrey, Dale I; Kent, Stephen J

    2014-08-01

    Natural killer T (NKT) cells bridge across innate and adaptive immune responses and have an important role in chronic viral infections such as human immunodeficiency virus (HIV). NKT cells are depleted during chronic HIV infection, but the timing, drivers and implications of this NKT cell depletion are poorly understood. We studied human peripheral blood NKT cell levels, phenotype and function in 31 HIV-infected subjects not on antiretroviral treatment from a mean of 4 months to 2 years after HIV infection. We found that peripheral CD4(+) NKT cells were substantially depleted and dysfunctional by 4 months after HIV infection. The depletion of CD4(+) NKT cells was more marked than the depletion of total CD4(+) T cells. Further, the early depletion of NKT cells correlated with CD4(+) T-cell decline, but not HIV viral levels. Levels of activated CD4(+) T cells correlated with the loss of NKT cells. Our studies suggest that the early loss of NKT cells is associated with subsequent immune destruction during HIV infection.

  5. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  6. Threshold stoichiometry for beam induced nitrogen depletion of SiN

    International Nuclear Information System (INIS)

    Timmers, H.; Weijers, T.D.M.; Elliman, R.G.; Uribasterra, J.; Whitlow, H.J.; Sarwe, E.-L.

    2002-01-01

    Measurements of the stoichiometry of silicon nitride films as a function of the number of incident ions using heavy-ion elastic recoil detection (ERD) show that beam-induced nitrogen depletion depends on the projectile species, the beam energy, and the initial stoichiometry. A threshold stoichiometry exists in the range 1.3 > N/Si ≥ 1, below which the films are stable against nitrogen depletion. Above this threshold, depletion is essentially linear with incident fluence. The depletion rate correlates non-linearly with the electronic energy loss of the projectile ion in the film. Sufficiently long exposure of nitrogen-rich films renders the mechanism that prevents depletion of nitrogen-poor films ineffective. At the cost of depth resolution, nitrogen depletion from SiN films during ERD analysis can be reduced significantly by using projectile beams with low atomic numbers

  7. Depleted Nanocrystal-Oxide Heterojunctions for High-Sensitivity Infrared Detection

    Science.gov (United States)

    2015-08-28

    Approved for public release; distribution unlimited. Final progress report for the project "Depleted Nanocrystal-Oxide Heterojunctions for High-Sensitivity Infrared Detection".

  8. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed, allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes, achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.
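
    The KERMA approximation with a track-length estimator corresponds to the standard scoring rule below (a sketch of the textbook form, not necessarily the authors' exact normalization): for each photon track segment of length l_k, weight w_k and energy E_k crossing a voxel of volume V,

        \[
        \Phi \approx \frac{1}{V} \sum_k w_k\, l_k,
        \qquad
        D \approx K_c = \frac{1}{V} \sum_k w_k\, l_k\, E_k
        \left(\frac{\mu_{en}(E_k)}{\rho}\right),
        \]

    i.e. the fluence is estimated from track lengths and converted to collision kerma via the mass energy-absorption coefficient.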

  9. STRONG CORRELATIONS AND ELECTRON-PHONON COUPLING IN HIGH-TEMPERATURE SUPERCONDUCTORS - A QUANTUM MONTE-CARLO STUDY

    NARCIS (Netherlands)

    MORGENSTERN; FRICK, M; VONDERLINDEN, W

    We present quantum simulation studies for a system of strongly correlated fermions coupled to local anharmonic phonons. The Monte Carlo calculations are based on a generalized version of the Projector Quantum Monte Carlo Method allowing a simultaneous treatment of fermions and dynamical phonons. The

  10. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grid for E-science), achieving in one week computations that could have taken more than 3 years of processing on a single computer. This work has been achieved thanks to a generic object-oriented toolbox called DistMe which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random numbers generators statuses. (authors)
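
    The parallel-streams precaution described above can be illustrated with modern numpy (this is not the DistMe API, just a hedged sketch of the same idea of giving each replication its own well-separated generator state):

        import numpy as np

        # One master seed; spawn statistically independent child streams, one
        # per replication, so parallel Monte Carlo replicates never share
        # random numbers.
        master = np.random.SeedSequence(20090301)
        children = master.spawn(8)                  # e.g. 8 grid jobs
        rngs = [np.random.default_rng(s) for s in children]

        # Each job estimates pi independently; results are averaged afterwards.
        def estimate_pi(rng, n=1_000_000):
            x, y = rng.random(n), rng.random(n)
            return 4.0 * np.count_nonzero(x * x + y * y < 1.0) / n

        print([round(estimate_pi(rng), 4) for rng in rngs])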

  11. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or contains many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method utilizes the rejection method for sampling virtual collisions as a method to remove collision distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
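
    The essence of the proposed modification can be sketched in a few lines for a 1-D, purely absorbing slab (an assumption made here for brevity; the paper's scheme also covers scattering): classic Woodcock tracking rejects virtual collisions, while the weighted variant always continues the flight and multiplies the particle weight by the virtual-collision probability. Both estimators below target the same transmission probability; all cross sections are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def sigma_t(x):
            # Hypothetical heterogeneous total cross section on the slab [0, 4].
            return 0.2 + 1.8 * (1.0 < x < 3.0)   # strong absorber in the middle

        SIG_MAJ, L = 2.0, 4.0                    # majorant cross section, length

        def transmission(n, weighted):
            total = 0.0
            for _ in range(n):
                x, w = 0.0, 1.0
                while True:
                    x += rng.exponential(1.0 / SIG_MAJ)  # tentative collision
                    if x >= L:                           # escaped: score weight
                        total += w
                        break
                    p_real = sigma_t(x) / SIG_MAJ
                    if weighted:
                        w *= 1.0 - p_real                # survive, reduce weight
                        if w < 1e-6:                     # crude cutoff, no roulette
                            break
                    elif rng.random() < p_real:          # classic rejection step
                        break                            # absorbed
            return total / n

        print("rejection Woodcock:", transmission(100_000, weighted=False))
        print("weighted Woodcock :", transmission(100_000, weighted=True))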

  12. Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator

    International Nuclear Information System (INIS)

    Shypailo, R.J.; Ellis, K.J.

    2009-01-01

    Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)

  13. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator

    Science.gov (United States)

    Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.

  14. Barium depletion study on impregnated cathodes and lifetime prediction

    International Nuclear Information System (INIS)

    Roquais, J.M.; Poret, F.; Doze, R. le; Ricaud, J.L.; Monterrin, A.; Steinbrunn, A.

    2003-01-01

    In the thermionic cathodes used in cathode ray tubes (CRTs), barium is the key element for the electronic emission. In the case of the dispenser cathodes made of a porous tungsten pellet impregnated with Ba, Ca aluminates, the evaporation of Ba determines the cathode lifetime with respect to emission performance in the CRT. The Ba evaporation results in progressive depletion of the impregnating material inside the pellet. In the present work, the Ba depletion with time has been extensively characterized over a large range of cathode temperatures. Calculations using the depletion data allowed modeling of the depletion as a function of key parameters. The link between measured depletion and emission in tubes has been established, from which an end-of-life criterion was deduced. Taking the modeling into account, predictive accelerated life-tests were performed using high-density maximum emission current (MIK)

  15. Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.

    2006-01-01

    Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions

  16. Monte Carlo calculation of Dancoff factors in irregular geometries

    International Nuclear Information System (INIS)

    Feher, S.; Hoogenboom, J.E.; Leege, P.F.A. de; Valko, J.

    1994-01-01

    A Monte Carlo program is described that calculates Dancoff factors in arbitrary arrangements of cylindrical or spherical fuel elements. The fuel elements can have different diameters and material compositions, and they are allowed to be black or partially transparent. Calculation of the Dancoff factor is based on its collision-probability definition. The Monte Carlo approach is recommended because it is equally applicable in simple and in complicated geometries. It is shown that some of the commonly used algorithms are inaccurate even in infinite regular lattices. An example of application is the Canada deuterium uranium (CANDU) 37-pin fuel bundle, which requires different Dancoff factors for the symmetrically different fuel pin positions
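
    The collision-probability definition lends itself to a compact sketch: sample neutrons leaving the fuel surface with a cosine-weighted angular distribution and score the probability of reaching a neighboring black pin through the attenuating moderator. The two-pin, 2-D geometry and all numbers below are invented for illustration; a real lattice calculation sums over all neighbors and accounts for shadowing:

        import numpy as np

        rng = np.random.default_rng(7)

        R, PITCH, SIG_M = 0.6, 1.3, 0.35  # hypothetical pin radius, pitch, moderator Σt
        C2 = np.array([PITCH, 0.0])       # center of the neighboring (black) pin

        def ray_hits_circle(p, d, c, r):
            # Distance along unit direction d from point p to circle (c, r); inf if missed.
            oc = p - c
            b = np.dot(oc, d)
            disc = b * b - (np.dot(oc, oc) - r * r)
            if disc < 0.0:
                return np.inf
            t = -b - np.sqrt(disc)
            return t if t > 0.0 else np.inf

        n, score = 200_000, 0.0
        for _ in range(n):
            phi = rng.uniform(0.0, 2.0 * np.pi)          # emission point on pin 1
            normal = np.array([np.cos(phi), np.sin(phi)])
            p = R * normal
            theta = np.arcsin(2.0 * rng.random() - 1.0)  # cosine-weighted angle
            ct, st = np.cos(theta), np.sin(theta)        # rotate normal by theta
            d = np.array([ct * normal[0] - st * normal[1],
                          st * normal[0] + ct * normal[1]])
            dist = ray_hits_circle(p, d, C2, R)
            if np.isfinite(dist):
                score += np.exp(-SIG_M * dist)           # survives the moderator path

        print("two-pin Dancoff factor ~", score / n)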

  17. Meta-analysis of depleted uranium levels in the Balkan region.

    Science.gov (United States)

    Besic, Larisa; Muhovic, Imer; Asic, Adna; Kurtovic-Kozaric, Amina

    2017-06-01

    In recent years, contradicting data has been published on the connection between the presence of depleted uranium and an increased cancer incidence among military personnel deployed in the Balkans during the 1992-1999 wars. This has led to numerous research articles investigating possible depleted uranium contamination of the afflicted regions of the Balkan Peninsula, namely Bosnia & Herzegovina, Serbia, Kosovo and Montenegro. The aim of this study was to collect data from previously published reports investigating the levels of depleted uranium in the Balkans and to present the data in the form of a meta-analysis. This would provide a clear image of the extent of depleted uranium contamination after the Balkan conflict. In addition, we tested the hypothesis that there is a correlation between the levels of depleted uranium and the assumed depleted uranium-related health effects. Our results suggest that the majority of the examined sites contain natural uranium, while the area of Kosovo appears to be most heavily afflicted by depleted uranium pollution, followed by Bosnia & Herzegovina. Furthermore, the results indicate that it is not possible to make a valid correlation between the health effects and depleted uranium-contaminated areas. We therefore suggest a structured collaborative plan of action where long-term monitoring of the residents of depleted uranium-afflicted areas would be performed. In conclusion, while the possibility of depleted uranium toxicity in post-conflict regions appears to exist, there currently exists no definitive proof of such effects, due to insufficient studies of potentially afflicted populations, in addition to the lack of a common epidemiological approach in the reviewed literature. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  19. Operation of heavily irradiated silicon detectors in non-depletion mode

    International Nuclear Information System (INIS)

    Verbitskaya, E.; Eremin, V.; Ilyashenko, I.; Li, Z.; Haerkoenen, J.; Tuovinen, E.; Luukka, P.

    2006-01-01

    The non-depletion detector operation mode has generally been disregarded as an option in high-energy physics experiments. In this paper, the non-depletion operation is examined by detailed analysis of the electric field distribution and the current pulse response of heavily irradiated silicon (Si) detectors. The previously reported model of the double junction in heavily irradiated Si detectors is further developed and a simulation of the current pulse response has been performed. It is shown that detectors can operate in a non-depletion mode due to the fact that the value of the electric field in a non-depleted region is high enough for efficient carrier drift. This electric field originates from the current flow through the detector and a consequent drop of the potential across the high-resistivity bulk of a non-depleted region. It is anticipated that the electric field in a non-depleted region, which is still electrically neutral, increases with fluence, which improves the non-depleted detector operation. Consideration of the electric field in a non-depleted region allows the explanation of the recorded double-peak current pulse shape of heavily irradiated Si detectors and the definition of requirements for the detector operational conditions. Detailed reconstruction of the electric field distribution gives new information on radiation effects in Si detectors

  20. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  1. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    Science.gov (United States)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the influence factors of power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price, sales quantity, net profit, financing and GDP of the secondary industry as the dependent variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
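
    A hedged sketch of the workflow described above (fit an influence factor, check the fit with a Kolmogorov-Smirnov test, then propagate it by Monte Carlo) might look as follows; the data, the fitted family and the toy investment-capacity relation are all invented stand-ins for the paper's model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical historical observations of one influence factor, e.g.
        # annual sales quantity (TWh); the real model has several factors.
        obs = np.array([385, 402, 391, 410, 398, 405, 388, 412, 395, 401], float)

        # Kolmogorov-Smirnov test against a fitted normal distribution.
        mu, sigma = obs.mean(), obs.std(ddof=1)
        ks_stat, p_value = stats.kstest(obs, "norm", args=(mu, sigma))
        print(f"K-S statistic {ks_stat:.3f}, p-value {p_value:.3f}")

        # Monte Carlo: sample the factor and push it through a toy
        # investment-capacity relation (illustrative only).
        sales = rng.normal(mu, sigma, 100_000)
        price, cost = 0.52, 0.41                   # hypothetical yuan/kWh
        capacity = (price - cost) * sales * 0.6    # toy retention ratio
        print("capacity P5/P50/P95:", np.percentile(capacity, [5, 50, 95]).round(1))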

  2. Methodology for sodium fire vulnerability assessment of sodium cooled fast reactor based on the Monte-Carlo principle

    Energy Technology Data Exchange (ETDEWEB)

    Song, Wei [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China); Wu, Yuanyu [ITER Organization, Route de Vinon-sur-Verdon, 13115 Saint-Paul-lès-Durance (France); Hu, Wenjun [China Institute of Atomic Energy, P. O. Box 275(34), Beijing (China); Zuo, Jiaxu, E-mail: zuojiaxu@chinansc.cn [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China)

    2015-11-15

    Highlights: • Monte-Carlo principle coupling with fire dynamic code is adopted to perform sodium fire vulnerability assessment. • The method can be used to calculate the failure probability of sodium fire scenarios. • A calculation example and results are given to illustrate the feasibility of the methodology. • Some critical parameters and experience are shared. - Abstract: Sodium fire is a typical and distinctive hazard in sodium cooled fast reactors, which is significant for nuclear safety. In this paper, a method of sodium fire vulnerability assessment based on the Monte-Carlo principle was introduced, which could be used to calculate the probabilities of every failure mode in sodium fire scenarios. After that, the sodium fire scenario vulnerability assessment of primary cold trap room of China Experimental Fast Reactor was performed to illustrate the feasibility of the methodology. The calculation result of the example shows that the conditional failure probability of key cable is 23.6% in the sodium fire scenario which is caused by continuous sodium leakage because of the isolation device failure, but the wall temperature, the room pressure and the aerosol discharge mass are all lower than the safety limits.

  3. Methodology for sodium fire vulnerability assessment of sodium cooled fast reactor based on the Monte-Carlo principle

    International Nuclear Information System (INIS)

    Song, Wei; Wu, Yuanyu; Hu, Wenjun; Zuo, Jiaxu

    2015-01-01

    Highlights: • Monte-Carlo principle coupling with fire dynamic code is adopted to perform sodium fire vulnerability assessment. • The method can be used to calculate the failure probability of sodium fire scenarios. • A calculation example and results are given to illustrate the feasibility of the methodology. • Some critical parameters and experience are shared. - Abstract: Sodium fire is a typical and distinctive hazard in sodium cooled fast reactors, which is significant for nuclear safety. In this paper, a method of sodium fire vulnerability assessment based on the Monte-Carlo principle was introduced, which could be used to calculate the probabilities of every failure mode in sodium fire scenarios. After that, the sodium fire scenario vulnerability assessment of primary cold trap room of China Experimental Fast Reactor was performed to illustrate the feasibility of the methodology. The calculation result of the example shows that the conditional failure probability of key cable is 23.6% in the sodium fire scenario which is caused by continuous sodium leakage because of the isolation device failure, but the wall temperature, the room pressure and the aerosol discharge mass are all lower than the safety limits.

  4. Estimation of terrorist attack resistibility of dual-purpose cask TP-117 with DU (depleted uranium) gamma shield

    International Nuclear Information System (INIS)

    Alekseev, O.G.; Matveev, V.Z.; Morenko, A.I.; Il'kaev, R.I.; Shapovalov, V.I.

    2004-01-01

    The report is devoted to numerical research on the resistance of a dual-purpose unified cask (used for SFA transportation and storage) to terrorist attacks. The high resistance of the dual-purpose unified cask has been achieved due to unique design-technological solutions and the implementation of depleted uranium in the cask construction. In the suggested variant of the construction, depleted uranium fulfils the functions of both shielding and constructional material. It is used both in metallic and cermet form (based on steel and depleted uranium dioxide). The implementation of depleted uranium in the cask construction allows a maximal load within the existing overall dimensions of the cask. At the same time: 1) all safety requirements (IAEA) are met, 2) the dual-purpose cask with SFA has high resistance to terrorist attacks

  5. Estimation of terrorist attack resistibility of dual-purpose cask TP-117 with DU (depleted uranium) gamma shield

    Energy Technology Data Exchange (ETDEWEB)

    Alekseev, O.G.; Matveev, V.Z.; Morenko, A.I.; Il'kaev, R.I.; Shapovalov, V.I. [Russian Federal Nuclear Center - All-Russian Research Inst. of Experimental Physics, Sarov (Russian Federation)

    2004-07-01

    The report is devoted to numerical research on the resistance of a dual-purpose unified cask (used for SFA transportation and storage) to terrorist attacks. The high resistance of the dual-purpose unified cask has been achieved due to unique design-technological solutions and the implementation of depleted uranium in the cask construction. In the suggested variant of the construction, depleted uranium fulfils the functions of both shielding and constructional material. It is used both in metallic and cermet form (based on steel and depleted uranium dioxide). The implementation of depleted uranium in the cask construction allows a maximal load within the existing overall dimensions of the cask. At the same time: 1) all safety requirements (IAEA) are met, 2) the dual-purpose cask with SFA has high resistance to terrorist attacks.

  6. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations with respect to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating the remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's ΛΠτ sequences, an adaptive Monte Carlo algorithm and the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches implemented in the SIMLAB software. The analysis and numerical results show the advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by the variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
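
    For orientation, the quantity being computed (a first-order Sobol index) can be estimated with a plain Monte Carlo pick-freeze scheme, shown below on the standard Ishigami test function; this is the textbook Saltelli-type estimator, not the MCA-MSS algorithm the paper develops:

        import numpy as np

        rng = np.random.default_rng(11)

        def model(x):
            # Ishigami function, a standard benchmark in sensitivity analysis.
            return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        n, d = 100_000, 3
        A = rng.uniform(-np.pi, np.pi, (n, d))
        B = rng.uniform(-np.pi, np.pi, (n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))

        # First-order indices: S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f),
        # where AB_i is A with column i taken from B.
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            Si = np.mean(fB * (model(ABi) - fA)) / var
            print(f"S_{i + 1} ~ {Si:.3f}")   # expected roughly 0.31, 0.44, 0.00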

  7. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
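
    The telescoping identity referred to above can be written out as

        \[
        \mathbb{E}[g_L] \;=\; \mathbb{E}[g_0]
        \;+\; \sum_{l=1}^{L} \mathbb{E}\!\left[ g_l - g_{l-1} \right],
        \]

    where g_l is the quantity of interest computed at discretization level h_l; each correction term can be estimated with comparatively few samples because Var(g_l − g_{l−1}) shrinks as the levels are refined, which is the source of the computational saving.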

  8. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  9. Continuum variational and diffusion quantum Monte Carlo calculations

    International Nuclear Information System (INIS)

    Needs, R J; Towler, M D; Drummond, N D; Lopez Rios, P

    2010-01-01

    This topical review describes the methodology of continuum variational and diffusion quantum Monte Carlo calculations. These stochastic methods are based on many-body wavefunctions and are capable of achieving very high accuracy. The algorithms are intrinsically parallel and well suited to implementation on petascale computers, and the computational cost scales as a polynomial in the number of particles. A guide to the systems and topics which have been investigated using these methods is given. The bulk of the article is devoted to an overview of the basic quantum Monte Carlo methods, the forms and optimization of wavefunctions, performing calculations under periodic boundary conditions, using pseudopotentials, excited-state calculations, sources of calculational inaccuracy, and calculating energy differences and forces. (topical review)

  10. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer, by means of two different Monte Carlo approaches, is discussed. The basic principles of simulating photon histories on a computer and of reducing variance, as well as the current applications in Medical Physics, are commented on. (Author)
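
    The one-dimensional example mentioned above is not specified, so the following sketch uses ∫₀¹ eˣ dx = e − 1 as a stand-in and contrasts two classic Monte Carlo approaches, crude (mean-value) sampling and hit-or-miss sampling:

        import numpy as np

        rng = np.random.default_rng(5)

        # Known answer: the integral of exp(x) on [0, 1] is e - 1 ~ 1.71828.
        n = 100_000

        # Approach 1: crude (mean-value) Monte Carlo.
        x = rng.random(n)
        crude = np.exp(x).mean()

        # Approach 2: hit-or-miss Monte Carlo inside the box [0,1] x [0,e].
        y = rng.uniform(0.0, np.e, n)
        hit_or_miss = np.e * np.count_nonzero(y < np.exp(x)) / n

        print(f"crude      : {crude:.5f}")
        print(f"hit-or-miss: {hit_or_miss:.5f}")
        print(f"exact      : {np.e - 1:.5f}")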

  11. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  12. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    Science.gov (United States)

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space monte carlo technique is a non-linear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space monte carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space monte carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space monte carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space monte carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space monte carlo for a synthetic groundwater model with hundreds of estimable parameters.
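
    The projection at the heart of null-space monte carlo can be illustrated in plain numpy (a hedged sketch, not pyNSMC's actual API; the Jacobian, dimensions and noise scale are invented): random parameter vectors are projected onto the null space of the Jacobian so that, to first order, the calibrated model outputs are preserved while parameter uncertainty is explored.

        import numpy as np

        rng = np.random.default_rng(9)

        # Hypothetical linearized problem: J maps 50 parameters to 12
        # observations, so the null space has at least 38 dimensions.
        n_par, n_obs = 50, 12
        J = rng.normal(size=(n_obs, n_par))   # stand-in Jacobian
        p_cal = rng.normal(size=n_par)        # calibrated parameter vector

        # SVD: the trailing rows of Vt span the null space of J.
        U, s, Vt = np.linalg.svd(J)
        V2 = Vt[n_obs:].T                     # null-space basis (n_par x 38)

        # Draw random parameter vectors and keep only the null-space component
        # of their difference from the calibrated vector.
        ensemble = []
        for _ in range(100):
            p_rand = p_cal + rng.normal(scale=0.5, size=n_par)
            p_real = p_cal + V2 @ (V2.T @ (p_rand - p_cal))
            ensemble.append(p_real)

        # The simulated observations are (to numerical precision) unchanged.
        resid = np.linalg.norm(J @ (ensemble[0] - p_cal))
        print(f"change in simulated obs for one realization: {resid:.2e}")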

  13. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  14. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  15. Study on MPI/OpenMP hybrid parallelism for Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Liang Jingang; Xu Qi; Wang Kan; Liu Shiwen

    2013-01-01

    Parallel programming with a mixed mode of message-passing and shared memory has several advantages when used in Monte Carlo neutron transport codes, such as fitting the hardware of distributed shared-memory clusters, economizing the memory demand of Monte Carlo transport, and improving parallel performance. MPI/OpenMP hybrid parallelism was implemented based on a one-dimensional Monte Carlo neutron transport code. Some critical factors affecting the parallel performance were analyzed and solutions were proposed for several problems such as contention access, lock contention and false sharing. After optimization the code was tested. It is shown that the hybrid parallel code can reach good performance just as a pure MPI parallel program does, while saving a large amount of memory at the same time. Therefore hybrid parallelism is efficient for achieving large-scale parallelization of Monte Carlo neutron transport. (authors)

  16. Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-21

    The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron & gamma transport with multi-temperature treatment, static eigenvalue (keff and α) algorithms, time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-test Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.

  17. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^{αt}, for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
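
    The two formulations contrasted above can be summarized as

        \[
        N(t) = n\, e^{\alpha t}
        \quad\Longrightarrow\quad
        \alpha = \frac{d}{dt}\,\ln N(t)
        \qquad \text{(direct)},
        \]

        \[
        \text{find } \alpha \text{ such that } k(\alpha) = 1
        \qquad \text{(static)},
        \]

    where k(α) is the multiplication factor of the system with a pseudo-absorption cross section α/v added to the total cross section; the static root-finding form avoids the noisy time derivative near critical.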

  18. Challenges dealing with depleted uranium in Germany - Reuse or disposal

    International Nuclear Information System (INIS)

    Moeller, Kai D.

    2007-01-01

    During enrichment, large amounts of depleted uranium are produced. In Germany, 2,800 tons of depleted uranium are generated every year. In Germany depleted uranium is not classified as radioactive waste but as a resource for further enrichment. Therefore, since 1996 depleted uranium has been sent to ROSATOM in Russia. However, the second generation of depleted uranium still has to be dealt with. To evaluate the alternative actions in case a solution has to be found in Germany, several studies have been initiated by the Federal Ministry of the Environment. The work that has been carried out evaluated various possibilities for dealing with depleted uranium. The international studies in this field and the situation in Germany have been analyzed. In case no further enrichment is planned, the depleted uranium has to be stored. In the enrichment process UF₆ is generated. It is an international consensus that for storage it should be converted to U₃O₈. The necessary technique is well established. If the depleted uranium were to be characterized as radioactive waste, final disposal would become necessary. For the planned Konrad repository - a repository for non-heat-generating radioactive waste - the amount of uranium is limited by the licensing authority. The existing license would not allow the final disposal of large amounts of depleted uranium in the Konrad repository. The potential effect on the safety case has so far been analyzed only roughly. As a result it may be necessary to think about alternatives. Several possibilities for the use of depleted uranium in industry have been identified. Studies indicate that the properties of uranium would make it useful in some industrial fields. Nevertheless many practical and legal questions remain open. One further option may be the use as shielding, e.g. in casks for transport or disposal. Possible techniques for using depleted uranium as shielding include the use of metallic uranium as well as inclusion in concrete. Another

  19. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software are discussed. Finally, some future prospects for using Monte Carlo simulations in optimizing neutron scattering experiments are discussed. (R.P.)

  20. The toxic effects, GSH depletion and radiosensitivity by BSO on retinoblastoma

    International Nuclear Information System (INIS)

    Xianjin Yi; Li Ding; Yizun Jin; Chuo Ni; Wenji Wang

    1994-01-01

    Retinoblastoma is the most common intraocular malignant tumor in children. Previous investigations have reported that buthionine sulfoximine (BSO) can deplete intracellular glutathione (GSH) by specific inhibition and increase cellular radiosensitivity. The toxic effects, GSH depletion and radiosensitivity effects of BSO on retinoblastoma cells are reported in this paper. The GSH content of the retinoblastoma cell lines Y-79 and So-Rb50 and of a retinoblastoma xenograft is 2.7 ± 1.3 × 10⁻¹² mmol/cell, 1.4 ± 0.2 × 10⁻¹² mmol/cell, and 2.8 ± 1.2 μmol/g, respectively. The ID₅₀ of BSO on Y-79 and So-Rb50 in air for 3 h exposure is 2.5 mM and 0.2 mM, respectively. GSH depletion by 0.1 mM BSO for 24 h on Y-79 cells and by 0.01 mM BSO for 24 h on So-Rb50 cells is 16.35% and 4.7% of control, respectively. GSH depletion in tumor and other organ tissues in retinoblastoma-bearing nude mice after BSO administration is differential. GSH depletion after BSO exposure in Y-79 cells in vitro decreases the D₀ value of retinoblastoma cells. The SER of 0.01 mM and 0.05 mM BSO for 24 h under hypoxic conditions is 1.21 and 1.36, respectively. Based on these observations, the authors conclude that BSO toxicity on retinoblastoma cells depends on the characteristics of the cell line and that BSO can increase the radiosensitivity of hypoxic retinoblastoma cells in vitro. Further study of BSO radiosensitization on retinoblastoma in vivo using nude mouse xenografts is needed. 25 refs., 3 figs., 3 tabs