WorldWideScience

Sample records for carlo based depletion

  1. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high-variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation.
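
    A minimal sketch of the core idea, in Python and with purely illustrative names and values (not from the paper): the scalar flux comes from a conventional track-length estimate, while the density update uses the analytical exponential solution of the depletion equation, which can never go negative.

    ```python
    import numpy as np

    def deplete_step(n0, sigma_a, track_lengths, volume, dt):
        """Deplete a single nuclide over one time step.

        The scalar flux is a track-length Monte Carlo estimate; the new
        density uses the exact solution N(t) = N0*exp(-sigma_a*phi*t)
        of dN/dt = -sigma_a*phi*N, so it stays positive by construction.
        """
        phi = np.sum(track_lengths) / volume     # track-length flux estimate
        return n0 * np.exp(-sigma_a * phi * dt)

    rng = np.random.default_rng(1)
    tracks = rng.exponential(1.0, size=10_000)   # stand-in particle tracks
    n_new = deplete_step(n0=1.0e22, sigma_a=600e-24,
                         track_lengths=tracks, volume=1.0e4, dt=3600.0)
    print(f"density after step: {n_new:.4e} atoms/cm^3")
    ```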

  2. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in the Monte Carlo solutions are introduced into the nuclide number densities and propagated through the fuel burnup. This paper works toward an understanding of the statistical implications of Monte Carlo depletion, including both the statistical bias and the statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
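
    The proposed batch method can be sketched in a few lines (a toy stand-in, not the modified CASMO-5 machinery): run several independently seeded depletion histories and take the spread of the end-of-cycle densities across the batch as the overall statistical error, local and propagated parts together.

    ```python
    import numpy as np

    def mc_depletion(seed, n_steps=10, n0=1.0, sigma_phi=0.1):
        """Stand-in for one full Monte Carlo depletion run: each step's
        'transport solution' returns a noisy reaction rate, and the noise
        propagates into the number density along burnup."""
        rng = np.random.default_rng(seed)
        n = n0
        for _ in range(n_steps):
            rate = sigma_phi * (1.0 + 0.02 * rng.standard_normal())  # noisy tally
            n *= np.exp(-rate)                                       # deplete
        return n

    # Batch depletion: the spread across independent runs estimates the
    # overall error (local plus propagated).
    batches = np.array([mc_depletion(seed) for seed in range(25)])
    err = batches.std(ddof=1) / np.sqrt(len(batches))   # std error of the mean
    print(f"EOC density: {batches.mean():.5f} +/- {err:.5f}")
    ```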

  3. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

Highlights: • The GPT method allows the sensitivity coefficients for any perturbation to be calculated. • The full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • Time-dependent XS are obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time point. In reality, however, both quantities would change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time-dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
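
    A schematic of the substep idea under stated assumptions: a sensitivity matrix S (invented here; in a real code it would come from the GPT-equivalent capability) gives the first-order response of each one-group cross section to relative changes in nuclide densities, so the cross sections can be refreshed on substeps without any new transport solve.

    ```python
    import numpy as np

    def substep_deplete(n, sigma, S, removal, dt, n_sub=8):
        """One burnup step split into n_sub substeps.

        n      : nuclide densities at the beginning of the step
        sigma  : one-group cross sections from the last transport solve
        S      : S[i, j] = relative sensitivity of sigma[i] to n[j]
                 (assumed to come from a GPT-equivalent calculation)
        removal: maps cross sections to removal rates (toy stand-in)
        """
        n_bos = n.copy()
        for _ in range(n_sub):
            rel = (n - n_bos) / n_bos              # relative density change
            sigma_t = sigma * (1.0 + S @ rel)      # first-order XS update
            n = n * np.exp(-removal(sigma_t) * dt / n_sub)
        return n, sigma_t

    # Toy usage: two nuclides, invented sensitivities and removal model.
    S = np.array([[0.0, -0.3],
                  [0.1,  0.0]])
    n_eos, sig_eos = substep_deplete(np.array([1.0, 0.5]),
                                     np.array([2.0, 10.0]), S,
                                     removal=lambda s: 1e-2 * s, dt=10.0)
    print(n_eos, sig_eos)
    ```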

  4. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

A new formulation is presented for quantifying the uncertainties of Monte Carlo (MC) tallies, such as k_eff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and for examining their propagation behaviour as a function of depletion time step (DTS). It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contributions of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from the uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of the DTS, and these contributions are determined by computing the correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances as well as k_inf and its variance as a function of DTS, and demonstrated the applicability of the new formulation to the uncertainty propagation analysis that needs to accompany MC depletion computations. (authors)
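
    The role the correlation coefficients play can be illustrated with a first-order propagation through one DTS. The replica data below are synthetic and the single-nuclide model is a toy, not the paper's formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic replica estimates of a microscopic reaction rate r and a
    # nuclide number density n at the start of a depletion time step,
    # e.g. from independent Monte Carlo depletion runs.
    r = 1.0 + 0.03 * rng.standard_normal(50)
    n = 1.0 - 0.5 * (r - 1.0) + 0.01 * rng.standard_normal(50)  # correlated

    # First-order propagation to the end-of-step density N' = n*exp(-r*dt):
    dt = 1.0
    a = -dt * n.mean() * np.exp(-r.mean() * dt)   # dN'/dr at the mean point
    b = np.exp(-r.mean() * dt)                    # dN'/dn at the mean point
    rho = np.corrcoef(r, n)[0, 1]                 # correlation coefficient
    var = (a**2 * r.var(ddof=1) + b**2 * n.var(ddof=1)
           + 2 * a * b * rho * r.std(ddof=1) * n.std(ddof=1))
    print(f"rho = {rho:+.2f}, propagated variance = {var:.3e}")
    ```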

  5. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physics models of both codes are unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code was further improved with important features. The MCOR code presents several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements.
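
    The predictor-corrector algorithm in (a) follows a standard pattern; a minimal sketch with an invented stand-in for the transport solve (none of the names below are MCOR's):

    ```python
    import numpy as np

    def transport(n):
        """Stand-in for a Monte Carlo transport solve: returns one-group
        reaction rates for the current composition (toy model)."""
        return 0.1 / (1.0 + 0.5 * n)

    def predictor_corrector_step(n, dt):
        r_bos = transport(n)                    # beginning-of-step rates
        n_pred = n * np.exp(-r_bos * dt)        # predictor depletion
        r_eos = transport(n_pred)               # rates at predicted end of step
        return n * np.exp(-0.5 * (r_bos + r_eos) * dt)  # corrector: average

    print(predictor_corrector_step(np.array([1.0]), dt=5.0))
    ```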

  6. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physics models of both codes are unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code was further improved with important features. The MCOR code presents several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements

  7. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    International Nuclear Information System (INIS)

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-01-01

As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code.
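
    The yield-selection logic lends itself to a short sketch. The group boundaries and data below are illustrative assumptions, not the values actually used by MCNPX/CINDER90:

    ```python
    import numpy as np

    def select_fission_yield(energies, weights):
        """Pick the yield set (thermal/fast/high-energy) from the energy
        range that contains the majority of fissions."""
        bounds = {"thermal": (0.0, 1.0), "fast": (1.0, 1.0e6),
                  "high": (1.0e6, np.inf)}                    # eV, assumed
        totals = {name: weights[(energies >= lo) & (energies < hi)].sum()
                  for name, (lo, hi) in bounds.items()}
        return max(totals, key=totals.get)

    # Toy fission-energy sample standing in for tallied fission events.
    e = np.random.default_rng(2).lognormal(mean=0.0, sigma=3.0, size=100_000)
    print(select_fission_yield(e, np.ones_like(e)))
    ```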

  8. Monte Carlo simulation in UWB1 depletion code

    International Nuclear Information System (INIS)

    Lovecky, M.; Prehradny, J.; Jirickova, J.; Skoda, R.

    2015-01-01

The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) allow the compensation of the initial reactivity excess of nuclear fuel and result in an increase of fuel cycle lengths with higher enriched fuels. The paper describes depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, rare earth oxides are divided into two equally numerous groups: suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides marked as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article.

  9. Monte Carlo depletion analysis of SMART core by MCNAP code

    International Nuclear Information System (INIS)

    Jung, Jong Sung; Sim, Hyung Jin; Kim, Chang Hyo; Lee, Jung Chan; Ji, Sung Kyun

    2001-01-01

Depletion analysis of SMART, a small-sized advanced integral PWR under development by KAERI, is conducted using the Monte Carlo (MC) depletion analysis program MCNAP. The results are compared with those of the CASMO-3/MASTER nuclear analysis. The difference between MASTER and MCNAP in the k_eff prediction is about 600 pcm at BOC and becomes smaller as the core burnup increases. The maximum difference between the two predictions of the fuel assembly (FA) normalized power distribution is about 6.6% radially and 14.5% axially, but the differences lie within the standard deviation of the MC estimates.

  10. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
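
    A toy illustration of the strategy (pure Python, no MPI; every number is invented): each domain keeps a bank of its own particles and hands off those that fly into a neighbour's slab. In RMC the hand-off would be an asynchronous message between parallel processors rather than a queue transfer.

    ```python
    import random
    from collections import deque

    random.seed(3)
    n_dom, width = 4, 10.0                      # four slab domains, 10 cm each
    banks = [deque() for _ in range(n_dom)]
    banks[0].extend(0.5 * width for _ in range(1000))   # source in domain 0
    finished = 0

    while any(banks):
        for d in range(n_dom):                  # the "processors", round-robin
            for _ in range(len(banks[d])):
                x = banks[d].popleft()
                if random.random() < 0.3:       # absorbed: history ends
                    finished += 1
                    continue
                x += random.gauss(0.0, 3.0)     # random flight, local coords
                shift = int(x // width)
                target = d + shift
                if 0 <= target < n_dom:
                    banks[target].append(x - shift * width)  # hand-off or keep
                else:
                    finished += 1               # leaked out of the geometry
    print(f"{finished} particle histories completed")
    ```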

  11. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    International Nuclear Information System (INIS)

    Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  12. Monte Carlo Depletion with Critical Spectrum for Assembly Group Constant Generation

    International Nuclear Information System (INIS)

    Park, Ho Jin; Joo, Han Gyu; Shim, Hyung Jin; Kim, Chang Hyo

    2010-01-01

The conventional two-step procedure has been used in practical nuclear reactor analysis. In this procedure, a deterministic assembly transport code such as HELIOS or CASMO is normally used to generate the multigroup flux distribution employed in few-group cross section generation. Recently, accuracy issues have arisen related to the resonance treatment or the double heterogeneity (DH) treatment for VHTR fuel blocks. In order to mitigate these accuracy issues, Monte Carlo (MC) methods can be used as an alternative way to generate few-group cross sections, because the accuracy of MC calculations benefits from their ability to use continuous-energy nuclear data and detailed geometric information. In an earlier work, the conventional methods of obtaining multigroup cross sections and the critical spectrum were implemented in the McCARD Monte Carlo code. However, that work was incomplete in that the critical spectrum was not reflected in the depletion calculation. The purpose of this study is to develop a method to apply the critical spectrum to MC depletion calculations, to correct for the leakage effect in the depletion calculation, and then to examine the MC based group constants within the two-step procedure by comparing the two-step solution with the direct whole-core MC depletion result.

  13. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tippayakul, C.; Ivanov, K. [Pennsylvania State Univ., Univ. Park (United States); Misu, S. [AREVA NP GmbH, An AREVA and SIEMENS Company, Erlangen (Germany)

    2006-07-01

This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k_inf, fission rate distributions and isotopic contents. (authors)

  14. The development of depletion program coupled with Monte Carlo computer code

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan

    2015-01-01

The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called the MCDL code (Monte Carlo Depletion for Light Water Reactors). The first-order differential depletion equations for a system of 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA Implicit Runge-Kutta (IRK) method after receiving the neutron flux, one-group reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core from the calculation results of the MCNP5 code. The calculation of beryllium poisoning and cooling time is also integrated in the code. To verify and validate the MCDL code, high enriched uranium (HEU) and low enriched uranium (LEU) VVR-M2 type fuel assemblies, as well as Dalat Nuclear Research Reactor (DNRR) cores with 89 fresh HEU fuel assemblies and with 92 fresh LEU fuel assemblies, have been investigated and compared with the results calculated by the SRAC code and the MCNP-REBUS linkage system code. The results show good agreement between the calculated data of the MCDL code and the reference codes. (author)
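
    The depletion system is a stiff linear ODE system, dN/dt = M N, and a Radau IIA implicit Runge-Kutta solver of the kind the abstract names is available off the shelf, e.g. in SciPy. The matrix below is an invented three-nuclide chain, not MCDL's 71-isotope system:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative chain A -> B -> C with a flux-induced loss on A.
    M = np.array([[-1.0e-4,  0.0,    0.0],
                  [ 1.0e-4, -2.0e-6, 0.0],
                  [ 0.0,     2.0e-6, 0.0]])

    # scipy's "Radau" method is an implicit Radau IIA scheme of order 5.
    sol = solve_ivp(lambda t, n: M @ n, t_span=(0.0, 1.0e5),
                    y0=[1.0e24, 0.0, 0.0], method="Radau", rtol=1e-8)
    print(sol.y[:, -1])   # densities at the end of the step
    ```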

  15. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
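
    A hedged sketch of the sub-step idea, not BGCore's actual scheme: within one transport time step, the reaction rate is re-evaluated on sub-steps as a simple function of the evolving density and of a temperature from a stand-in TH update. All models and numbers are invented.

    ```python
    import numpy as np

    def rate(n, T):
        """Toy reaction rate depending on density and coolant temperature."""
        return 0.05 * n * (1.0 - 1.0e-4 * (T - 600.0))

    def th_update(power):
        """Stand-in thermal-hydraulic solve: power level -> temperature (K)."""
        return 600.0 + 50.0 * power

    n, dt, n_sub = 1.0, 10.0, 10
    for _ in range(n_sub):                 # sub-steps inside one transport step
        T = th_update(power=0.05 * n)      # TH responds to the current power
        n *= np.exp(-rate(n, T) * dt / n_sub)
    print(f"end-of-step density: {n:.4f}")
    ```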

  16. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  17. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can also be adjusted adaptively over time according to the uncertainty of the robot's pose by using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples toward the regions where the desired posterior density is large, so a small number of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.

  18. Adding trend data to Depletion-Based Stock Reduction Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...

  19. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on samples of small length. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices. For the two separate periods 2002-2005 and 2006-2009, we estimate the tail exponent.
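
    For reference, the Hill estimator itself, plus the kind of Monte Carlo experiment the abstract describes. A Pareto tail stands in for the α-stable case here, and all parameters are illustrative:

    ```python
    import numpy as np

    def hill(x, k):
        """Hill estimator of the tail exponent from the k largest order
        statistics, with the (k+1)-th largest value as the threshold."""
        s = np.sort(np.abs(x))
        return k / np.sum(np.log(s[-k:] / s[-(k + 1)]))

    rng = np.random.default_rng(4)
    alpha_true, n, k = 1.5, 500, 50
    # Repeated small samples from a heavy-tailed law expose the bias.
    est = [hill(rng.pareto(alpha_true, size=n) + 1.0, k) for _ in range(2000)]
    print(f"true alpha {alpha_true}, mean Hill estimate {np.mean(est):.3f}")
    ```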

  20. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients with respect to the concerned parameter, a polynomial estimate of the k_eff curve is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
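
    A sketch of the search step under stated assumptions: one criticality run is taken to provide k_eff at a reference parameter value together with its first two derivatives (as a perturbation capability might), and the critical value is the nearby root of the resulting polynomial. All numbers are invented.

    ```python
    import numpy as np

    p0, k0 = 600.0, 1.02500           # e.g. boron concentration (ppm), k_eff
    dk_dp, d2k_dp2 = -1.0e-4, 2.0e-8  # derivatives from the same single run

    # k(p) ~ k0 + dk*(p-p0) + 0.5*d2k*(p-p0)^2 ; solve k(p) = 1 for p.
    coeffs = [0.5 * d2k_dp2, dk_dp, k0 - 1.0]   # polynomial in (p - p0)
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    p_crit = p0 + real[np.argmin(np.abs(real))] # root nearest the reference
    print(f"estimated critical parameter: {p_crit:.1f}")
    ```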

  1. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
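
    The paper derives the diffusion coefficient through four different models; the simplest conceivable variant, a Fick's-law ratio of tallied net current to flux gradient, can be sketched with invented tally values:

    ```python
    # Fick's-law extraction from standard tallies (all numbers invented):
    phi_left, phi_right = 1.00, 0.92   # cell-averaged scalar flux tallies
    j_net = 2.1e-3                     # net current tally on the shared surface
    dx = 0.5                           # distance between cell centres (cm)

    grad_phi = (phi_right - phi_left) / dx
    d_coef = -j_net / grad_phi         # Fick's law: J = -D * grad(phi)
    print(f"D ~ {d_coef:.3f} cm")
    ```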

  2. Evaluation of acute tryptophan depletion and sham depletion with a gelatin-based collagen peptide protein mixture

    DEFF Research Database (Denmark)

    Stenbæk, Dea Siggaard; Einarsdottir, H S; Goregliad-Fjaellingsdal, T

    2016-01-01

    Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures...

  3. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

Based on the Monte Carlo code MCNP, the continuous-energy Monte Carlo multi-group constant generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants, and the super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Moreover, the same code and data library can be used for a wide range of applications thanks to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
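
    The track-length scheme for group-constant generation amounts to a flux-weighted collapse, sigma_g = sum(l·sigma(E)) / sum(l) over the track segments in group g. A sketch with invented energies, tracks and cross-section shape:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    E = rng.uniform(1e-2, 1e6, size=100_000)    # segment energies (eV)
    l = rng.exponential(1.0, size=E.size)       # track lengths (cm)
    sigma = lambda E: 1.0 + 10.0 / np.sqrt(E)   # toy 1/v-like cross section

    edges = np.array([1e-2, 1.0, 1e6])          # two coarse groups
    for g, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        m = (E >= lo) & (E < hi)
        sig_g = np.sum(l[m] * sigma(E[m])) / np.sum(l[m])
        print(f"group {g}: sigma = {sig_g:.3f}")
    ```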

  4. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

Monte Carlo (MC) calculations and Graphical Processing Units (GPUs) fit together almost as if the hardware had been designed for this specific task, given the similarities between visible light transport and neutral particle trajectories. A GPU based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  5. Monte Carlo evaluation of derivative-based global sensitivity measures

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, S. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)], E-mail: s.kucherenko@ic.ac.uk; Rodriguez-Fernandez, M. [Process Engineering Group, Instituto de Investigaciones Marinas, Spanish Council for Scientific Research (C.S.I.C.), C/ Eduardo Cabello, 6, 36208 Vigo (Spain); Pantelides, C.; Shah, N. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)

    2009-07-15

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.
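
    A crude sketch of how the DGSM nu_i = E[(df/dx_i)^2] can be estimated by sampling; the finite-difference step and the toy model are assumptions, and a quasi-Monte Carlo (e.g. Sobol') point set could replace the random sample, as the abstract discusses:

    ```python
    import numpy as np

    def dgsm(f, bounds, n=4096, h=1e-6, seed=5):
        """Monte Carlo estimate of derivative-based global sensitivity
        measures nu_i = E[(df/dx_i)^2] via forward finite differences."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        x = lo + (hi - lo) * rng.random((n, len(lo)))   # uniform sample
        f0 = f(x)
        nu = np.empty(len(lo))
        for i in range(len(lo)):
            xp = x.copy()
            xp[:, i] += h
            nu[i] = np.mean(((f(xp) - f0) / h) ** 2)
        return nu

    g = lambda x: x[:, 0] + 0.5 * x[:, 1] ** 2      # toy model
    print(dgsm(g, bounds=[(0, 1), (0, 1)]))         # ~[1.0, E[x2^2] = 1/3]
    ```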

  6. Monte Carlo evaluation of derivative-based global sensitivity measures

    International Nuclear Information System (INIS)

    Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N.

    2009-01-01

    A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.

  7. Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats

    International Nuclear Information System (INIS)

    Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki; Minowa, Yosuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2010-01-01

    Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate scoring

  8. Development of a practical Monte Carlo based fuel management system for the Penn State University Breazeale Research Reactor (PSBR)

    International Nuclear Information System (INIS)

    Tippayakul, Chanatip; Ivanov, Kostadin; Frederick Sears, C.

    2008-01-01

A practical fuel management system for the Pennsylvania State University Breazeale Research Reactor (PSBR), based on advanced Monte Carlo methodology, was developed in this research from the existing fuel management tool. Several modeling improvements were implemented in the old system. The improved fuel management system can now utilize the burnup dependent cross section libraries generated specifically for PSBR fuel, and it is also able to update the cross sections of these libraries automatically by Monte Carlo calculation. Consideration was given to balancing the computation time and the accuracy of the cross section update; thus, only a limited number of isotope types considered 'important' are calculated and updated by the scheme. Moreover, the depletion algorithm of the existing fuel management tool was upgraded from a predictor-only to a predictor-corrector depletion scheme, to account more accurately for burnup spectrum changes during the burnup step. An intermediate verification of the fuel management system was performed to assess the correctness of the newly implemented schemes against HELIOS. It was found that the agreement of both codes is good when the same energy released per fission (Q values) is used. Furthermore, to be able to model the reactor at various temperatures, the fuel management tool can automatically utilize the continuous-energy cross sections generated at different temperatures. Other useful capabilities were also added to the fuel management tool to make it easy to use and practical. As part of the development, a hybrid nodal diffusion/Monte Carlo calculation was devised to speed up the Monte Carlo calculation by providing a better-converged initial source distribution for the Monte Carlo calculation from the nodal diffusion calculation. Finally, the fuel management system was validated against the measured data using several actual PSBR core loadings. The agreement of the predicted core

  9. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2016-11-15

Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR type reactor based on the Monte Carlo radiation transport computer code MCNP5. The ENDF/B-VI continuous energy cross-section library has been employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (maximum difference is less than 56 %). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma flux and dose rates in the radial shield around the core of PWR type reactors.

  10. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed.
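
    Delta scattering (Woodcock tracking) can be sketched compactly; the 1-D slab, cross sections and majorant below are invented for illustration:

    ```python
    import numpy as np

    def woodcock_track(x0, mu, sigma_of_x, sigma_maj, x_max, rng):
        """Delta-scattering (Woodcock) tracking in a 1-D slab: flight
        lengths are sampled from a majorant cross section, and real
        collisions are accepted with probability sigma(x)/sigma_maj, so
        no boundary checks are needed between material regions."""
        x = x0
        while True:
            x += mu * rng.exponential(1.0 / sigma_maj)    # majorant flight
            if not 0.0 <= x <= x_max:
                return x, "leaked"
            if rng.random() < sigma_of_x(x) / sigma_maj:  # real collision
                return x, "collision"
            # otherwise a virtual (delta) collision: keep flying

    rng = np.random.default_rng(6)
    sigma = lambda x: 0.2 if x < 5.0 else 1.0             # heterogeneous slab
    print(woodcock_track(0.0, 1.0, sigma, sigma_maj=1.0, x_max=10.0, rng=rng))
    ```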

  11. Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation

    International Nuclear Information System (INIS)

    Wulff, Joerg

    2010-01-01

Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine, since their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to the non-reference conditions of modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport is becoming a valuable tool in the field of Medical Physics. As a suitable tool for calculating these corrections with high accuracy, Monte Carlo simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which in principle enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the needed calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment.

  12. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li

    2014-03-01

Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations. Yet, the long computational time of repeated dose calculations for a number of beamlets prevents this application. It is our objective to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet the source particle is from. Deposited dose is stored separately for each beamlet based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows, giving an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow is developed. The high efficiency allows the use of MC for IMRT optimizations.

  13. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  14. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  15. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method, all the physical effects in a PET system are taken into account, and thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required in the iterative reconstruction algorithm, so, to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)

  16. EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE

    Science.gov (United States)

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...

  17. Experimental Acidification Causes Soil Base-Cation Depletion at the Bear Brook Watershed in Maine

    Science.gov (United States)

    Ivan J. Fernandez; Lindsey E. Rustad; Stephen A. Norton; Jeffrey S. Kahl; Bernard J. Cosby

    2003-01-01

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to elevated N and S deposition through bimonthly additions of (NH4)2SO4. Quantitative soil...

  18. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  19. Monte Carlo Based Framework to Support HAZOP Study

    DEFF Research Database (Denmark)

    Danko, Matej; Frutiger, Jerome; Jelemenský, Ľudovít

    2017-01-01

    deviations in process parameters simultaneously, thereby bringing an improvement to the Hazard and Operability study (HAZOP), which normally considers only one at a time deviation in process parameters. Furthermore, Monte Carlo filtering was then used to identify operability and hazard issues including...

  20. Base cation depletion and potential long-term acidification of Norwegian catchments

    International Nuclear Information System (INIS)

    Kirchner, J.W.; Lydersen, E.

    1995-01-01

    Long-term monitoring data from Norwegian catchments show that since the late 1970s, sulfate deposition and runoff sulfate concentrations have declined significantly. However, water quality has not significantly improved, because reductions in runoff sulfate have been matched by equal declines in calcium and magnesium concentrations. Long-term declines in runoff Ca and Mg are most pronounced at catchments subject to highly acidic deposition; the observed rates of decline are quantitatively consistent with depletion of exchangeable bases by accelerated leaching under high acid loading. Even though water quality has not recovered, reductions in acid deposition have been valuable because they have prevented significant acidification that would otherwise have occurred under constant acid deposition. Ongoing depletion of exchangeable bases from these catchments implies that continued deposition reductions will be needed to avoid further acidification and that recovery from acidification will be slow. 31 refs., 2 figs., 4 tabs

  1. Monte Carlo-based simulation of dynamic jaws tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S. [Department of Molecular Imaging, Radiotherapy and Oncology, Universite Catholique de Louvain, 54 Avenue Hippocrate, 1200 Brussels, Belgium and Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States); 21st Century Oncology, 1240 D'Onofrio, Madison, Wisconsin 53719 (United States); TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 and Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Department of Radiotherapy and Oncology, Universite Catholique de Louvain, St-Luc University Hospital, 10 Avenue Hippocrate, 1200 Brussels (Belgium)

    2011-09-15

    Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC-based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the "cheese" phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed ("running start stop," RSS) and symmetric jaws-variable couch speed ("symmetric running start stop," SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis).

  2. Monte Carlo-based simulation of dynamic jaws tomotherapy

    International Nuclear Information System (INIS)

    Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.

    2011-01-01

    Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC-based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the "cheese" phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed ("running start stop," RSS) and symmetric jaws-variable couch speed ("symmetric running start stop," SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis).

  3. Depleted uranium

    International Nuclear Information System (INIS)

    Huffer, E.; Nifenecker, H.

    2001-02-01

    This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why does the military use depleted uranium, and what are the risks to health? (A.L.B.)

  4. Mesh-based weight window approach for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, L.; Gardner, R.P.

    1997-01-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major factor limiting its application; to obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among the various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust, and it is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and the map is then used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness: it normally requires users to divide large geometric cells into smaller ones by introducing additional surfaces, to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
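
    The splitting/Russian-roulette game played at a weight window is simple to state in code. The sketch below is a minimal illustration of the standard game, not the MCNP implementation; the survival-weight choice and the function name are assumptions made for this example.

```python
import random

def apply_weight_window(weight, w_low, w_up):
    """Play the weight-window game for one particle (illustrative sketch).

    Returns a list of surviving particle weights: multiple entries mean
    the particle was split; an empty list means roulette killed it.
    """
    if weight > w_up:
        # Split: divide the particle into n copies that land inside the window.
        n = int(weight / w_up) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv.
        w_surv = 0.5 * (w_low + w_up)   # survival weight (one common choice)
        if random.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]  # inside the window: leave the particle unchanged

# Example: a particle of weight 5.3 entering a cell with window [0.5, 2.0]
print(apply_weight_window(5.3, 0.5, 2.0))
```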

  5. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation-based Monte Carlo criticality search method is proposed. • The method can obtain accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between results’ accuracy and perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of criticality results. In this paper, a new perturbation-based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate the k_eff response function, using the initial k_eff and differential coefficient results, and solves polynomial equations to obtain the criticality search results. The new perturbation-based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality searches in density, enrichment and concentration are carried out. Results show that this method is promising in accuracy and efficiency, and has advantages over other criticality search methods.
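
    The record's key idea — estimating the k_eff response to a parameter change from the perturbation tallies of a single run and then solving a polynomial for the critical value — can be illustrated with a short sketch. The coefficients below are invented stand-ins for first- and second-order differential tallies, not values from RMC.

```python
import numpy as np

# Hypothetical output of a single perturbed criticality run:
# k_eff(x0 + dx) ~ k0 + c1*dx + c2*dx**2, with x the boron concentration (ppm).
k0, c1, c2 = 1.02345, -8.0e-5, 1.5e-9   # illustrative values, not real tallies

# Solve k_eff(x0 + dx) = 1 for dx (numpy.roots wants highest order first).
roots = np.roots([c2, c1, k0 - 1.0])
dx = min((r.real for r in roots if abs(r.imag) < 1e-12), key=abs)
print(f"estimated critical concentration change: {dx:.1f} ppm")
```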

  6. Soil nutrients, aboveground productivity and vegetative diversity after 10 years of experimental acidification and base cation depletion

    Science.gov (United States)

    Mary Beth Adams; James A. Burger

    2010-01-01

    Soil acidification and base cation depletion are concerns for those wishing to manage central Appalachian hardwood forests sustainably. In this research, two experiments were established in 1996 and 1997 in two forest types common in the central Appalachian hardwood forests, to examine how these important forests respond to depletion of nutrients such as calcium and...

  7. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2017-04-15

    This paper presents the radiation shielding model of a typical PWR (CNPP-II) at Chashma, Pakistan. The model was developed using Monte Carlo N Particle code [2], equipped with ENDF/B-VI continuous energy cross section libraries. This model was applied to calculate the neutron and gamma flux and dose rates in the radial direction at core mid plane. The simulated results were compared with the reference results of Shanghai Nuclear Engineering Research and Design Institute (SNERDI).

  8. Systematic uncertainties on Monte Carlo simulation of lead based ADS

    International Nuclear Information System (INIS)

    Embid, M.; Fernandez, R.; Garcia-Sanz, J.M.; Gonzalez, E.

    1999-01-01

    Computer simulations of the neutronic behaviour of ADS systems foreseen for actinide and fission product transmutation are affected by many sources of systematic uncertainties, both from the nuclear data and by the methodology selected when applying the codes. Several actual ADS Monte Carlo simulations are presented, comparing different options both for the data and for the methodology, evaluating the relevance of the different uncertainties. (author)

  9. SELF-ABSORPTION CORRECTIONS BASED ON MONTE CARLO SIMULATIONS

    Directory of Open Access Journals (Sweden)

    Kamila Johnová

    2016-12-01

    The main aim of this article is to demonstrate how Monte Carlo simulations are implemented in our gamma spectrometry laboratory at the Department of Dosimetry and Application of Ionizing Radiation in order to calculate the self-absorption within samples. A model of a real HPGe detector created for MCNP simulations is presented in this paper. All parameters that may influence the self-absorption are first discussed theoretically and later described using the calculated results.

  10. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    International Nuclear Information System (INIS)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables
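
    The record does not reproduce either model's parameters, but the role of the dissolution rates can be illustrated with a minimal sketch assuming simple first-order dissolution kinetics; the fractions and rate constants below are hypothetical, not the ICRP Task Group or Cuddihy values.

```python
import numpy as np

def cleared_to_blood(t_days, fractions, rates):
    """Cumulative fraction dissolved to blood, assuming first-order kinetics:
    each fraction f_i dissolves with rate constant k_i (1/day). Illustrative
    only -- not the actual ICRP Task Group or Cuddihy model parameters."""
    fractions, rates = np.asarray(fractions), np.asarray(rates)
    return np.sum(fractions * (1.0 - np.exp(-np.outer(t_days, rates))), axis=1)

t = np.array([1, 10, 100, 1000])                          # days after inhalation
model_a = cleared_to_blood(t, [0.1, 0.9], [0.1, 1e-3])    # hypothetical set A
model_b = cleared_to_blood(t, [0.3, 0.7], [0.05, 5e-4])   # hypothetical set B
for ti, a, b in zip(t, model_a, model_b):
    print(f"day {ti:5d}: A = {a:.3f}, B = {b:.3f}")
```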

  11. Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    Crist, K.C.

    1984-10-01

    An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.

  12. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate skin fluorescence spectra.

  13. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC is designed to perform transport calculations for various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. Bi-directional automatic conversion between general CAD models and physical settings and calculation models is well supported. Results and the simulation process can be visualized with dynamic 3D datasets and geometry models. Continuous-energy cross section, burnup, activation, irradiation damage and material data are used to support the multi-process simulation. An advanced cloud computing framework makes extremely computation- and storage-intensive simulations more accessible as a network service to support design optimization and assessment. The modular design and generic interfaces promote flexible manipulation and coupling of external solvers. • The newly developed and incorporated advanced methods in SuperMC are introduced, including the hybrid MC-deterministic transport method, particle physical interaction treatment method, multi-physics coupling calculation method, geometry automatic modeling and processing method, intelligent data analysis and visualization method, elastic cloud computing technology and parallel calculation method. • The functions of SuperMC 2.1, integrating automatic modeling, neutron and photon transport calculation, and results and process visualization, are introduced. It has been validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine

  14. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors into treatment plans. The aim of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT that minimizes these errors. The strategy was to precalculate the dose matrices of beamlets with a Monte Carlo based method, followed by optimization of the beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport, dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided simple and accurate modeling of treatment beams. The dose matrix calculations were validated through measurements in phantoms; agreement was better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform-based optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases, and the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT were demonstrated. Systematic errors in treatment plans from a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations; discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated.
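
    The two-stage strategy — precompute a beamlet dose matrix, then optimize the intensities — can be sketched compactly. The example below uses non-negative least squares as a stand-in objective; the paper's own optimizer (a parallel algorithm that escapes local minima) is not reproduced, and the matrices are random placeholders rather than Monte Carlo dose data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 40

# D[i, j]: dose to voxel i per unit intensity of beamlet j. In the paper these
# matrices come from Monte Carlo; here they are random stand-ins.
D = rng.random((n_voxels, n_beamlets))
d_target = rng.random(n_voxels) * 2.0    # prescribed voxel doses (illustrative)

# Find non-negative beamlet intensities w minimizing ||D w - d_target||_2.
w, residual = nnls(D, d_target)
print(f"active beamlets: {(w > 0).sum()} / {n_beamlets}, residual = {residual:.3f}")
```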

  15. RCPO1 - A Monte Carlo program for solving neutron and photon transport problems in three dimensional geometry with detailed energy description and depletion capability

    International Nuclear Information System (INIS)

    Ondis, L.A. II; Tyburski, L.J.; Moskowitz, B.S.

    2000-01-01

    The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations

  16. RCPO1 - A Monte Carlo program for solving neutron and photon transport problems in three dimensional geometry with detailed energy description and depletion capability

    Energy Technology Data Exchange (ETDEWEB)

    Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

    2000-03-01

    The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

  17. Comparative evaluations of the Monte Carlo-based light propagation simulation packages for optical imaging

    Directory of Open Access Journals (Sweden)

    Lin Wang

    2018-01-01

    Monte Carlo simulation of light propagation in turbid media has been studied for years, and a number of software packages have been developed to handle this issue. However, it is hard to compare these simulation packages, especially for tissues with complex heterogeneous structures. Here, we first designed a group of mesh datasets generated with the Iso2Mesh software and used them to cross-validate the accuracy and evaluate the performance of four Monte Carlo-based simulation packages: the Monte Carlo model of steady-state light transport in multi-layered tissues (MCML), the tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIMOS), the Molecular Optical Simulation Environment (MOSE), and Mesh-based Monte Carlo (MMC). The performance of each package was evaluated on the designed mesh datasets, and the merits and demerits of each package were discussed. Comparative results showed that the TIMOS package provided the best performance, proving to be a reliable, efficient, and stable MC simulation package for users.

  18. Acceptance and implementation of a system of planning computerized based on Monte Carlo

    International Nuclear Information System (INIS)

    Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.

    2013-01-01

    The Monaco computerized planning system has been accepted for clinical use. The system is based on a virtual model of the energy yield of the linear electron accelerator head and calculates the dose with an x-ray algorithm (XVMC) based on the Monte Carlo method. (Author)

  19. Continuous energy Monte Carlo method based homogenization multi-group constants calculation

    International Nuclear Information System (INIS)

    Li Mancang; Wang Kan; Yao Dong

    2012-01-01

    The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, resulting in great versatility when using Monte Carlo codes for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track-length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques; the scattering event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B_N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)
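
    The track-length scheme mentioned as the foundation of cross section generation amounts to flux-weighted averaging over simulated track segments. A minimal sketch with invented segment data follows; in the MCMC code the corresponding tallies would be accumulated during the transport simulation itself.

```python
import numpy as np

# Each simulated track segment: (energy group g, track length l, weight w,
# total cross section sigma_t at that energy). All values are illustrative.
segments = [(0, 1.2, 1.0, 0.30), (0, 0.7, 1.0, 0.28),
            (1, 2.1, 1.0, 0.55), (1, 0.4, 0.5, 0.60)]
n_groups = 2

flux = np.zeros(n_groups)          # sum of w*l  (track-length flux estimate)
rate = np.zeros(n_groups)          # sum of w*l*sigma_t (reaction-rate tally)
for g, l, w, sigma_t in segments:
    flux[g] += w * l
    rate[g] += w * l * sigma_t

# Homogenized group constant: flux-weighted average cross section per group.
sigma_g = rate / flux
print("group-wise sigma_t:", sigma_g)
```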

  20. Evaluation of tomographic-image based geometries with PENELOPE Monte Carlo

    International Nuclear Information System (INIS)

    Kakoi, A.A.Y.; Galina, A.C.; Nicolucci, P.

    2009-01-01

    The Monte Carlo method can be used to evaluate treatment planning systems or to determine dose distributions in radiotherapy planning due to its accuracy and precision. In the Monte Carlo simulation packages typically used in radiotherapy, however, a realistic representation of the geometry of the patient cannot be used, which compromises the accuracy of the results. In this work, an algorithm for the description of geometries based on CT images of patients, developed for use with the Monte Carlo simulation package PENELOPE, is tested by simulating the dose distribution produced by a 10 MV photon beam. The simulated geometry was based on CT images from a prostate cancer treatment plan. The volumes of interest in the treatment were adequately represented in the simulation geometry, allowing the algorithm to be used in the verification of doses in radiotherapy treatments. (author)

  1. Acceptance and implementation of a system of planning computerized based on Monte Carlo; Aceptacion y puesta en marcha de un sistema de planificacion comutarizada basado en Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.

    2013-07-01

    The Monaco computerized planning system has been accepted for clinical use. The system is based on a virtual model of the energy yield of the linear electron accelerator head and calculates the dose with an x-ray algorithm (XVMC) based on the Monte Carlo method. (Author)

  2. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy; thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs

  3. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal

  4. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)

    2009-01-15

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - and other computer science techniques) already and successfully used for a long time in other scientific and industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  5. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses data libraries that are convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining instantaneous quantities such as radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared against ORIGEN-2 prove the validity of DEPTH in point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code when coupled with the RMC Monte Carlo code.
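
    The matrix exponential methods implemented in DEPTH solve dN/dt = AN as N(t) = exp(At)N(0). A toy sketch using SciPy's dense expm on a three-nuclide chain illustrates the formulation; the rate constants are illustrative, and real burnup matrices are large, sparse and stiff, which is precisely why methods such as TTA and CRAM are used rather than a dense exponential.

```python
import numpy as np
from scipy.linalg import expm

# Toy depletion system dN/dt = A N for a chain X -> Y -> Z (rates in 1/s are
# illustrative). Z is treated as stable here.
lam_x, lam_y = 1e-4, 5e-6
A = np.array([[-lam_x,    0.0, 0.0],
              [ lam_x, -lam_y, 0.0],
              [   0.0,  lam_y, 0.0]])
N0 = np.array([1.0e20, 0.0, 0.0])        # initial number densities

t = 3600.0 * 24                          # one day
N = expm(A * t) @ N0
print("densities after one day:", N)
```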

  6. A lattice-based Monte Carlo evaluation of Canada Deuterium Uranium-6 safety parameters

    International Nuclear Information System (INIS)

    Kim, Yong Hee; Hartanto, Donny; Kim, Woo Song

    2016-01-01

    Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANada Deuterium Uranium (CANDU-6) reactor have been evaluated using the Monte Carlo method. For accurate analysis of the parameters, the Doppler broadening rejection correction scheme was implemented in the MCNPX code to account for the thermal motion of the heavy uranium-238 nucleus in neutron-uranium scattering reactions. In this work, a standard fuel lattice was modeled and the fuel depleted using MCNPX. The FTC value was evaluated at several burnup points, including mid-burnup, representing a near-equilibrium core. The Doppler effect was evaluated using several cross-section libraries, such as ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. The PCR value was also evaluated at mid-burnup conditions to characterize the safety features of an equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a very large number of neutron histories was considered, and the standard deviation of the k-infinity values is only 0.5-1 pcm.
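
    A temperature coefficient of this kind is typically derived from two k values computed at different fuel temperatures, with the Monte Carlo uncertainties propagated through. The sketch below shows the arithmetic under the usual definition rho = (k - 1)/k; the k values and their standard deviations are hypothetical, not results from this record.

```python
import math

def fuel_temp_coeff(k1, s1, k2, s2, dT):
    """FTC in pcm/K from two Monte Carlo k values (hypothetical numbers).
    Uses rho = (k - 1)/k, so FTC = (k2 - k1) / (k1*k2*dT)."""
    ftc = (k2 - k1) / (k1 * k2 * dT) * 1e5
    # Propagate the statistical uncertainties: d(rho)/dk = 1/k**2.
    sigma = math.hypot(s1 / k1**2, s2 / k2**2) / dT * 1e5
    return ftc, sigma

ftc, sigma = fuel_temp_coeff(1.04520, 1e-5, 1.04392, 1e-5, 300.0)
print(f"FTC = {ftc:.3f} +/- {sigma:.3f} pcm/K")
```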

  7. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior
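
    DREAM is an adaptive, multi-chain MCMC sampler; its core mechanism is still the Metropolis accept/reject step. A single-chain random-walk Metropolis sketch on a stand-in Gaussian posterior illustrates that mechanism; the target, step size and chain length are arbitrary choices for the example, not the geostatistical model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    # Stand-in target: a 2-D Gaussian posterior (illustrative; DREAM would
    # target the posterior of the geostatistical model parameters).
    return -0.5 * np.sum((theta - np.array([1.0, -2.0]))**2)

theta = np.zeros(2)
samples = []
for _ in range(5000):
    proposal = theta + 0.5 * rng.standard_normal(2)   # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples, axis=0))
```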

  8. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results show the improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems from CAD models. Automatic void filling is one of the main functions of CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

  9. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for an OTTO-cycle pebble bed reactor was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for the Once-Through-Then-Out case were investigated. • The ability of the code to accommodate a void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave 4.15% and 3.32% lower k_eff values compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave 2.22% and 3.11% higher k_eff values compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.

  10. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example of the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
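
    The surrogate idea is to train a classifier once on exactly-labeled failure realizations, then use it inside the Monte Carlo loop in place of the exact connectivity check. A minimal sketch with scikit-learn's SVC follows; the features, the toy connectivity rule and all sample sizes are stand-ins for the actual network data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-in data: each row is a component failure realization (1 = failed);
# labels are the exact source-terminal connectivity results (here synthetic).
X_train = rng.integers(0, 2, size=(2000, 50))
y_train = (X_train.sum(axis=1) < 20).astype(int)   # toy "connected" rule

clf = SVC(kernel="rbf").fit(X_train, y_train)      # train the surrogate once

# Monte Carlo loop: classify new realizations instead of running the
# expensive exact graph connectivity check on each one.
X_mc = rng.integers(0, 2, size=(20000, 50))
p_connected = clf.predict(X_mc).mean()
print(f"estimated connectivity probability: {p_connected:.4f}")
```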

  11. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central-axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
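
    The spectrum-derivation step can be illustrated as a constrained least-squares problem: find non-negative spectral weights so that a weighted sum of mono-energetic PDDs reproduces the measured PDD. The sketch below uses SciPy's bounded least_squares (a trust-region variant; the paper used Levenberg-Marquardt) and toy exponential basis curves in place of precomputed mono-energetic PDDs.

```python
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(0.5, 20.0, 40)                  # depth in water (cm)

# Mono-energetic basis PDDs (toy exponentials; in practice these would come
# from Monte Carlo pre-calculations for each energy bin).
mu = np.array([0.20, 0.10, 0.06, 0.04])             # effective attenuation
basis = np.exp(-np.outer(depth, mu))                # shape (n_depth, n_bins)

w_true = np.array([0.1, 0.3, 0.4, 0.2])             # "unknown" spectrum
pdd_measured = basis @ w_true + 1e-3 * np.random.default_rng(3).standard_normal(len(depth))

def residuals(w):
    return basis @ w - pdd_measured

fit = least_squares(residuals, x0=np.full(4, 0.25), bounds=(0.0, np.inf))
print("recovered spectral weights:", np.round(fit.x / fit.x.sum(), 3))
```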

  12. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central-axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  13. Research on reactor physics analysis method based on Monte Carlo homogenization

    International Nuclear Information System (INIS)

    Ye Zhimin; Zhang Peng

    2014-01-01

    In order to meet the demands of the nuclear energy market in the future, many new concepts of nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two respects: one is the capability for generic geometry processing; the other is the multi-spectrum applicability of the multigroup cross section libraries. Due to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupling analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation times and slow convergence, which make it inapplicable to reactor core fuel management simulations. Drawing from the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. First, Monte Carlo simulations are performed for the assembly, and the assembly-homogenized multi-group cross sections are tallied at the same time. Second, core diffusion calculations are done with these multigroup cross sections. The new scheme can achieve high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numerical tests have been done in this work to verify the new scheme. (authors)

  14. PENBURN - A 3-D Zone-Based Depletion/Burnup Solver

    International Nuclear Information System (INIS)

    Manalo, Kevin; Plower, Thomas; Rowe, Mireille; Mock, Travis; Sjoden, Glenn E.

    2008-01-01

    PENBURN (Parallel Environment Burnup) is a general depletion/burnup solver which, when provided with zone-based reaction rates, computes time-dependent isotope concentrations for a set of actinides and fission products. Burnup analysis in PENBURN is performed with a direct Bateman-solver chain solution technique. Specifically, PENBURN is used in tandem with PENTRAN, a parallel multi-group anisotropic Sn code for 3-D Cartesian geometries. In PENBURN, the linear chain method is actively used to solve individual isotope chains, which are then fully attributed by the burnup code to yield integrated isotope concentrations for each nuclide specified. Along with a discussion of code features, a single PWR fuel pin calculation with the burnup code is performed and detailed with a benchmark comparison to PIE (Post-Irradiation Examination) data within the SFCOMPO (Spent Fuel Composition / NEA) database, and also with burnup codes in SCALE 5.1. The paper's conclusions detail the accuracy of the major actinides in PENBURN, flux profile behavior as a function of burnup, and criticality calculations for the PWR fuel pin model. (authors)
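
    The direct Bateman chain solution at the heart of such a solver has a classic closed form for a linear chain with distinct constants. A minimal sketch follows; the chain and its decay/transmutation constants are illustrative, and degenerate (near-equal constant) cases need special handling that is omitted here.

```python
import numpy as np

def bateman(n1_0, lambdas, t):
    """Number density of the last member of a linear chain at time t, given
    N1(0) = n1_0 and distinct decay/transmutation constants lambdas (1/s).
    Classic Bateman solution; equal-lambda cases would divide by zero."""
    lam = np.asarray(lambdas, dtype=float)
    n = len(lam)
    total = 0.0
    for i in range(n):
        denom = np.prod([lam[j] - lam[i] for j in range(n) if j != i])
        total += np.exp(-lam[i] * t) / denom
    return n1_0 * np.prod(lam[:-1]) * total

# Three-member chain A -> B -> C with illustrative constants:
print(bateman(1.0e20, [1e-4, 5e-6, 1e-7], t=1.0e5))
```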

  15. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  16. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    Science.gov (United States)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Fluorescence of protein has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, Monte Carlo based methods yield the highest accuracy. In this work, we have attempted to generate a lookup table for Monte Carlo simulation of fluorescence emission by protein, and we fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein for real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  17. Visual improvement for bad handwriting based on Monte-Carlo method

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined for transforming the typeface image to approach the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied in tablet computers and the mobile Internet to improve the user experience with handwriting.

  18. ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali

    2016-07-01

    OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronics problems: k-eigenvalue criticality fission source problems and external fixed-source problems. OpenMC does not have a graphical user interface; one is provided by our Java-based application named ERSN-OpenMC. The main feature of this application is to provide users with an easy-to-use and flexible graphical interface to build better and faster simulations, with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the build process of the OpenMC code and related libraries, and the freedom for users to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.

  19. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety field that studies faults in the steering system itself, the driver’s human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historical fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and the environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME system.

  20. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  1. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With speed-up by vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
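
    The essential sampling step — drawing the distance to the nearest spherical fuel from the NND along the flight path — can be sketched with inverse-CDF sampling. The NND form below (a Poisson distribution of sphere centers beyond a contact distance) is a simplified stand-in for the statistical geometry model actually used in MCNP-CFP.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_nearest_sphere(n_density, d_contact, size=1):
    """Sample the distance to the nearest sphere center from a simplified NND.

    Assumes centers beyond the contact distance d_contact follow a Poisson
    process of density n_density, so F(r) = 1 - exp(-(4/3)*pi*n*(r^3 - d^3)).
    This is an illustrative stand-in for the code's actual NND tables.
    """
    u = rng.random(size)
    r3 = d_contact**3 - np.log(1.0 - u) * 3.0 / (4.0 * np.pi * n_density)
    return np.cbrt(r3)

# 0.06 sphere centers per cm^3 and 1 cm contact distance (illustrative values):
print(sample_nearest_sphere(n_density=0.06, d_contact=1.0, size=5))
```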

  2. Monte Carlo-based investigation of water-equivalence of solid phantoms at 137Cs energy

    International Nuclear Information System (INIS)

    Vishwakarma, Ramkrushna S.; Palani Selvam, T.; Sahoo, Sridhar; Mishra, Subhalaxmi; Chourasiya, Ghanshyam

    2013-01-01

    Investigation of solid phantom materials such as solid water, virtual water, plastic water, RW1, polystyrene, and polymethylmethacrylate (PMMA) for their equivalence to liquid water at the 137Cs energy (photon energy of 662 keV) under full scatter conditions is carried out using the EGSnrc Monte Carlo code system, which is used to calculate distance-dependent phantom scatter corrections. The study also includes separation of the primary and scattered dose components. Monte Carlo simulations are carried out using up to 5 × 10^9 primary particle histories to attain statistical uncertainties of less than 0.3% in the estimation of dose. The investigation reveals that the solid water, virtual water, and RW1 phantoms are water equivalent up to 15 cm from the source, while the plastic water, PMMA, and polystyrene phantoms are water equivalent up to 10 cm. At 15 cm from the source, the phantom scatter corrections are 1.035, 1.050, and 0.949 for PMMA, plastic water, and polystyrene, respectively. (author)

  3. Application of backtracking algorithm to depletion calculations

    International Nuclear Information System (INIS)

    Wu Mingyu; Wang Shixi; Yang Yong; Zhang Qiang; Yang Jiayin

    2013-01-01

    Based on the theory of the linear chain method for analytical depletion calculations, the burnup matrix is decoupled by a divide-and-conquer strategy and linear chains with Markov characteristics are formed. The density, activity and decay heat of every nuclide in a chain can then be calculated from analytical solutions. Every possible reaction path of a nuclide must be considered during the linear chain establishment process. To ensure calculation precision and efficiency, an algorithm is needed that covers all reaction paths and searches them automatically according to the problem description and precision restrictions. Through analysis and comparison of several kinds of searching algorithms, the backtracking algorithm was selected to establish and calculate the linear chains, searching with the depth-first search (DFS) method and forming an algorithm that can solve the depletion problem adaptively and with high fidelity. The complexity of the solution space and time was analyzed, taking into account the depletion process and the characteristics of the backtracking algorithm. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burnup problem of the first core of the China Experimental Fast Reactor (CEFR), and preliminary verification and validation of the program were performed. (authors)
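
    The backtracking search itself is a depth-first enumeration of paths through the transmutation graph, pruned by a precision restriction. A minimal sketch follows; the toy graph, branching fractions and cutoff are all illustrative, not data from the coupled MCMG-II calculation.

```python
# Illustrative sketch: enumerate linear chains through a small transmutation
# graph by depth-first backtracking, pruning paths whose cumulative branching
# fraction drops below a cutoff (the "precision restriction" in the record).
GRAPH = {                       # nuclide -> [(daughter, branching fraction)]
    "U238":  [("U239", 1.0)],
    "U239":  [("Np239", 1.0)],
    "Np239": [("Pu239", 1.0)],
    "Pu239": [("Pu240", 0.64), ("FP", 0.36)],   # capture vs. fission (toy)
    "Pu240": [],
    "FP":    [],
}

def enumerate_chains(nuclide, cutoff, path=(), weight=1.0):
    path = path + (nuclide,)
    daughters = GRAPH.get(nuclide, [])
    if not daughters:
        yield path, weight                           # chain terminates here
        return
    for daughter, frac in daughters:
        w = weight * frac
        if w >= cutoff and daughter not in path:     # backtrack on weak/cyclic paths
            yield from enumerate_chains(daughter, cutoff, path, w)

for chain, w in enumerate_chains("U238", cutoff=1e-3):
    print(" -> ".join(chain), f"(weight {w:.2f})")
```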

  4. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. A discussion of other computing solutions is presented; solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - and other computer science techniques) that have already been used successfully for a long time in other scientific and industrial applications beyond radiation protection and medical dosimetry. (authors)

  5. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    Science.gov (United States)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions and compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with the TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of the TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less in the target volume were obtained in 30 s or less on 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that allows envisioning the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
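
    For reference, the TG-43 radial dose function on the transverse axis divides the Monte Carlo dose rate by the line-source geometry factor, as in the following sketch. The seed length and the tallied dose rates below are made-up numbers, not SelectSeed data.

```python
import numpy as np

# Illustrative values only -- not the SelectSeed specification
L_SEED = 0.35   # cm, active source length
R0 = 1.0        # cm, TG-43 reference distance

def G_line(r, L=L_SEED):
    """TG-43 line-source geometry function on the transverse axis
    (theta = 90 deg): G_L(r, 90deg) = beta / (L * r), where beta is the
    angle the active length subtends at the calculation point."""
    beta = 2.0 * np.arctan(L / (2.0 * r))
    return beta / (L * r)

def radial_dose_function(r, dose_rate, dose_rate_r0):
    """g_L(r) = [D(r) / D(r0)] * [G_L(r0) / G_L(r)] on the transverse axis,
    i.e. the MC dose rate with the geometric fall-off divided out."""
    return (dose_rate / dose_rate_r0) * (G_line(R0) / G_line(r))

# Example: dose rates tallied by an MC code at a few radii (invented numbers)
radii = np.array([0.5, 1.0, 2.0, 4.0])
mc_dose = np.array([4.40, 1.00, 0.222, 0.0424])   # arbitrary normalised tallies
g = radial_dose_function(radii, mc_dose, mc_dose[radii == 1.0][0])
for r, gi in zip(radii, g):
    print(f"r = {r:4.1f} cm   g_L(r) = {gi:.3f}")
```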

  6. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

    The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances over current methods. First, adjoint equations were developed that use the efficient linear flux approximation to decouple the neutron/nuclide field equations. Second, DPT was extended to the constrained equilibrium cycle, which is important for consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations, and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that, for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation.
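
    The central quantity produced by DPT is a sensitivity coefficient. As a minimal sketch in standard notation (assumed here, not taken from the thesis), the relative sensitivity of a core response R to an input parameter p, and its first-order use, are:

```latex
% Relative sensitivity of a core response R to an input parameter p:
S_p = \frac{p}{R}\,\frac{\mathrm{d}R}{\mathrm{d}p},
% so that, to first order, a relative change in the input propagates as
\frac{\delta R}{R} \approx S_p\,\frac{\delta p}{p}.
```

    The appeal of the adjoint approach is that all such coefficients for one response follow from a single forward and a single adjoint depletion calculation, instead of re-running the forward calculation for every perturbed parameter.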

  7. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and the probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and with the dopant profile from secondary ion mass spectroscopy measurements.

  8. Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.

    Science.gov (United States)

    Yang, Lina; Minnich, Austin J

    2017-03-14

    Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with the experimental thermal conductivity of nanocrystalline Si [Wang et al., Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab initio electron-phonon scattering rates. Our calculations show that low-energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, because of the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab initio based Monte Carlo simulations of complex nanostructured materials.

  9. A strategy for selective detection based on interferent depleting and redox cycling using the plane-recessed microdisk array electrodes

    International Nuclear Information System (INIS)

    Zhu Feng; Yan Jiawei; Lu Miao; Zhou Yongliang; Yang Yang; Mao Bingwei

    2011-01-01

    Highlights: → A novel strategy combining interferent depletion and redox cycling is proposed for the plane-recessed microdisk array electrodes. → The strategy removes the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. → The electrodes enhance the current signal by redox cycling. → The electrodes work regardless of the reversibility of the interfering species. - Abstract: The fabrication, characterization and application of plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. In contrast to the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by the redox cycling between the plane electrode and the microdisk array. Based on this working principle, the plane-recessed microdisk array electrodes remove the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode.

  10. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarize the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence …

  11. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Based on the working principle of a digital calibration instrument for the optical axis parallelism of binocular photoelectric instruments, the ways in which each component of the instrument affects the system precision are analyzed, and a precision analysis model is established. Using the assumed error distributions, the Monte Carlo method is applied to analyze the relationship between the comprehensive error and the variation of the centre coordinate of the circle-target image. The method can further guide the error budget, prioritize control of the factors that contribute most to the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
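
    The following Python sketch illustrates the kind of Monte Carlo error synthesis described above: each trial perturbs the circle-target centre coordinate by every error source, and the spread of the combined result is compared with the root-sum-square prediction. The three error sources and their magnitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical error sources of the calibration instrument, in pixels on the
# target image; magnitudes are invented for illustration.
sigma_mount    = 0.8   # mechanical mounting error
sigma_optics   = 0.5   # collimator/optical distortion
sigma_centroid = 0.3   # centroid extraction error of the circle target

# Each trial perturbs the circle-target centre coordinate by every source.
dx = (rng.normal(0, sigma_mount, N) + rng.normal(0, sigma_optics, N)
      + rng.normal(0, sigma_centroid, N))
dy = (rng.normal(0, sigma_mount, N) + rng.normal(0, sigma_optics, N)
      + rng.normal(0, sigma_centroid, N))

radial = np.hypot(dx, dy)   # comprehensive centre-coordinate error
rss = np.sqrt(sigma_mount**2 + sigma_optics**2 + sigma_centroid**2)
print(f"std of combined axis error : {dx.std():.3f} px (RSS prediction {rss:.3f} px)")
print(f"95th percentile radial err : {np.percentile(radial, 95):.3f} px")
```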

  12. Monte Carlo simulation based reliability evaluation in a multi-bilateral contracts market

    International Nuclear Information System (INIS)

    Goel, L.; Viswanath, P.A.; Wang, P.

    2004-01-01

    This paper presents a time-sequential Monte Carlo simulation technique to evaluate customer load point reliability in a multi-bilateral contracts market. The effects of bilateral transactions, reserve agreements, and the priority commitments of generating companies on customer load point reliability have been investigated. A generating company with bilateral contracts is modelled as an equivalent time-varying multi-state generation (ETMG) unit. A procedure to determine load point reliability based on the ETMG has been developed, and it is applied to a reliability test system to illustrate the technique (see the sketch below). Representing each bilateral contract by an ETMG provides flexibility in determining the reliability at various customer load points. (authors)
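
    A stripped-down sketch of the time-sequential sampling loop (a single two-state unit with exponentially distributed up and down durations; all values invented): a full model would layer the bilateral contracts, reserve agreements and priority commitments on top of such sampled capacity histories.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state unit backing a bilateral contract (values invented)
MTTF, MTTR = 1900.0, 100.0     # mean time to failure / repair, hours
HOURS, YEARS = 8760.0, 2000    # one simulated year; number of MC year samples

def down_hours_in_year():
    """Time-sequential simulation of one year: alternate exponentially
    distributed up and down durations and accumulate the down time, during
    which the contracted capacity is unavailable to the load point."""
    t, up, down = 0.0, True, 0.0
    while t < HOURS:
        dur = min(rng.exponential(MTTF if up else MTTR), HOURS - t)
        if not up:
            down += dur
        t += dur
        up = not up
    return down

unavail = np.mean([down_hours_in_year() for _ in range(YEARS)])
print(f"estimated unavailable hours at the load point = {unavail:.0f} h/yr "
      f"(analytic: {MTTR / (MTTF + MTTR) * HOURS:.0f} h/yr)")
```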

  13. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures

    International Nuclear Information System (INIS)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-01-01

    The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculations compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, CARMEN, is based on full simulation of the beam transport both in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of both the accuracy of the estimates and the required computation times. (Author)

  14. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    Energy Technology Data Exchange (ETDEWEB)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2009-03-21

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  15. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    International Nuclear Information System (INIS)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M; Vaquero, J J; Desco, M

    2009-01-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  16. Monte Carlo based treatment planning for modulated electron beam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Michael C. [Radiation Physics Division, Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)]. E-mail: mclee@reyes.stanford.edu; Deng Jun; Li Jinsheng; Jiang, Steve B.; Ma, C.-M. [Radiation Physics Division, Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2001-08-01

    A Monte Carlo based treatment planning system for modulated electron radiation therapy (MERT) is presented. This new variation of intensity modulated radiation therapy (IMRT) utilizes an electron multileaf collimator (eMLC) to deliver non-uniform intensity maps at several electron energies. In this way, conformal dose distributions are delivered to irregular targets located a few centimetres below the surface while sparing deeper-lying normal anatomy. Planning for MERT begins with Monte Carlo generation of electron beamlets. Electrons are transported with proper in-air scattering and the dose is tallied in the phantom for each beamlet. An optimized beamlet plan may be calculated using inverse-planning methods. Step-and-shoot leaf sequences are generated for the intensity maps and dose distributions recalculated using Monte Carlo simulations. Here, scatter and leakage from the leaves are properly accounted for by transporting electrons through the eMLC geometry. The weights for the segments of the plan are re-optimized with the leaf positions fixed and bremsstrahlung leakage and electron scatter doses included. This optimization gives the final optimized plan. It is shown that a significant portion of the calculation time is spent transporting particles in the leaves. However, this is necessary since optimizing segment weights based on a model in which leaf transport is ignored results in an improperly optimized plan with overdosing of target and critical structures. A method of rapidly calculating the bremsstrahlung contribution is presented and shown to be an efficient solution to this problem. A homogeneous model target and a 2D breast plan are presented. The potential use of this tool in clinical planning is discussed. (author)
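
    The inverse-planning step can be illustrated with a toy non-negative least-squares fit of beamlet weights to a prescription. This is only a generic stand-in for the optimization described above, and the dose-influence matrix below is random data rather than Monte Carlo beamlet doses.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Toy dose-influence matrix: dose per unit weight of each beamlet in each
# voxel.  In MERT planning this would come from Monte Carlo beamlet transport;
# here it is random data purely to illustrate the optimization step.
n_voxels, n_beamlets = 200, 20
D = rng.random((n_voxels, n_beamlets))

prescription = np.full(n_voxels, 2.0)   # Gy, uniform toy target dose

# Inverse planning: minimise ||D w - d|| subject to w >= 0
w, residual = nnls(D, prescription)

dose = D @ w
rms = np.sqrt(np.mean((dose - prescription) ** 2))
print(f"active beamlets: {(w > 1e-9).sum()}/{n_beamlets}, rms dose error: {rms:.3f} Gy")
```

    In the paper's workflow the weights are then converted to step-and-shoot leaf sequences and re-optimized with leaf scatter and bremsstrahlung leakage included, which a toy fit like this omits.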

  17. Development of a micro-depletion model to use WIMS properties in history-based local-parameter calculations in RFSP

    International Nuclear Information System (INIS)

    Shen, W.

    2004-01-01

    A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation: the relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)

  18. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures; CARMEN: Un sistema de planficiacion Monte Carlo basado en programacion lineal a partir de aberturas directas

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-07-01

    The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculations compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, CARMEN, is based on full simulation of the beam transport both in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of both the accuracy of the estimates and the required computation times. (Author)

  19. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications, including increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for estimating the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of the explosive, and the drilling pattern. Results produced by this simulator compared quite favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
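
    A minimal sketch of such a simulator: Monte Carlo sampling of the Kuznetsov inputs, followed by evaluation of the Rosin-Rammler fraction passing a screen size. All input ranges and the uniformity index below are invented for illustration, not data from a real quarry.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000

# Monte Carlo inputs (illustrative ranges only):
A = rng.normal(7.0, 1.0, N)        # rock factor
Q = rng.normal(120.0, 10.0, N)     # explosive mass per hole, kg
K = rng.normal(0.55, 0.05, N)      # powder factor, kg/m^3
E = 100.0                          # relative weight strength (ANFO = 100)

# Kuznetsov equation for the mean fragment size x50 (cm):
x50 = A * K**(-0.8) * Q**(1.0 / 6.0) * (115.0 / E)**(19.0 / 20.0)

n = 1.3                            # Rosin-Rammler uniformity index (assumed fixed)
xc = x50 / 0.693**(1.0 / n)        # characteristic size of the R-R distribution

# Fraction passing a 50 cm screen in each Monte Carlo trial:
passing_50 = 1.0 - np.exp(-(50.0 / xc)**n)
print(f"mean x50 = {x50.mean():.1f} cm, "
      f"P(<50 cm) = {passing_50.mean()*100:.1f}% +/- {passing_50.std()*100:.1f}%")
```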

  20. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method which bi-converts between CAD models and primitive solids. • The method improves on a conversion method between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. The method was integrated in SuperMC and benchmarked with the ITER benchmark model, demonstrating its correctness and efficiency.

  1. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPUs) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPUs for general purposes, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  2. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPUs) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPUs for general purposes, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  3. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by maximum likelihood estimation (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, together with a simulation-based M-estimator for the unknown parameters, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and the DMF are compared through simulations for discrete-time and continuous-time systems respectively, and the results based on the DMF are found to be more accurate than those given by the EKF with respect to mean absolute error.
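
    The flavor of such a filter can be conveyed with a generic bootstrap (sequential Monte Carlo) filter on a toy one-dimensional SDE. This is a stand-in in the same spirit as the authors' DMF, not their algorithm, and all model values below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D system standing in for a PK model (all values illustrative):
#   dX = -k*X dt + s dW,  y_t = X_t + measurement noise
k, s, dt, obs_sd = 0.5, 0.2, 0.1, 0.3
T, N = 50, 1000                      # time steps, particles

# Simulate one "true" trajectory and its noisy observations
x_true = np.empty(T); x_true[0] = 5.0
for t in range(1, T):
    x_true[t] = x_true[t-1] - k*x_true[t-1]*dt + s*np.sqrt(dt)*rng.normal()
y = x_true + obs_sd * rng.normal(size=T)

# Bootstrap Monte Carlo filter
particles = rng.normal(5.0, 1.0, N)
est = np.empty(T)
for t in range(T):
    # propagate particles through the Euler-Maruyama transition density
    particles = particles - k*particles*dt + s*np.sqrt(dt)*rng.normal(size=N)
    # weight by the observation density, then resample
    w = np.exp(-0.5 * ((y[t] - particles) / obs_sd)**2)
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]
    est[t] = particles.mean()

print(f"rms filtering error: {np.sqrt(np.mean((est - x_true)**2)):.3f}")
```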

  4. Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors

    International Nuclear Information System (INIS)

    Zamenhof, R.G.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Clement, S.D.; Harling, O.K.; Yanch, J.C.

    1990-01-01

    Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one- and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated.

  5. Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional

    Directory of Open Access Journals (Sweden)

    Seshaditya A.

    2017-06-01

    We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space, based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT), the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (a concave function), but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

  6. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    Science.gov (United States)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.

  7. Fission yield calculation using toy model based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Jubaidah; Kurniadi, Rizal

    2015-01-01

    The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, the number of which represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon will be split into two fragments, which are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analysed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
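
    A minimal sketch of the sampling step: fragment masses drawn from two intersecting Gaussians, with the complementary fragment carrying the remaining nucleons. The parameters below are illustrative, not the fitted values of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative Gaussian parameters in mass-number units (not fitted values):
mu_L, mu_R = 96.0, 140.0        # means of the light and heavy peaks
sig_L, sig_R = 6.0, 6.0         # peak widths
A_PARENT = 236                   # e.g. 235U + n

def sample_yield_pairs(n):
    """Draw one fragment mass from the two intersecting Gaussians (equal
    weight); the complementary fragment carries the remaining nucleons."""
    pick_left = rng.random(n) < 0.5
    A1 = np.where(pick_left,
                  rng.normal(mu_L, sig_L, n),
                  rng.normal(mu_R, sig_R, n))
    return A1, A_PARENT - A1

a, b = sample_yield_pairs(100_000)
print(f"<A_light> = {np.minimum(a, b).mean():.1f}, "
      f"<A_heavy> = {np.maximum(a, b).mean():.1f}")
```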

  8. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the Track Rotation estimator concept, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically-symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r^2 singularity of the uncollided flux estimator (next-event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints on geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and only one energy group is considered. 3 - Restrictions on the complexity of the problem: one energy group, a homogeneous medium, isotropic scattering

  9. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, the number of which represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon will be split into two fragments, which are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analysed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90

  10. Development of a shield based on Monte-Carlo studies for the COBRA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Nadine [Institut fuer Experimentalphysik, 22761 Hamburg (Germany); Collaboration: COBRA-Collaboration

    2013-07-01

    COBRA is a next-generation experiment searching for neutrinoless double beta decay using CdZnTe semiconductor detectors. The main focus is on 116Cd, with a decay energy of 2813.5 keV, well above the highest naturally occurring gamma lines. The concept for a large-scale set-up consists of an array of CdZnTe detectors with a total mass of 420 kg, enriched in 116Cd up to 90%. With a background rate in the order of 10^-3 counts/keV/kg/year, the experiment would be sensitive to a half-life larger than 10^26 years, corresponding to a Majorana mass term m_ββ smaller than 50 meV. To achieve this background level, an appropriate shield is necessary. The shield is developed based on Monte-Carlo simulations, for which different materials and configurations are tested. In the talk, the current status of the Monte-Carlo survey is presented and discussed.

  11. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    Science.gov (United States)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  12. Regulatory considerations and quality assurance of depleted uranium based radiography cameras

    International Nuclear Information System (INIS)

    Sapkal, Jyotsna A.; Yadav, R.K.B.; Amrota, C.T.; Singh, Pratap; Gopalakrishanan, R.H.; Patil, B.N.; Mane, Nilesh

    2016-01-01

    Radiography cameras with depleted uranium (DU) as the shielding material are used for containment of the iridium (192Ir) source. The DU shielding surrounds the titanium 'S' tube through which the encapsulated 192Ir source travels along with its pigtail. As per the guidelines, the integrity of the DU shielding must be checked periodically by monitoring for transferable alpha contamination inside the 'S' tube. This paper briefly describes the method followed for collecting samples from inside the 'S' tube. The samples were analysed for transferable gross alpha contamination using an alpha scintillation (ALSCIN) counter. The gross alpha contamination in the 'S' tube was found to be less than the USNRC recommended value for discarding a radiography camera. IAEA recommendations related to transferable contamination and AERB guidelines on the quality assurance (QA) requirements for radiography cameras were studied.

  13. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)

    2016-01-07

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  14. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901, USA; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Bhaskaran-Nair, Kiran [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Jarrell, Mark [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA

    2016-01-07

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  15. Integrated layout based Monte-Carlo simulation for design arc optimization

    Science.gov (United States)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules whose sum equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with multiple integrated ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533

  16. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, the evidences of climate change have been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold data, respectively. However, the alternative models are required for nonstatinoary frequency analysis because of analyzing the complex characteristics of nonstationary data based on climate change. This study proposed the nonstationary generalized logistic model including time-dependent parameters. The parameters of proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of models and applicability.

  17. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    International Nuclear Information System (INIS)

    Chow, James C L; Lam, Phil; Jaffray, David A

    2012-01-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions in the EGSnrc code, namely HOWNEAR and RANMAR_GET, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, recording the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change to the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  18. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    Science.gov (United States)

    Chow, James C. L.; Lam, Phil; Jaffray, David A.

    2012-02-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions in the EGSnrc code, namely HOWNEAR and RANMAR_GET, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, recording the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change to the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  19. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Yakimov, Eugene B.

    2016-01-01

    An approach for predicting the output parameters of 63Ni-based betavoltaic batteries is described. It consists of a multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current (EBIC) measurements, a calculation of the current induced in the semiconductor converter by the beta radiation, and SEM measurements of the output parameters using the calculated induced current value. This approach allows the betavoltaic battery parameters to be predicted and the converter design to be optimized for any real semiconductor structure and any thickness and specific activity of the beta radiation source. - Highlights: • A new procedure for predicting betavoltaic battery output parameters is described. • The depth dependence of beta particle energy deposition in Si and SiC is calculated. • In the simulation, electron trajectories are assumed to start isotropically and uniformly.
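
    The current-prediction step amounts to integrating the Monte Carlo generation profile weighted by the EBIC-derived collection probability. A sketch with invented depth profiles (both the exponential generation profile and the collection-probability parametrisation below are hypothetical):

```python
import numpy as np

# Depth grid through the semiconductor converter (illustrative numbers)
x = np.linspace(0.0, 5.0e-4, 500)            # cm, 5 um deep

# G(x): excess-carrier generation rate per unit depth, per cm^2 of junction,
# as would come from the multilayer Monte Carlo stage (made-up exponential)
G = 1.0e14 * np.exp(-x / 1.0e-4)             # pairs / (cm * s * cm^2)

# CP(x): collection probability from EBIC (toy parametrisation: unity inside
# a 0.5 um depletion region, diffusion-limited decay beyond it)
W, L_DIFF = 0.5e-4, 2.0e-4                   # cm
CP = np.where(x <= W, 1.0, np.exp(-(x - W) / L_DIFF))

q = 1.602e-19                                 # C
J_sc = q * np.trapz(G * CP, x)                # short-circuit current density
print(f"predicted short-circuit current density = {J_sc*1e9:.2f} nA/cm^2")
```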

  20. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    International Nuclear Information System (INIS)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-01

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  1. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Reims, N; Sukowski, F; Uhlmann, N

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect-converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort, a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of system manufacturer parameters is needed. The results of both models were in good agreement with the measured parameters of the real system.
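
    The cascade idea can be made concrete with a zero-frequency sketch that propagates the mean and variance of the quantum signal through a chain of gain stages via the Burgess variance theorem; the stage parameters below are invented, not those of the C9312SK.

```python
# Minimal zero-frequency cascade model (a sketch, not the paper's model):
# propagate the mean and variance of the quantum signal through a chain of
# stochastic gain stages using the Burgess variance theorem.
stages = [
    ("x-ray absorption",      0.6,   0.6 * 0.4),    # binomial: var = g(1-g)
    ("light generation",      500.0, 500.0),        # Poisson-like gain
    ("optical coupling",      0.05,  0.05 * 0.95),  # binomial escape/coupling
    ("photodiode conversion", 0.7,   0.7 * 0.3),    # binomial
]

n_mean, n_var = 1000.0, 1000.0        # Poisson x-ray quanta per pixel
for name, g, g_var in stages:
    # Burgess: mean_out = g*mean_in ; var_out = g^2*var_in + mean_in*var(g)
    n_mean, n_var = g * n_mean, g * g * n_var + n_mean * g_var

snr_in2 = 1000.0                       # SNR^2 of the Poisson input = its mean
snr_out2 = n_mean**2 / n_var
print(f"output mean = {n_mean:.0f} e-, DQE(0) = {snr_out2 / snr_in2:.3f}")
```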

  2. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications

    International Nuclear Information System (INIS)

    Lemaréchal, Yannick; Bert, Julien; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris; Falconnet, Claire; Després, Philippe; Valeri, Antoine

    2015-01-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism, which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient-specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10^6 simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications. (paper)

  3. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications

    Science.gov (United States)

    Lemaréchal, Yannick; Bert, Julien; Falconnet, Claire; Després, Philippe; Valeri, Antoine; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris

    2015-07-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10^6 simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications.

  4. Base cation depletion, eutrophication and acidification of species-rich grasslands in response to long-term simulated nitrogen deposition

    Energy Technology Data Exchange (ETDEWEB)

    Horswill, Paul [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)], E-mail: paul.horswill@naturalengland.org.uk; O' Sullivan, Odhran; Phoenix, Gareth K.; Lee, John A.; Leake, Jonathan R. [Department of Animal and Plant Sciences, University of Sheffield, Alfred Denny Building, Western Bank, Sheffield S10 2TN (United Kingdom)

    2008-09-15

    Pollutant nitrogen deposition effects on soil and foliar element concentrations were investigated in acidic and limestone grasslands, located in one of the most nitrogen and acid rain polluted regions of the UK, using plots treated for 8-10 years with 35-140 kg N ha{sup -1} y{sup -1} as NH{sub 4}NO{sub 3}. Historical data suggest both grasslands have acidified over the past 50 years. Nitrogen deposition treatments caused the grassland soils to lose 23-35% of their total available bases (Ca, Mg, K, and Na), and they became acidified by 0.2-0.4 pH units. Aluminium, iron and manganese were mobilised and taken up by limestone grassland forbs and were translocated down the acid grassland soil. Mineral nitrogen availability increased in both grasslands and many species showed foliar N enrichment. This study provides the first definitive evidence that nitrogen deposition depletes base cations from grassland soils. The resulting acidification, metal mobilisation and eutrophication are implicated in driving floristic changes. - Nitrogen deposition causes base cation depletion, acidification and eutrophication of semi-natural grassland soils.

  5. PELE: Protein Energy Landscape Exploration. A Novel Monte Carlo Based Technique.

    Science.gov (United States)

    Borrelli, Kenneth W; Vitalis, Andreas; Alcantara, Raul; Guallar, Victor

    2005-11-01

    Combining protein structure prediction algorithms and Metropolis Monte Carlo techniques, we provide a novel method to explore all-atom energy landscapes. The core of the technique is based on a steered localized perturbation followed by side-chain sampling as well as minimization cycles. The algorithm and its application to ligand diffusion are presented here. Ligand exit pathways are successfully modeled for different systems containing ligands of various sizes: carbon monoxide in myoglobin, camphor in cytochrome P450cam, and palmitic acid in the intestinal fatty-acid-binding protein. These initial applications reveal the potential of this new technique in mapping millisecond-time-scale processes. The computational cost associated with the exploration is significantly less than that of conventional MD simulations.
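
    A minimal sketch of the Metropolis acceptance step at the core of such explorations (a one-dimensional stand-in energy in place of an all-atom force field):

        import numpy as np

        rng = np.random.default_rng(0)
        kT = 0.593  # kcal/mol at ~298 K

        def energy(x):
            # Stand-in for an all-atom energy after side-chain sampling
            # and minimization; a rough double well here.
            return (x**2 - 1.0) ** 2

        x, e = 0.0, energy(0.0)
        for step in range(10_000):
            x_new = x + rng.normal(scale=0.2)    # steered local perturbation
            e_new = energy(x_new)
            # Metropolis criterion: always accept downhill moves, accept
            # uphill moves with probability exp(-dE/kT).
            if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
                x, e = x_new, e_new
        print("final state:", x, e)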

  6. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Y.; Song, J.; Zheng, H.; Sun, G.; Hao, L.; Long, P.; Hu, L.

    2013-01-01

    SuperMC is a CAD (Computer-Aided Design) based Monte Carlo (MC) program for integrated simulation of nuclear systems developed by the FDS Team (China), making use of a hybrid MC-deterministic method and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. The treatment of multi-physics processes and the use of advanced computer technologies such as automatic geometry modeling, intelligent data analysis and visualization, high-performance parallel computing and cloud computing contribute to the efficiency of the code. SuperMC 2.1, the latest version of the code for neutron, photon and coupled neutron-photon transport calculations, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model.

  7. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
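
    A minimal sketch of the proposed idea (a simple X -> M -> Y model with invented parameters; the bmem interface is not reproduced): simulate many datasets, bootstrap the indirect effect a*b in each, and report power as the share of simulations whose bootstrap interval excludes zero.

        import numpy as np

        rng = np.random.default_rng(1)
        n, a, b = 100, 0.3, 0.3          # sample size and path coefficients
        n_sim, n_boot = 200, 500         # kept small for a quick run

        def indirect(x, m, y):
            a_hat = np.polyfit(x, m, 1)[0]               # slope of M ~ X
            X = np.column_stack([x, m, np.ones_like(x)])
            b_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]  # M coef in Y ~ X + M
            return a_hat * b_hat

        hits = 0
        for _ in range(n_sim):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + rng.normal(size=n)
            boots = [indirect(*(arr[idx] for arr in (x, m, y)))
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
            ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
            hits += (ci_lo > 0) or (ci_hi < 0)           # CI excludes zero
        print("estimated power:", hits / n_sim)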

  8. An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    S. Razmyan

    2012-01-01

    Full Text Available Discriminant analysis (DA) is used for the measurement of estimates of a discriminant function by minimizing their group misclassifications to predict group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks' variables on the overall variance of the overlap in DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and different deposit variables are more significant than uncertainties in other banks' variables in decision making.
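
    A minimal sketch of Monte-Carlo estimation of first-order sensitivity indices (a Saltelli-style estimator on a toy function, not the study's DEA-DA model):

        import numpy as np

        rng = np.random.default_rng(2)

        def model(x):
            # Toy stand-in for the overlap measure as a function of
            # uncertain input variables.
            return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A, B = rng.random((n, d)), rng.random((n, d))
        yA, yB = model(A), model(B)
        var_y = np.var(np.concatenate([yA, yB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # swap column i from B into A
            yABi = model(ABi)
            # Saltelli-type estimator of the first-order index S_i
            S_i = np.mean(yB * (yABi - yA)) / var_y
            print(f"S_{i} = {S_i:.3f}")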

  9. A Monte Carlo-based treatment-planning tool for ion beam therapy

    CERN Document Server

    Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairani, A

    2013-01-01

    Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...

  10. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    Science.gov (United States)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with human skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
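
    A minimal sketch of the regression step (invented extinction spectra, not the calibrated values used in the study): regress the absorbance spectrum on the chromophore extinction coefficients and read the concentrations off the regression coefficients.

        import numpy as np

        wavelengths = np.array([500, 520, 540, 560, 580, 600])  # nm
        # Hypothetical extinction coefficients of melanin, oxy-Hb and
        # deoxy-Hb at the six wavelengths (arbitrary units).
        E = np.column_stack([
            [1.2, 1.0, 0.9, 0.8, 0.7, 0.6],   # melanin
            [0.3, 0.5, 0.9, 0.8, 0.5, 0.1],   # oxygenated hemoglobin
            [0.4, 0.6, 0.8, 0.9, 0.7, 0.2],   # deoxygenated hemoglobin
            np.ones(6),                       # offset term
        ])

        def regression_coefficients(reflectance):
            absorbance = -np.log10(reflectance)
            coef, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
            return coef  # mapped to concentrations via pre-computed vectors

        print(regression_coefficients(np.array([0.5, 0.45, 0.4, 0.42, 0.5, 0.6])))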

  11. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator

    Science.gov (United States)

    Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these updates demands high-speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.

  12. Memristive device based on a depletion-type SONOS field effect transistor

    Science.gov (United States)

    Himmel, N.; Ziegler, M.; Mähne, H.; Thiem, S.; Winterfeld, H.; Kohlstedt, H.

    2017-06-01

    State-of-the-art SONOS (silicon-oxide-nitride-oxide-polysilicon) field effect transistors were operated in a memristive switching mode. The circuit design is a variation of the MemFlash concept, and the particular properties of depletion-type SONOS transistors were taken into account. The transistor was externally wired with a resistively shunted pn-diode. Experimental current-voltage curves show analog bipolar switching characteristics within a bias voltage range of ±10 V, exhibiting a pronounced asymmetric hysteresis loop. The experimental data are confirmed by SPICE simulations. The underlying memristive mechanism is purely electronic, which eliminates an initial forming step for the as-fabricated cells. This fact, together with reasonable design flexibility, in particular to adjust the maximum R_ON/R_OFF ratio, makes these cells attractive for neuromorphic applications. The relatively large set and reset voltages around ±10 V might be decreased by using thinner gate oxides. The all-electric operation principle, in combination with an established silicon manufacturing process for SONOS devices at the Semiconductor Foundry X-FAB, promises reliable operation, low parameter spread and high integration density.

  13. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    Science.gov (United States)

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

    This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distributions based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated, with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over the dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).

  14. A Monte Carlo-based model for simulation of digital chest tomo-synthesis

    International Nuclear Information System (INIS)

    Ullman, G.; Dance, D. R.; Sandborg, M.; Carlsson, G. A.; Svalkvist, A.; Baath, M.

    2010-01-01

    The aim of this work was to calculate synthetic digital chest tomo-synthesis projections using a computer simulation model based on the Monte Carlo method. An anthropomorphic chest phantom was scanned in a computed tomography scanner, segmented and included in the computer model to allow for simulation of realistic high-resolution X-ray images. The input parameters to the model were adapted to correspond to the VolumeRAD chest tomo-synthesis system from GE Healthcare. Sixty tomo-synthesis projections were calculated with projection angles ranging from +15 to -15 deg. The images from primary photons were calculated using an analytical model of the anti-scatter grid and a pre-calculated detector response function. The contributions from scattered photons were calculated using an in-house Monte Carlo-based model employing a number of variance reduction techniques such as the collision density estimator. Tomographic section images were reconstructed by transferring the simulated projections into the VolumeRAD system. The reconstruction was performed for three types of images using: (i) noise-free primary projections, (ii) primary projections including contributions from scattered photons and (iii) projections as in (ii) with added correlated noise. The simulated section images were compared with corresponding section images from projections taken with the real, anthropomorphic phantom from which the digital voxel phantom was originally created. The present article describes a work in progress aiming towards developing a model intended for optimisation of chest tomo-synthesis, allowing for simulation of both existing and future chest tomo-synthesis systems. (authors)

  15. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Science.gov (United States)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  16. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    International Nuclear Information System (INIS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M.; Folini, D.; Popov, M. V.; Walder, R.

    2017-01-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  17. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M. [Astrophysics Group, University of Exeter, Exeter EX4 4QL (United Kingdom); Folini, D.; Popov, M. V.; Walder, R., E-mail: i.baraffe@ex.ac.uk [Ecole Normale Supérieure de Lyon, CRAL, UMR CNRS 5574, F-69364 Lyon Cedex 07 (France)

    2017-08-10

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  18. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wei Jie; Loong, C.-K.

    2011-01-01

    Since the launch of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy-selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information with a five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
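
    A minimal sketch of retrieval from a five-step phase-stepping scan (synthetic stepping curve): the phase of the first Fourier harmonic of the intensity curve gives the differential phase signal in a detector pixel.

        import numpy as np

        n_steps = 5
        steps = np.arange(n_steps)          # grating positions over one period
        true_phase = 0.7                    # rad, synthetic ground truth
        # Synthetic stepping curve: mean + modulation * cos(2*pi*k/N + phase)
        intensity = 100 + 30 * np.cos(2 * np.pi * steps / n_steps + true_phase)
        intensity += np.random.default_rng(3).normal(0, 1, n_steps)

        # First Fourier component along the stepping axis carries the phase.
        c1 = np.fft.fft(intensity)[1]
        retrieved_phase = np.angle(c1)
        print(retrieved_phase, true_phase)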

  19. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    International Nuclear Information System (INIS)

    Merheb, C; Petegnief, Y; Talbot, J N

    2007-01-01

    within 9%. For a 410-665 keV energy window, the measured sensitivity for a centred point source was 1.53% and mouse and rat scatter fractions were respectively 12.0% and 18.3%. The scattered photons produced outside the rat and mouse phantoms contributed to 24% and 36% of total simulated scattered coincidences. Simulated and measured single and prompt count rates agreed well for activities up to the electronic saturation at 110 MBq for the mouse and rat phantoms. Volumetric spatial resolution was 17.6 μL at the centre of the FOV with differences less than 6% between experimental and simulated spatial resolution values. The comprehensive evaluation of the Monte Carlo modelling of the Mosaic(TM) system demonstrates that the GATE package is adequately versatile and appropriate to accurately describe the response of an Anger logic based animal PET system

  20. Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors

    International Nuclear Information System (INIS)

    Zamenhof, R.G.; Clement, S.D.; Harling, O.K.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Yanch, J.C.

    1990-01-01

    Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one- and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated. The experimental epithermal-neutron beam has a maximum usable circular diameter of 20 cm, and with 30 ppm of B-10 in tumor and 3 ppm of B-10 in blood, it produces a beam-axis advantage depth of 7.4 cm, a beam-axis advantage ratio of 1.83, a global advantage ratio of 1.70, and an advantage depth RBE-dose rate to tumor of 20.6 RBE-cGy/min (cJ/kg-min). These characteristics make this beam well suited for clinical applications, enabling an RBE-dose of 2,000 RBE-cGy (cJ/kg) to be delivered to tumor at brain midline in six fractions with a treatment time of approximately 16 minutes per fraction.

  1. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
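
    A minimal sketch of Girsanov-based importance sampling for a small failure probability (a linear first-order system with a constant control drift u, both assumptions, in place of the paper's nonlinear oscillators and optimized controls): simulate under a shifted drift and weight each sample by the Radon-Nikodym derivative.

        import numpy as np

        rng = np.random.default_rng(4)
        k, sigma = 1.0, 0.5          # restoring coefficient and noise intensity
        T, n_steps, n_paths = 1.0, 200, 20_000
        dt = T / n_steps
        b = 1.5                      # failure threshold on the response
        u = 1.6                      # constant control drift (tuning assumption)

        x = np.zeros(n_paths)
        log_w = np.zeros(n_paths)
        for _ in range(n_steps):
            dW = np.sqrt(dt) * rng.normal(size=n_paths)
            x += (-k * x + u) * dt + sigma * dW   # dynamics under the new measure
            # Girsanov weight: dP/dQ = exp(-int (u/sigma) dW_Q - 0.5 int (u/sigma)^2 dt)
            log_w += -(u / sigma) * dW - 0.5 * (u / sigma) ** 2 * dt

        p_fail = np.mean(np.exp(log_w) * (x > b))  # failure checked at t = T only
        print("importance-sampling estimate:", p_fail)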

  2. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Virginia Polytechnic Institute and State University; Savara, Aditya

    2017-01-01

    Lattice-based kinetic Monte Carlo (KMC) simulations offer a powerful technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computation can be overwhelmed by very short time steps, with the simulation spending an inordinate number of KMC steps / amount of CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed ranks based on event frequencies has been designed and implemented, with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time steps may be distorted during the transient period.
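
    A minimal sketch of a KMC step with a crude rank-based throttle (the decade ranking and damping factors are illustrative, not the SQERTSS prescription):

        import numpy as np

        rng = np.random.default_rng(5)
        rates = np.array([1e8, 5e7, 1e3, 2.0, 0.5])   # invented rate constants

        # Crude throttle: group processes into decade "speed ranks" relative
        # to the slowest process and damp each rank by one decade per rank.
        ranks = np.floor(np.log10(rates / rates.min())).astype(int)
        eff_rates = rates * 10.0 ** (-ranks)

        # Standard KMC selection and time step, applied to the throttled rates.
        total = eff_rates.sum()
        event = np.searchsorted(np.cumsum(eff_rates), rng.random() * total)
        dt = -np.log(rng.random()) / total
        print("process:", event, "throttled rates:", eff_rates, "dt:", dt)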

  3. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  4. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Full Text Available Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by the low photon numbers. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.

  5. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of complicated geometry. Manual modeling using the text interface of an MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of a computer-aided design (CAD) system. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modification, but implies latent data inconsistency due to the difference between the geometry modeling systems. In the second approach, an MC code uses the CAD data either for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling, with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.

  6. Limits on the efficiency of event-based algorithms for Monte Carlo neutron transport

    Directory of Open Access Journals (Sweden)

    Paul K. Romano

    2017-09-01

    Full Text Available The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, the vector speedup is also limited by differences in the execution time for events being carried out in a single event-iteration.
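
    A minimal sketch of the event-based organization (invented event kinds with physics-free bodies): instead of looping over histories, all live particles due for the same event type are processed as one vectorized batch.

        import numpy as np

        rng = np.random.default_rng(6)
        N = 1024                               # particle bank size
        alive = np.ones(N, dtype=bool)
        energy = rng.uniform(0.1, 2.0, N)      # MeV, illustrative

        while alive.any():
            # Pick the next event kind for every live particle at once
            # (0 = scatter, 1 = absorption), instead of per-history loops.
            event = rng.integers(0, 2, N)
            scatter = alive & (event == 0)
            absorb = alive & (event == 1)

            # Each event bank is processed as one vectorized operation.
            energy[scatter] *= rng.uniform(0.3, 0.9, scatter.sum())
            alive[absorb] = False
            alive &= energy > 0.01             # kill particles below cutoff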

  7. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
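
    A minimal sketch of the sampling idea on a toy fault tree (invented structure and probabilities): direct Monte Carlo evaluation of the top event remains exact where the rare-event, sum-of-cut-sets approximation overestimates for high failure probabilities.

        import numpy as np

        rng = np.random.default_rng(7)
        p = {"A": 0.4, "B": 0.5, "C": 0.3}   # high seismic failure probabilities
        n = 1_000_000

        s = {k: rng.random(n) < v for k, v in p.items()}
        # Toy tree: TOP = (A AND B) OR (A AND C)
        top = (s["A"] & s["B"]) | (s["A"] & s["C"])
        mc = top.mean()

        # Rare-event approximation: sum of minimal-cut-set probabilities,
        # which overestimates when the probabilities are large.
        ree = p["A"] * p["B"] + p["A"] * p["C"]
        print(f"Monte Carlo: {mc:.4f}, rare-event approx: {ree:.4f}")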

  8. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error, this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. To increase the count statistics in the system-matrix computation and to reduce the storage space of the system elements, only a subset of matrix elements was calculated and the rest were estimated by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5 to 3 times improvement in resolution performance when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system-matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared to line-integral-based system-matrix reconstruction.
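
    A minimal sketch of the MLEM update into which such a system matrix enters (a random sparse stand-in matrix, not a GATE-derived one):

        import numpy as np

        rng = np.random.default_rng(8)
        n_lor, n_vox = 500, 100
        # Sparse random stand-in for the Monte-Carlo-computed system matrix.
        A = rng.random((n_lor, n_vox)) * (rng.random((n_lor, n_vox)) < 0.05)
        x_true = rng.random(n_vox)
        y = rng.poisson(50.0 * (A @ x_true))      # measured coincidences per LOR

        x = np.ones(n_vox)                        # uniform initial image
        sens = A.sum(axis=0)                      # sensitivity image, A^T 1
        for _ in range(50):
            proj = A @ x
            ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
            x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)   # MLEM update
        print(x[:5] / 50.0, x_true[:5])           # recovers ~x_true (scaled)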

  9. A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications

    International Nuclear Information System (INIS)

    Bush, K; Popescu, I A; Zavgorodni, S

    2008-01-01

    As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described. (note)
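
    A minimal sketch of the two-pass idea as a counting sort by beamlet index (the phase-space record layout is reduced to one number per particle): pass one counts particles per beamlet to fix output offsets, pass two writes each particle straight into its slot.

        import numpy as np

        rng = np.random.default_rng(9)
        n_particles, n_beamlets = 200_000, 80
        beamlet = rng.integers(0, n_beamlets, n_particles)  # label per particle
        records = rng.random(n_particles)   # stand-in for phase-space records

        # Pass 1: count particles per beamlet; turn counts into output offsets.
        counts = np.bincount(beamlet, minlength=n_beamlets)
        offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))

        # Pass 2: write each record directly into its beamlet's next free slot.
        out = np.empty_like(records)
        cursor = offsets.copy()
        for i in range(n_particles):
            b = beamlet[i]
            out[cursor[b]] = records[i]
            cursor[b] += 1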

  10. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James C L; Lam, Phil; Jaffray, David A, E-mail: james.chow@rmp.uhn.on.ca [Department of Radiation Oncology, University of Toronto and Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada)

    2012-02-09

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, recording the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  11. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Directory of Open Access Journals (Sweden)

    Hamed Kargaran

    2016-04-01

    Full Text Available The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, have been employed. Implementation of the developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other available PRNGs such as those of MATLAB and FORTRAN and the Park-Miller algorithm, through employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
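
    A minimal sketch of the Xorshift core named in the abstract (the 32-bit variant with Marsaglia's (13, 17, 5) shift triplet; the middle-square/chaotic-map seeding used by the authors is not reproduced):

        def xorshift32(state):
            """One step of Marsaglia's 32-bit xorshift generator."""
            state ^= (state << 13) & 0xFFFFFFFF
            state ^= state >> 17
            state ^= (state << 5) & 0xFFFFFFFF
            return state

        # Per-thread independent sequences would each carry their own state;
        # a single stream is shown here for illustration.
        s = 2463534242
        for _ in range(5):
            s = xorshift32(s)
            print(s / 2**32)   # uniform in [0, 1)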

  12. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Energy Technology Data Exchange (ETDEWEB)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad [Department of nuclear engineering, Shahid Behesti University, Tehran, 1983969411 (Iran, Islamic Republic of)

    2016-04-15

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, have been employed. Implementation of the developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other available PRNGs such as those of MATLAB and FORTRAN and the Park-Miller algorithm, through employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.

  13. New Monte Carlo-based method to evaluate fission fraction uncertainties for the reactor antineutrino experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ma, X.B., E-mail: maxb@ncepu.edu.cn; Qiu, R.M.; Chen, Y.X.

    2017-02-15

    Uncertainties regarding fission fractions are essential in understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning. For example, between {sup 235}U and {sup 239}Pu, the covariance coefficient changes from 0.15 to −0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fraction and covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup. The new value is about 8.3% smaller. - Highlights: • The covariance coefficients between isotopes may change sign with reactor burnup because of two opposite effects. • The relation between fission fraction uncertainty and atomic density is studied for the first time. • A new MC-based method of evaluating the covariance coefficients between isotopes is proposed.
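
    A minimal sketch of the sampling idea (invented rates and uncertainties, not the authors' reactor model): perturb isotopic fission rates independently and renormalize to fission fractions; the normalization is a balance effect that can drive the correlation between the dominant isotopes negative.

        import numpy as np

        rng = np.random.default_rng(10)
        # Nominal fission rates for U235, U238, Pu239, Pu241 (arbitrary
        # units) and assumed relative uncertainties.
        mean = np.array([0.55, 0.08, 0.30, 0.07])
        rel_unc = np.array([0.03, 0.05, 0.04, 0.06])

        n = 200_000
        samples = rng.normal(mean, mean * rel_unc, size=(n, 4))
        fractions = samples / samples.sum(axis=1, keepdims=True)  # balance effect

        corr = np.corrcoef(fractions.T)
        print("corr(U235, Pu239) =", corr[0, 2])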

  14. TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-15

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations. Yet, the long computational time from repeated dose calculations for a number of beamlets prevents this application. It is our objective to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited dose is stored separately for each beamlet based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10{sup 5} particles per beamlet in the first round and 10{sup 8} particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.

  15. Technical Note: Defining cyclotron-based clinical scanning proton machines in a FLUKA Monte Carlo system.

    Science.gov (United States)

    Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank

    2018-02-01

    Cyclotron-based pencil beam scanning (PBS) proton machines represent nowadays the majority and most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than that of passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum energy to the desired energy, resulting in a unique spot size, divergence, and energy spread depending on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated using experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations of how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model better describes the experimental beam.

  16. Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.

    2006-01-01

    Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions.

  17. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    International Nuclear Information System (INIS)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.

    2014-08-01

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which stopping at a frontier is considered only depending on the material change across the photon travel line. Dose calculations using these methods are compared for validation against the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
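
    A minimal sketch of Woodcock tracking on a 1D voxel row (invented attenuation coefficients): flight distances are sampled with the majorant cross section, and each tentative collision is accepted with probability mu(x)/mu_max, otherwise the photon keeps flying without stopping at voxel frontiers.

        import numpy as np

        rng = np.random.default_rng(11)
        # Illustrative total attenuation coefficients (1/cm) per voxel.
        mu = np.array([0.2, 0.2, 1.5, 1.5, 0.05, 0.05])  # water/bone/lung-like
        dx, mu_max = 1.0, mu.max()

        def distance_to_real_collision(x=0.0):
            while True:
                x += -np.log(rng.random()) / mu_max   # flight with majorant
                i = int(x / dx)
                if i >= len(mu):
                    return None                       # escaped the slab
                if rng.random() < mu[i] / mu_max:     # accept: real collision
                    return x
                # else: virtual collision, continue without stopping

        print([distance_to_real_collision() for _ in range(5)])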

  18. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Mc photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, als present in Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed by a homogeneous water-based media, the second is composed by bone, the third is composed by lung and the fourth is composed by a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. There are two distinct approaches that were used for transport simulation. The first of them forces the photon to stop at every voxel frontier, the second one is the Woodcock method, where the photon stop in the frontier will be considered depending on the material changing across the photon travel line. Dose calculations using these methods are compared for validation with Penelope and MCNP5 codes. Speed-up factors are compared using a NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)

  19. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Yeh, C.Y.; Lee, C.C.; Chao, T.C.; Lin, M.H.; Lai, P.A.; Liu, F.H.; Tung, C.J.

    2014-01-01

    This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed at the tumor-air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs.

  20. Study on the propagation properties of laser in aerosol based on Monte Carlo simulation

    Science.gov (United States)

    Leng, Kun; Wu, Wenyuan; Zhang, Xi; Gong, Yanchun; Yang, Yuntao

    2018-02-01

    When a laser propagates in the atmosphere, aerosol scattering and absorption continuously attenuate its energy, reducing its effectiveness. Based on the Monte Carlo method, the dependence of the spatial photon energy distribution of a 10.6 μm laser in marine, sand-type, water-soluble and soot aerosols on propagation distance, visibility and divergence angle was studied. The results show that, for the 10.6 μm laser, attenuation of photons arriving at the receiving plane is greatest in the sand-type aerosol and smallest in the water-soluble aerosol; as the propagation distance increases, the number of photons arriving at the receiving plane decreases; as the visibility increases, the number of photons arriving at the receiving plane increases rapidly and then stabilizes; in the above cases, the photon energy distribution does not deviate from the Gaussian distribution; as the divergence angle increases, the number of photons arriving at the receiving plane is almost unchanged, but the photon energy distribution gradually deviates from the Gaussian distribution.
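
    A stripped-down version of such a photon-counting simulation can be sketched as follows. This toy model treats scattering as purely forward and uses invented extinction coefficients and albedo, so it only illustrates the visibility trend, not the paper's model:

        import math, random

        def photons_received(n, distance, mu_ext, albedo, seed=1):
            # Photons fly toward a receiving plane at `distance`; free paths are
            # sampled from Exp(mu_ext); at each interaction the photon is absorbed
            # with probability (1 - albedo), otherwise it scatters forward and goes on.
            rng = random.Random(seed)
            received = 0
            for _ in range(n):
                x, alive = 0.0, True
                while alive and x < distance:
                    x += -math.log(rng.random()) / mu_ext
                    if x < distance and rng.random() > albedo:
                        alive = False          # absorbed before reaching the plane
                received += alive
            return received

        # Higher visibility -> lower extinction -> more photons reach the plane.
        for mu_ext in (0.5, 0.2, 0.05):        # per km, illustrative values only
            print(mu_ext, photons_received(10_000, 1.0, mu_ext, albedo=0.9))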

  1. IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.

    Science.gov (United States)

    Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio

    2016-03-01

    The authors present a new stochastic methodological approach to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to the determination of the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic approach is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean cost, the interval within which it varies with a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer, with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value narrow enough to make the data statistically robust and therefore usable as a reference for benchmarks with other countries. From a methodological point of view the approach is rigorous: it uses both Activity Based Costing, to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, to construct the tolerance intervals on the final result.
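
    The combination of Activity Based Costing with Monte Carlo sampling can be illustrated with a short sketch. The activities, duration distributions and cost rates below are entirely invented, not the paper's data:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000                            # Monte Carlo trials

        # Activity Based Costing: cycle cost = sum over activities of duration * rate.
        # Durations (minutes) follow assumed frequency distributions; all names,
        # distributions and rates here are invented for illustration.
        durations = {
            "consultation":   rng.triangular(20, 30, 60, N),
            "lab_procedure":  rng.normal(90, 15, N),
            "embryo_culture": rng.uniform(240, 360, N),
        }
        rates = {"consultation": 2.0, "lab_procedure": 3.5, "embryo_culture": 1.2}

        cost = sum(durations[k] * rates[k] for k in durations)
        lo, hi = np.percentile(cost, [2.5, 97.5])
        print(f"mean cost = {cost.mean():.0f}, 95% interval = [{lo:.0f}, {hi:.0f}]")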

  2. Monte Carlo simulation of ordering transformations in Ni-Mo-based alloys

    International Nuclear Information System (INIS)

    Kulkarni, U.D.

    2004-01-01

    The quenched-in state of short range order (SRO) in binary Ni-Mo alloys is characterized by intensity maxima at {1 (1/2) 0} and equivalent positions in the reciprocal space. Ternary addition of a small amount of Al to the binary alloy, on the other hand, leads to a state of SRO that gives rise to intensity maxima at {1 0 0} and equivalent, in addition to {1 (1/2) 0} and equivalent, positions in the selected area electron diffraction patterns. Different geometric patterns of streaks of diffuse intensity, joining the SRO maxima with the superlattice positions of the emerging long range ordered (LRO) structures or in some cases between the superlattice positions of different LRO structures, are observed during the SRO-to-LRO transitions in the Ni-Mo-based and other {1 (1/2) 0} alloys. Monte Carlo simulations have been carried out here in order to shed light on the atomic structures of the SRO and the SRO-to-LRO transition states in these alloys.
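
    A toy version of such a simulation is sketched below: Kawasaki (atom-swap) Metropolis dynamics on a 2D lattice with an invented nearest-neighbour ordering Hamiltonian, far simpler than the interactions actually needed for Ni-Mo-Al:

        import numpy as np

        rng = np.random.default_rng(0)
        L, kT, V = 32, 0.5, 1.0     # lattice size, temperature, pair energy (illustrative)
        s = rng.choice([0, 1], size=(L, L), p=[0.75, 0.25])   # 0 = Ni, 1 = Mo

        def local_E(i, j):
            # Toy ordering Hamiltonian: each like nearest-neighbour pair costs +V,
            # so unlike (ordered) pairs are favoured at low temperature.
            nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            return V * nn if s[i, j] == 1 else V * (4 - nn)

        for _ in range(200_000):                        # Kawasaki (atom-swap) dynamics
            i, j = rng.integers(L, size=2)
            di, dj = ((0, 1), (1, 0))[rng.integers(2)]
            k, m = (i + di) % L, (j + dj) % L
            if s[i, j] == s[k, m]:
                continue                                # swapping like atoms changes nothing
            E0 = local_E(i, j) + local_E(k, m)
            s[i, j], s[k, m] = s[k, m], s[i, j]         # trial swap
            dE = local_E(i, j) + local_E(k, m) - E0
            if dE > 0 and rng.random() >= np.exp(-dE / kT):
                s[i, j], s[k, m] = s[k, m], s[i, j]     # reject: undo the swap
        print("unlike vertical-pair fraction:", np.mean(s != np.roll(s, 1, axis=0)))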

  3. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    International Nuclear Information System (INIS)

    Bottigli, U.; Brunetti, A.; Golosio, B.; Oliva, P.; Stumbo, S.; Vincze, L.; Randaccio, P.; Bleuet, P.; Simionovici, A.; Somogyi, A.

    2004-01-01

    A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density are given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case of most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.

  4. Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method

    Directory of Open Access Journals (Sweden)

    Muneki Yasuda

    2018-04-01

    Full Text Available The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and a Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author’s previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.

  5. Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator

    International Nuclear Information System (INIS)

    Shypailo, R.J.; Ellis, K.J.

    2009-01-01

    Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)

  6. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving product quality raises the requirements for the accuracy of workpiece dimensions and surface form, which in turn raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective tools for such measurements. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points severalfold while maintaining the required measurement accuracy.
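
    The idea of estimating measurement-error statistics by resampling control points can be sketched as follows; the surface model, noise level and point counts are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(7)

        def flatness_trials(n_points, n_trials=10_000, form_err=5.0, noise=1.0):
            # Simulated flatness check (micrometres): the surface has a sinusoidal
            # form deviation of peak-to-valley `form_err`; each trial probes
            # n_points random spots with Gaussian probe noise and reports
            # max - min as the measured flatness error.
            x = rng.uniform(0.0, 1.0, size=(n_trials, n_points))
            z = 0.5 * form_err * np.sin(2 * np.pi * x) + rng.normal(0.0, noise, x.shape)
            return z.max(axis=1) - z.min(axis=1)

        # Fewer control points -> smaller mean (form error undersampled), larger spread.
        for n in (5, 10, 50, 200):
            f = flatness_trials(n)
            print(f"{n:4d} points: mean = {f.mean():5.2f} um, std = {f.std():4.2f} um")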

  7. Ab initio based kinetic Monte-Carlo simulations of phase transformations in FeCrAl

    International Nuclear Information System (INIS)

    Olsson, Paer

    2015-01-01

    Document available in abstract form only, full text follows: Corrosion and erosion in lead cooled reactors can be a serious issue due to the high operating temperature and the necessary flow rates. FeCrAl alloys are under consideration as cladding or as coating for stainless steel cladding tubes for lead cooled reactor concepts. The alumina scale that is formed, as Al segregates to the surface and Fe and Cr rich oxides break off, offers a highly protective layer against lead corrosion in a large range of temperatures. However, there are concerns about the phase stability of the alloy under irradiation conditions and of possible induced alpha-prime precipitation. Here a theoretical model of the ternary FeCrAl alloy is presented, based on density functional theory predictions and linked to a kinetic Monte-Carlo simulation framework. The effect of Al on the FeCr miscibility properties is discussed and the coupling of irradiation-induced defects with the solutes is treated. Simulations of the microstructure evolution are tentatively compared to available experiments. (authors)

  8. Poster - 20: Detector selection for commissioning of a Monte Carlo based electron dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Anusionwu, Princess [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Alpuche Aviles, Jorge E. [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Pistorius, Stephen [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Department of Radiology, University of Manitoba, Winnipeg (Canada)

    2016-08-15

    Objective: Commissioning of a Monte Carlo based electron dose calculation algorithm requires percentage depth doses (PDDs) and beam profiles which can be measured with multiple detectors. Electron dosimetry is commonly performed with cylindrical chambers but parallel plate chambers and diodes can also be used. The purpose of this study was to determine the most appropriate detector to perform the commissioning measurements. Methods: PDDs and beam profiles were measured for beams with energies ranging from 6 MeV to 15 MeV and field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Detectors used included diodes, cylindrical and parallel plate ionization chambers. Beam profiles were measured in water (100 cm source to surface distance) and in air (95 cm source to detector distance). Results: PDDs for the cylindrical chambers were shallower (1.3 mm averaged over all energies and field sizes) than those measured with the parallel plate chambers and diodes. Surface doses measured with the diode and cylindrical chamber were on average larger by 1.6% and 3%, respectively, than those of the parallel plate chamber. Profiles measured with a diode resulted in penumbra values smaller than those measured with the cylindrical chamber by 2 mm. Conclusion: The diode was selected as the most appropriate detector since PDDs agreed with those measured with parallel plate chambers (typically recommended for low energies) and it yields sharper profiles. Unlike ion chambers, the diode needs no corrections to measure PDDs, making it more convenient to use.

  9. dsmcFoam+: An OpenFOAM based direct simulation Monte Carlo solver

    Science.gov (United States)

    White, C.; Borg, M. K.; Scanlon, T. J.; Longshaw, S. M.; John, B.; Emerson, D. R.; Reese, J. M.

    2018-03-01

    dsmcFoam+ is a direct simulation Monte Carlo (DSMC) solver for rarefied gas dynamics, implemented within the OpenFOAM software framework, and parallelised with MPI. It is open-source and released under the GNU General Public License in a publicly available software repository that includes detailed documentation and tutorial DSMC gas flow cases. This release of the code includes many features not found in standard dsmcFoam, such as molecular vibrational and electronic energy modes, chemical reactions, and subsonic pressure boundary conditions. Since dsmcFoam+ is designed entirely within OpenFOAM's C++ object-oriented framework, it benefits from a number of key features: the code emphasises extensibility and flexibility so it is aimed first and foremost as a research tool for DSMC, allowing new models and test cases to be developed and tested rapidly. All DSMC cases are as straightforward as setting up any standard OpenFOAM case, as dsmcFoam+ relies upon the standard OpenFOAM dictionary based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of a DSMC simulation is not typical of most OpenFOAM applications. We show that dsmcFoam+ compares well to other well-known DSMC codes and to analytical solutions in terms of benchmark results.

  10. Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.

    Science.gov (United States)

    Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J

    2000-01-01

    This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using the C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero very significantly. For females, expected gains were 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. Insofar as the cost-effectiveness ratio is considered acceptable, screening males seemed justified. However, our assumptions about growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.

  11. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bottigli, U. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Sezione INFN di Cagliari (Italy); Brunetti, A. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Golosio, B. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy) and Sezione INFN di Cagliari (Italy)]. E-mail: golosio@uniss.it; Oliva, P. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Stumbo, S. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Vincze, L. [Department of Chemistry, University of Antwerp (Belgium); Randaccio, P. [Dipartimento di Fisica dell' Universita di Cagliari and Sezione INFN di Cagliari (Italy); Bleuet, P. [European Synchrotron Radiation Facility, Grenoble (France); Simionovici, A. [European Synchrotron Radiation Facility, Grenoble (France); Somogyi, A. [European Synchrotron Radiation Facility, Grenoble (France)

    2004-10-08

    A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density are given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case of most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.

  12. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
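
    A simplified global gamma computation of the 2%/2 mm kind used above can be sketched as follows (exhaustive shift search on a regular grid, no sub-pixel interpolation; the test doses are synthetic):

        import numpy as np

        def gamma_pass_rate(ref, evl, spacing, dd=0.02, dta=2.0):
            # Global gamma: for every voxel take the minimum over nearby shifts of
            # (dose diff / (dd*max))^2 + (distance / dta)^2; pass where sqrt <= 1.
            norm = dd * ref.max()
            win = int(np.ceil(2 * dta / spacing))          # search radius in voxels
            gamma2 = np.full(ref.shape, np.inf)
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    dist2 = ((dy * spacing) ** 2 + (dx * spacing) ** 2) / dta ** 2
                    shifted = np.roll(np.roll(evl, dy, axis=0), dx, axis=1)
                    gamma2 = np.minimum(gamma2, (shifted - ref) ** 2 / norm ** 2 + dist2)
            return np.mean(np.sqrt(gamma2) <= 1.0)

        rng = np.random.default_rng(3)
        ref = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(axis=0) / 400.0)
        evl = ref * (1 + rng.normal(0.0, 0.01, ref.shape))     # 1% noisy "measurement"
        print(f"pass rate: {gamma_pass_rate(ref, evl, spacing=1.0):.3f}")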

  13. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K.; Siegel, Andrew R.

    2017-04-16

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
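
    The flavour of such an efficiency model can be reproduced with a toy simulation: particles with geometrically distributed event counts are packed into fixed-width vectors each iteration, and the bank is not refilled. The numbers below are illustrative, not the paper's model:

        import math, random

        def vector_efficiency(bank_size, width, mean_events=20.0, trials=20):
            # Each particle undergoes a geometric number of events (mean_events on
            # average); per iteration the still-active particles are packed into
            # SIMD vectors of `width` lanes. Efficiency = useful lanes / issued lanes.
            p_death = 1.0 / mean_events
            rng = random.Random(1)
            total = 0.0
            for _ in range(trials):
                active, useful, lanes = bank_size, 0, 0
                while active:
                    useful += active
                    lanes += math.ceil(active / width) * width
                    active = sum(rng.random() > p_death for _ in range(active))
                total += useful / lanes
            return total / trials

        for bank in (512, 2_560, 10_240):
            print(f"bank = {bank:6d}, width = 512: efficiency ~ {vector_efficiency(bank, 512):.2f}")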

  14. CT-Based Brachytherapy Treatment Planning using Monte Carlo Simulation Aided by an Interface Software

    Directory of Open Access Journals (Sweden)

    Vahid Moslemi

    2011-03-01

    Full Text Available Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo based dose calculation for brachytherapy treatment planning, using interface software to create the geometry file required by the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient’s anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, preparation time for the simulations of the applicator and surrounding structures was approximately 3 minutes, compared with approximately 1 hour for conventional MCNP geometry entry. The discrepancy between the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a less than 1% difference.

  15. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).

    Science.gov (United States)

    Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M

    2007-02-21

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq(-1) s(-1)) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.

  16. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
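
    The Monte Carlo estimation of such a covariance matrix can be sketched briefly. The uncertainty model below (independent random errors plus one shared systematic error) is an invented example, not the paper's procedure:

        import numpy as np

        rng = np.random.default_rng(0)
        n_mc, n_q = 50_000, 3        # Monte Carlo samples; number of compared quantities

        # Invented uncertainty model: each quantity has an independent random error,
        # plus one shared systematic error that correlates all of them.
        sigma_rand = np.array([0.5, 0.8, 0.3])
        sigma_syst = 0.4

        e = rng.normal(0.0, sigma_rand, size=(n_mc, n_q)) \
            + sigma_syst * rng.normal(size=(n_mc, 1))      # common systematic draw

        cov = np.cov(e, rowvar=False)    # covariance matrix of the comparison error
        print(np.round(cov, 3))          # off-diagonals -> sigma_syst**2 = 0.16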

  17. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  18. Benchmarking time-dependent neutron problems with Monte Carlo codes

    International Nuclear Information System (INIS)

    Couet, B.; Loomis, W.A.

    1990-01-01

    Many nuclear logging tools measure the time dependence of a neutron flux in a geological formation to infer important properties of the formation. The complex geometry of the tool and the borehole within the formation does not permit an exact deterministic modelling of the neutron flux behaviour. While this exact simulation is possible with Monte Carlo methods, the computation time does not permit quick turnaround of results useful for design and diagnostic purposes. Nonetheless, a simple model based on the diffusion-decay equation for the flux of neutrons of a single energy group can be useful in this situation. A combination approach, in which a Monte Carlo calculation benchmarks a deterministic model in terms of the diffusion constants of the neutrons propagating in the media and their flux depletion rates, thus offers quick calculation with assurance of accuracy. We exemplify this approach with the Monte Carlo benchmarking of a logging tool problem, showing standoff and bedding response. (author)

  19. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    Science.gov (United States)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization requires beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10(6) particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10(5) particles per beamlet. Correspondingly, the computation

  20. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    Science.gov (United States)

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼ 1000 cm(3) and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and

  1. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jiasen, E-mail: ma.jiasen@mayo.edu; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G. [Department of Radiation Oncology, Division of Medical Physics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States)

    2014-12-15

    Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm{sup 3} and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45 000 dollars.

  2. The effect of acute tyrosine phenylalanine depletion on emotion-based decision-making in healthy adults.

    Science.gov (United States)

    Vrshek-Schallhorn, Suzanne; Wahlstrom, Dustin; White, Tonya; Luciana, Monica

    2013-04-01

    Despite interest in dopamine's role in emotion-based decision-making, few reports of the effects of dopamine manipulations are available in this area in humans. This study investigates dopamine's role in emotion-based decision-making through a common measure of this construct, the Iowa Gambling Task (IGT), using Acute Tyrosine Phenylalanine Depletion (ATPD). In a between-subjects design, 40 healthy adults were randomized to receive either an ATPD beverage or a balanced amino acid beverage (a control) prior to completing the IGT, as well as pre- and post-manipulation blood draws for the neurohormone prolactin. Together with conventional IGT performance metrics, choice selections and response latencies were examined separately for good and bad choices before and after several key punishment events. Changes in response latencies were also used to predict total task performance. Prolactin levels increased significantly in the ATPD group but not in the control group. However, no significant group differences in performance metrics were detected, nor were there sex differences in outcome measures. By contrast, the balanced group's bad deck latencies speeded up across the task, while the ATPD group's latencies remained adaptively hesitant. Additionally, modulation of latencies to the bad decks predicted total score for the ATPD group only. One interpretation is that ATPD subtly attenuated reward salience and altered the approach by which individuals achieved successful performance, without resulting in frank group differences in task performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Monte-Carlo Simulation for PDC-Based Optical CDMA System

    Directory of Open Access Journals (Sweden)

    FAHIM AZIZ UMRANI

    2010-10-01

    Full Text Available This paper presents the Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems, and analyses their performance in terms of the BER (Bit Error Rate). The spreading sequence chosen for CDMA is Perfect Difference Codes. Furthermore, this paper derives the expressions of noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling required for Monte-Carlo simulation. The simulated results conform to the theory and show that receiver gain mismatch and splitter loss at the transceiver degrade the system performance.
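
    The calibration logic of a Monte-Carlo BER estimate can be shown with a generic sketch: on-off signalling with Gaussian noise and threshold detection, checked against the corresponding Q-function. This is not the PDC-specific receiver; the SNR definition is an assumption:

        import numpy as np
        from math import erfc, sqrt

        rng = np.random.default_rng(5)
        n_bits, snr_db, a = 1_000_000, 12.0, 1.0

        # Unipolar on-off signalling with additive Gaussian noise and a threshold
        # detector at a/2.
        sigma = a / (2 * 10 ** (snr_db / 20))
        bits = rng.integers(0, 2, n_bits)
        rx = bits * a + rng.normal(0.0, sigma, n_bits)
        ber_sim = np.mean((rx > a / 2) != bits.astype(bool))

        ber_theory = 0.5 * erfc(a / (2 * sigma * sqrt(2)))   # Q(a / (2 sigma))
        print(f"simulated BER = {ber_sim:.2e}, theoretical = {ber_theory:.2e}")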

  4. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement is performed using the water displacement method based on Archimedes' principle, which is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method of measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was used to approximate the volume of the object. The simulation result shows that the algorithm produced high accuracy and precision for volume measurement.
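
    The Monte Carlo volume approximation from silhouettes can be sketched as follows. A sphere is used so the silhouettes have a closed form, whereas the real algorithm tests sampled points against binary silhouette images; note that silhouette methods actually estimate the visual hull, which bounds the volume from above:

        import numpy as np

        rng = np.random.default_rng(11)
        n, side, r = 200_000, 2.0, 0.5        # samples, bounding-cube side, sphere radius

        # Sample points in the bounding cube; keep a point if each of its three
        # orthographic projections falls inside the corresponding silhouette
        # (for a sphere every silhouette is a disc of radius r).
        p = rng.uniform(-side / 2, side / 2, size=(n, 3))
        inside = np.ones(n, dtype=bool)
        for a, b in ((0, 1), (0, 2), (1, 2)):          # views along z, y and x
            inside &= p[:, a] ** 2 + p[:, b] ** 2 <= r ** 2

        estimate = inside.mean() * side ** 3
        hull = 8 * (2 - np.sqrt(2)) * r ** 3           # exact tri-cylinder visual hull
        print(f"MC estimate = {estimate:.3f}, visual hull = {hull:.3f}, "
              f"true sphere = {4 / 3 * np.pi * r ** 3:.3f}")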

  5. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    Science.gov (United States)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
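
    The core resampling experiment can be sketched in a few lines: draw repeated samples of fatigue lives from an assumed Weibull population and watch the spread of the rank-estimated L10 life shrink with population size (the Weibull parameters below are invented):

        import numpy as np

        rng = np.random.default_rng(2024)
        beta, eta = 2.0, 1.0e6       # Weibull slope and characteristic life (invented)
        L10_true = eta * (-np.log(0.9)) ** (1.0 / beta)

        def l10_spread(n_specimens, n_trials=5_000):
            # Draw n_specimens lives per trial, estimate the 10th percentile (L10)
            # from each sample, and report the 5th-95th percentile band of the
            # estimates relative to the true L10.
            lives = eta * rng.weibull(beta, size=(n_trials, n_specimens))
            l10 = np.quantile(lives, 0.10, axis=1)
            return np.percentile(l10 / L10_true, [5, 95])

        for n in (10, 30, 100):
            lo, hi = l10_spread(n)
            print(f"n = {n:3d} specimens: L10 estimate spans [{lo:.2f}, {hi:.2f}] x true")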

  6. Cell death following BNCT: A theoretical approach based on Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F., E-mail: francesca.ballarini@pv.infn.it [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bakeine, J. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Bortolussi, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bruschi, P. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Cansolino, L.; Clerici, A.M.; Ferrari, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Protti, N.; Stella, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Zonta, A.; Zonta, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Altieri, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy)

    2011-12-15

    In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterisation of the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called 'lethal aberrations' (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic the cell exposure to a mixed radiation field produced by the {sup 10}B(n,{alpha}) {sup 7}Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the {sup 14}N(n,p){sup 14}C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, thus allowing validation of the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this also allowed validation of the model for a BNCT exposure scenario, providing a useful predictive tool to bridge the gap between irradiation and cell death.

  7. Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination.

    Science.gov (United States)

    Liu, B; Xu, J; Liu, T; Ouyang, X

    2012-10-01

    To simulate the neutron-based sterilisation of anthrax contamination using the Monte Carlo N-particle (MCNP) 4C code. Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a (252)Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D-D neutron generator can create neutrons at up to 10(13) n s(-1) with current technology. All these enable an effective and low-cost method of killing anthrax spores. Using a reflector thicker than its saturation thickness has no effect on neutron energy deposition in the anthrax sample. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation also showed that the MCNP-simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimates. The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g (252)Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D-D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D-D neutron generator output >10(13) n s(-1) should be attainable in the near future. This indicates that we could use a D-D neutron generator to sterilise anthrax contamination within several seconds.

  8. The use of tetrahedral mesh geometries in Monte Carlo simulation of applicator based brachytherapy dose distributions

    International Nuclear Information System (INIS)

    Fonseca, Gabriel Paiva; Yoriyaz, Hélio; Landry, Guillaume; White, Shane; Reniers, Brigitte; Verhaegen, Frank; D’Amours, Michel; Beaulieu, Luc

    2014-01-01

    Accounting for brachytherapy applicator attenuation is part of the recommendations from the recent report of AAPM Task Group 186. To do so, model based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators such as the Fletcher Williamson gynaecological applicator or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), the latter may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries (MG)). Recent versions of the Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for 192Ir sources and 50 kV EBS were studied: a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG derived dose distributions were considered as reference and used to validate MG models by comparing dose distribution ratios. In general, agreement within 1% for the dose calculations was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required longer computation times by a factor of at least 2 for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than Geant4 in some cases. In conclusion, we presented methods allowing for high fidelity modelling with results equivalent to CSG. To the best of our knowledge MG offers the most accurate representation of an irregular APBI balloon applicator. (paper)

  9. Optimization of FIBMOS Through 2D Silvaco ATLAS and 2D Monte Carlo Particle-based Device Simulations

    OpenAIRE

    Kang, J.; He, X.; Vasileska, D.; Schroder, D. K.

    2001-01-01

    Focused Ion Beam MOSFETs (FIBMOS) demonstrate large enhancements in core device performance areas such as output resistance, hot electron reliability and voltage stability upon channel length or drain voltage variation. In this work, we describe an optimization technique for FIBMOS threshold voltage characterization using the 2D Silvaco ATLAS simulator. Both ATLAS and 2D Monte Carlo particle-based simulations were used to show that FIBMOS devices exhibit enhanced current drive ...

  10. Transformation of human osteoblast cells to the tumorigenic phenotype by depleted uranium-uranyl chloride.

    OpenAIRE

    Miller, A C; Blakely, W F; Livengood, D; Whittaker, T; Xu, J; Ejnik, J W; Hamilton, M M; Parlette, E; John, T S; Gerstenberg, H M; Hsu, H

    1998-01-01

    Depleted uranium (DU) is a dense heavy metal used primarily in military applications. Although the health effects of occupational uranium exposure are well known, limited data exist regarding the long-term health effects of internalized DU in humans. We established an in vitro cellular model to study DU exposure. Microdosimetric assessment, determined using a Monte Carlo computer simulation based on measured intracellular and extracellular uranium levels, showed that few (0.0014%) cell nuclei...

  11. A comparison study for dose calculation in radiation therapy: pencil beam kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the effective dose in patients accurately. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they are so time-consuming. Pencil beam kernel based convolution/superposition methods have also been proposed to correct for those effects, and many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system against Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for dose calculation.

  12. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method

    International Nuclear Information System (INIS)

    Han Jingru; Chen Yixue; Yuan Longjun

    2013-01-01

    The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in radiation shielding design. The Monte Carlo method treats geometry exactly but is time-consuming for deep-penetration problems. The discrete ordinates method is computationally efficient, but it is costly in computer memory and suffers from ray effects. Neither method alone is adequate for shielding calculations of large, complex nuclear facilities. To solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method was developed. The coupling is implemented in an interface program that transfers the MC particle probability distribution and the SN angular flux, combining the advantages of both methods. Test problems in Cartesian and cylindrical coordinates were calculated with the coupling method, and the results show satisfactory agreement with MCNP and TORT, confirming the correctness of the program. (authors)
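
    The hand-off such an interface performs can be illustrated in miniature; the sketch below, with an assumed quadrature order, interface area and toy tally arrays, bins MC surface crossings into discrete-ordinate directions to form an SN boundary angular flux (it is not the program's actual data format):

        import numpy as np

        # Toy hand-off at a 1-D slab interface: bin MC surface crossings into
        # discrete-ordinate directions to build an S_N boundary angular flux.
        # Quadrature order, interface area and tally arrays are assumed inputs.
        N = 8
        mu, w = np.polynomial.legendre.leggauss(N)   # ordinates and weights
        fwd = mu > 0.0                               # forward-going ordinates only

        rng = np.random.default_rng(0)
        mu_p = rng.uniform(0.0, 1.0, 100_000)        # crossing cosines (toy tally)
        wt_p = rng.exponential(1.0, 100_000)         # particle weights (toy tally)

        area = 1.0                                   # interface area, cm^2 (assumed)
        idx = np.argmin(np.abs(mu_p[:, None] - mu[fwd][None, :]), axis=1)
        psi = np.zeros(fwd.sum())
        np.add.at(psi, idx, wt_p)                    # accumulate weight per ordinate
        # surface-crossing estimator: divide by area, |mu| and quadrature weight
        psi /= area * np.abs(mu[fwd]) * w[fwd]
        print(dict(zip(np.round(mu[fwd], 3), np.round(psi, 1))))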

  13. Monte Carlo-based QA for IMRT of head and neck cancers

    Science.gov (United States)

    Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.

    2007-06-01

    It is well known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record and verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal
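
    The dose-comparison step of such a QA workflow reduces to an array operation once both dose grids are available; a minimal sketch with toy dose grids and an assumed 10%-of-maximum reporting threshold:

        import numpy as np

        # Minimal sketch of the re-calculation check: voxelwise comparison of a
        # TPS dose grid against an MC recalculation inside a high-dose region.
        # The dose arrays and the 10% threshold are assumed illustration inputs.
        rng = np.random.default_rng(1)
        d_tps = rng.uniform(0.0, 2.0, (64, 64, 40))         # toy TPS dose (Gy)
        d_mc = d_tps * rng.normal(1.0, 0.02, d_tps.shape)   # toy MC dose (Gy)

        mask = d_mc > 0.10 * d_mc.max()                     # above 10% of max
        diff = 100.0 * (d_tps[mask] - d_mc[mask]) / d_mc.max()  # % of global max
        print(f"mean diff {diff.mean():+.2f}%, "
              f"voxels beyond +/-5%: {(np.abs(diff) > 5).mean():.2%}")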

  14. Unbiased Selective Isolation of Protein N-Terminal Peptides from Complex Proteome Samples Using Phospho Tagging (PTAG) and TiO2-based Depletion

    NARCIS (Netherlands)

    Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.

    2012-01-01

    A positional proteomics strategy for global N-proteome analysis is presented based on phospho tagging (PTAG) of internal peptides followed by depletion by titanium dioxide (TiO2) affinity chromatography. Therefore, N-terminal and lysine amino groups are initially completely dimethylated with

  15. Lab Scale Study of the Depletion of Mullite/Corundum-Based Refractories Through Reaction with Scaffold Materials

    International Nuclear Information System (INIS)

    Stjernberg, J; Antti, M-L; Ion, J C; Lindblom, B

    2011-01-01

    To investigate the mechanisms underlying the depletion of mullite/corundum-based refractory bricks used in rotary kilns for iron ore pellet production, the reaction mechanisms between scaffold material and refractory bricks have been studied at the laboratory scale. Alkali additions were used to enhance the reaction rates between the materials. The morphological changes and active chemical reactions at the refractory/scaffold material interface in the samples were characterized using scanning electron microscopy (SEM), thermal analysis (TA) and X-ray diffraction (XRD). No reaction products of alkali and hematite (Fe2O3) were detected; however, alkali dissolves the mullite in the bricks. Phases such as nepheline (Na2O·Al2O3·2SiO2), kalsilite (K2O·Al2O3·2SiO2), leucite (K2O·Al2O3·4SiO2) and potassium β-alumina (K2O·11Al2O3) were formed as a consequence of reactions between alkali and the bricks.

  16. The multiphase flow system used in exploiting depleted reservoirs: water-based Micro-bubble drilling fluid

    International Nuclear Information System (INIS)

    Zheng Lihui; He Xiaoqing; Wang Xiangchun; Fu Lixia

    2009-01-01

    Water-based micro-bubble drilling fluid, which is used to exploit depleted reservoirs, is a complicated multiphase flow system composed of gas, water, oil, polymer, surfactants and solids. The gas phase is separated from the bulk water by two layers and three membranes. From each gas-phase centre to the bulk water these are the 'surface tension reducing membrane', 'high viscosity layer', 'high viscosity fixing membrane', 'compatibility enhancing membrane' and 'concentration transition layer of linear high polymer (LHP) and surfactants'. The 'surface tension reducing membrane', 'high viscosity layer' and 'high viscosity fixing membrane' bond closely to pack air, forming an 'air-bag'; the 'compatibility enhancing membrane' and 'concentration transition layer of LHP and surfactants' adsorb outside the 'air-bag' to form an 'incompact zone'. Viewed this way, the 'air-bag' and 'incompact zone' together compose a micro-bubble. Dynamic changes of the 'incompact zone' enable micro-bubbles to exist alone or aggregate together, and give the whole fluid, which can wet both hydrophilic and hydrophobic surfaces, a very high viscosity at extremely low shear rates but good fluidity at higher shear rates. When the water-based micro-bubble drilling fluid encounters leakage zones, it automatically regulates the sizes and shapes of the bubbles according to the slot width of a fracture, the height of a cavern or the aperture of openings, or seals them by exploiting the system's high viscosity at very low shear rates. Measurements of the rheological parameters indicate that water-based micro-bubble drilling fluid has very high plastic viscosity, yield point, initial gel and final gel, and a high ratio of yield point to plastic viscosity. All of these properties make the multiphase flow system meet the requirements of the petroleum drilling industry. Research on the interface between gas and bulk water in this multiphase flow system can provide us with information for synthesizing effective

  17. Development of a consistent Monte Carlo-deterministic transport methodology based on the method of characteristics and MCNP5

    International Nuclear Information System (INIS)

    Karriem, Z.; Ivanov, K.; Zamonsky, O.

    2011-01-01

    This paper presents work performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology is based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray tracing tool for the MOC, verification of the ray-tracing indexing scheme developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)

  18. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments, and quantitative analyses are conducted to measure the simulation error. This simulator can serve as an accurate prediction tool for some MEMS fabrication processes.
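
    The surface-evolution half of such a simulator can be illustrated with a first-order upwind level set step; the sketch below evolves an etch front under a binary mask with constant normal speed on the full grid (a real DRIE simulator would restrict updates to a narrow band and take the speed from the MC flux model; all values are illustrative):

        import numpy as np

        # Toy level-set etch: solve phi_t + F |grad phi| = 0 with Godunov
        # upwinding (periodic wrap at the borders, acceptable for a toy).
        n, h, dt = 128, 1.0, 0.4
        y, x = np.mgrid[0:n, 0:n] * h
        phi = y - 20.0                                # zero level = initial surface
        F = np.where(np.abs(x - 64) < 16, 1.0, 0.0)   # etch speed: open window only

        for _ in range(60):
            dxm = (phi - np.roll(phi, 1, 1)) / h
            dxp = (np.roll(phi, -1, 1) - phi) / h
            dym = (phi - np.roll(phi, 1, 0)) / h
            dyp = (np.roll(phi, -1, 0) - phi) / h
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                           np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            phi -= dt * F * grad                      # advance the front

        etched = (phi[:, 64] < 0).sum() * h - 20.0    # front advance, centre line
        print(f"etch depth under the window after 60 steps: {etched:.1f} grid units")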

  19. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    Directory of Open Access Journals (Sweden)

    Xueli Chen

    2010-01-01

    During the past decade, the Monte Carlo method has been widely applied in optical imaging to simulate the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which covers the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

  20. Depleted uranium management alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  1. Depleted uranium management alternatives

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  2. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    Science.gov (United States)

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene and diesel are produced from crude oil over different distillation ranges: 35~205 °C for gasoline, 140~250 °C for kerosene and 180~370 °C for diesel. The carbon chain lengths of the three mineral oils also differ: C7 to C11 for gasoline, C12 to C15 for kerosene and C15 to C18 for diesel. Recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon-number distributions. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method is proposed for determining the component contents of mineral oil mixtures with overlapping spectra: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using a quasi-Monte Carlo method, an optimisation algorithm determines the optimal number of characteristic peaks and the range of the integration region, and the resulting nonlinear equations are solved with the BFGS method (a rank-two update method named after the initials of its inventors, Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the selected region is sensitive to small changes in the fluorescence spectral line, so small changes in component content can be measured sensitively. At the same time, compared with single-point measurement, sensitivity is improved because averaging over many selected points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of single mineral oils and of the mixture were measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil regarded as a whole rather than as individual components. Six characteristic peaks are
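
    The peak-power integration step can be sketched with a Halton quasi-Monte Carlo point set; the Gaussian "spectrum", the integration rectangle and the sample size below are all stand-ins, not the paper's data:

        import numpy as np

        # QMC integration of a toy fluorescence surface I(ex, em) over a
        # characteristic-peak rectangle, using Halton points in bases 2 and 3.
        def halton(n, base):
            seq = np.zeros(n)
            for i in range(1, n + 1):
                f, k, v = 1.0, i, 0.0
                while k > 0:
                    f /= base
                    v += f * (k % base)
                    k //= base
                seq[i - 1] = v
            return seq

        def spectrum(ex, em):      # toy overlapping-peaks surface (assumed)
            return (np.exp(-((ex - 230)**2 + (em - 340)**2) / 200.0) +
                    0.6 * np.exp(-((ex - 260)**2 + (em - 360)**2) / 300.0))

        n = 4096
        ex0, ex1, em0, em1 = 220, 245, 330, 355   # integration region (assumed)
        ex = ex0 + (ex1 - ex0) * halton(n, 2)     # low-discrepancy points
        em = em0 + (em1 - em0) * halton(n, 3)
        peak_power = (ex1 - ex0) * (em1 - em0) * spectrum(ex, em).mean()
        print(f"QMC peak power over the region: {peak_power:.2f}")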

  3. Development of a hybrid multi-scale phantom for Monte-Carlo based internal dosimetry

    International Nuclear Information System (INIS)

    Marcatili, S.; Villoing, D.; Bardies, M.

    2015-01-01

    Full text of publication follows. Aim: in recent years several phantoms have been developed for radiopharmaceutical dosimetry in clinical and preclinical settings. Voxel-based models (Zubal, Max/Fax, ICRP110) were developed to reach a level of realism that could not be achieved by mathematical models. In turn, 'hybrid' models (XCAT, MOBY/ROBY, Mash/Fash) allow a further degree of versatility by offering the possibility to finely tune each model according to various parameters. However, even 'hybrid' models require the generation of a voxel version for Monte Carlo modelling of radiation transport. Since absorbed dose simulation time is strictly related to the spatial sampling of the geometry, a compromise must be made between phantom realism and simulation speed. This trade-off leads on one side to an overestimation of the size of small radiosensitive structures such as the skin or hollow organ walls, and on the other to unnecessarily detailed voxelization of large, homogeneous structures. The aim of this work is to develop a hybrid multi-resolution phantom model for Geant4 and Gate, to better characterize energy deposition in small structures while preserving reasonable computation times. Materials and Methods: we have developed a pipeline for the conversion of pre-existing phantoms into a multi-scale Geant4 model. Meshes of each organ are created from the raw binary images of a phantom and then voxelized to the smallest spatial sampling required by the user. The user can then decide to re-sample the internal part of each organ while leaving a layer of the smallest voxels at the edge of the organ. In this way the realistic shape of the organ is maintained while the voxel count in the inner part is reduced. For hollow organs, the wall is always modelled using the smallest voxel sampling. This approach allows choosing a different voxel resolution for each organ according to the specific application. Results: preliminary results show that it is possible to
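
    The re-sampling idea can be sketched directly on a binary organ mask: keep the finest voxels in a thin shell at the organ edge and coarsen the interior. The sphere "organ", shell thickness and coarsening factor below are assumed:

        import numpy as np
        from scipy import ndimage

        # Multi-resolution sketch: fine shell at the boundary, coarse interior.
        n = 96
        z, y, x = np.mgrid[0:n, 0:n, 0:n]
        organ = (x - 48)**2 + (y - 48)**2 + (z - 48)**2 < 30**2   # toy organ mask

        shell_vox = 2                                    # fine-shell thickness
        interior = ndimage.binary_erosion(organ, iterations=shell_vox)
        shell = organ & ~interior                        # stays at fine sampling

        f = 4                                            # coarsening factor (assumed)
        coarse = interior.reshape(n//f, f, n//f, f, n//f, f).any(axis=(1, 3, 5))

        fine_kept, total_fine = shell.sum(), organ.sum()
        print(f"fine voxels kept: {fine_kept} of {total_fine} "
              f"({fine_kept/total_fine:.1%}); coarse interior voxels: {coarse.sum()}")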

  4. Monte Carlo based simulation of LIAC intraoperative radiotherapy accelerator along with beam shaper applicator

    Directory of Open Access Journals (Sweden)

    N Heidarloo

    2017-08-01

    Intraoperative electron radiotherapy is a radiotherapy method that delivers a high single fraction of the radiation dose to the patient in one session during surgery. The beam shaper applicator is one of the applicators recently employed with this method, and it has considerable application in the treatment of large tumours. In this study, the dosimetric characteristics of the electron beam produced by the LIAC intraoperative radiotherapy accelerator in conjunction with this applicator were evaluated through Monte Carlo simulation with the MCNP code. The results showed that the electron beam produced with the beam shaper applicator has the desired dosimetric characteristics, so the applicator can be considered for clinical purposes. Furthermore, the good agreement between the simulation results and practical dosimetry confirms the applicability of the Monte Carlo method in determining the dosimetric parameters of intraoperative radiotherapy electron beams.

  5. An NPT Monte Carlo Molecular Simulation-Based Approach to Investigate Solid-Vapor Equilibrium: Application to Elemental Sulfur-H2S System

    KAUST Repository

    Kadoura, Ahmad Salim; Salama, Amgad; Sun, Shuyu; Sherik, Abdelmounam

    2013-01-01

    In this work, a method to estimate solid elemental sulfur solubility in pure gases and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. This method is based on the isobaric-isothermal (NPT) ensemble and the Widom insertion technique
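
    The Widom step itself is compact: for stored configurations, insert ghost particles and average the Boltzmann factor of the insertion energy, mu_ex = -kT ln<exp(-dU/kT)>. The sketch below uses a toy (non-equilibrated) Lennard-Jones configuration purely to show the mechanics:

        import numpy as np

        # Widom test-particle insertion on a toy LJ configuration (reduced
        # units, eps = sigma = 1). A real calculation would average over many
        # equilibrated NPT snapshots rather than one random placement.
        rng = np.random.default_rng(2)
        L, N, kT = 12.0, 100, 1.2
        pos = rng.uniform(0.0, L, (N, 3))        # toy configuration

        def lj_energy(r2):                       # pair energy from r^2
            inv6 = 1.0 / r2**3
            return 4.0 * (inv6**2 - inv6)

        boltz = []
        for _ in range(2000):                    # trial ghost insertions
            trial = rng.uniform(0.0, L, 3)
            d = pos - trial
            d -= L * np.rint(d / L)              # minimum-image convention
            r2 = np.einsum('ij,ij->i', d, d)
            r2 = r2[r2 < (L / 2)**2]             # spherical cutoff
            dU = lj_energy(np.maximum(r2, 0.64)).sum()   # guard near-overlaps
            boltz.append(np.exp(-dU / kT))
        mu_ex = -kT * np.log(np.mean(boltz))
        print(f"Widom estimate of mu_ex: {mu_ex:.3f} (LJ reduced units)")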

  6. G4-STORK: A Geant4-based Monte Carlo reactor kinetics simulation code

    International Nuclear Information System (INIS)

    Russell, Liam; Buijs, Adriaan; Jonkmans, Guy

    2014-01-01

    Highlights: • G4-STORK is a new, time-dependent, Monte Carlo code for reactor physics applications. • G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. • G4-STORK was designed to simulate short-term fluctuations in reactor cores. • G4-STORK is well suited for simulating sub- and supercritical assemblies. • G4-STORK was verified through comparisons with DRAGON and MCNP. - Abstract: In this paper we introduce G4-STORK (Geant4 STOchastic Reactor Kinetics), a new, time-dependent, Monte Carlo particle tracking code for reactor physics applications. G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. The toolkit provides the fundamental physics models and particle tracking algorithms that track each particle in space and time; it is a framework for further development (e.g. for projects such as G4-STORK). G4-STORK derives reactor physics parameters (e.g. k_eff) from the continuous evolution of a population of neutrons in space and time in the given simulation geometry. In this paper we detail the major additions to the Geant4 toolkit that were necessary to create G4-STORK. These include a renormalization process that maintains a manageable number of neutrons in the simulation even in very sub- or supercritical systems, scoring processes (e.g. recording fission locations, total neutrons produced and lost, etc.) that allow G4-STORK to calculate the reactor physics parameters, and dynamic simulation geometries that can change over the course of the simulation to elicit reactor kinetics responses (e.g. fuel temperature reactivity feedback). The additions are verified through simple simulations and code-to-code comparisons with established reactor physics codes such as DRAGON and MCNP. Additionally, G4-STORK was developed to run a single simulation in parallel over many processors using MPI (Message Passing Interface) pipes
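
    The renormalization idea can be sketched as a weight-conserving "comb" that resamples a grown (or shrunken) population back to a target size; the population data below are toy stand-ins, and this is not necessarily the resampling rule G4-STORK implements:

        import numpy as np

        # Systematic ("comb") resampling: n_target equally spaced teeth over
        # the cumulative weight, one random offset; total weight is conserved.
        def comb(weights, n_target, rng):
            total = weights.sum()
            cdf = np.cumsum(weights)
            teeth = (rng.random() + np.arange(n_target)) * total / n_target
            survivors = np.searchsorted(cdf, teeth)   # indices of kept neutrons
            return survivors, np.full(n_target, total / n_target)

        rng = np.random.default_rng(3)
        w = rng.exponential(1.0, 25_000)              # weights of a grown population
        idx, w_new = comb(w, 10_000, rng)
        print(f"before: N={w.size}, W={w.sum():.1f}; "
              f"after: N={w_new.size}, W={w_new.sum():.1f}")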

  7. Evaluation of IMRT plans of prostate carcinoma from four treatment planning systems based on Monte Carlo

    International Nuclear Information System (INIS)

    Chi Zifeng; Han Chun; Liu Dan; Cao Yankun; Li Runxiao

    2011-01-01

    Objective: To recalculate with the Monte Carlo method the IMRT dose distributions from four TPSs, providing a platform for independent comparison and evaluation of plan quality. These results will help inform the clinical decision as to which TPS to use for prostate IMRT planning. Methods: Eleven prostate cancer cases were planned with the Corvus, Xio, Pinnacle and Eclipse TPSs. The plans were recalculated by Monte Carlo using the leaf sequences and MUs of the individual plans. Dose-volume histograms and isodose distributions were compared. Other quantities such as D_min (the minimum dose received by 99% of the CTV/PTV), D_max (the maximum dose received by 1% of the CTV/PTV), V_110%, V_105%, V_95% (the volume of the CTV/PTV receiving 110%, 105%, 95% of the prescription dose), the volume of rectum and bladder receiving >65 Gy and >40 Gy, and the volume of femur receiving >50 Gy were evaluated. Total segments and MUs were also compared. Results: The Monte Carlo results agreed with the dose distributions from the TPSs to within 3%/3 mm. The Xio, Pinnacle and Eclipse plans show less target dose heterogeneity and lower V_65 and V_40 for the rectum and bladder compared to the Corvus plans. The PTV D_min is about 2 Gy lower for Xio plans than for the others, while the Corvus plans have slightly lower femoral head V_50 (by 0.03% and 0.58%) than the others. The Corvus plans require by far the most segments (187.8) and MUs (1264.7) to deliver, and the Pinnacle plans require the fewest segments (82.4) and MUs (703.6). Conclusions: We have tested an independent Monte Carlo dose calculation system for dose reconstruction and plan evaluation. This system provides a platform for the fair comparison and evaluation of treatment plans, facilitating clinical decision making when selecting a TPS and beam delivery system for particular treatment sites. (authors)
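
    The plan metrics quoted above are simple order statistics of the dose arrays; a minimal sketch with toy voxel doses and an assumed prescription dose:

        import numpy as np

        # DVH-style metrics from dose samples inside a structure; the
        # prescription and the toy dose values are assumed illustrations.
        rng = np.random.default_rng(4)
        rx = 76.0                                  # prescription dose, Gy (assumed)
        ptv_dose = rng.normal(rx, 1.5, 50_000)     # toy PTV voxel doses
        rectum_dose = rng.gamma(4.0, 8.0, 30_000)  # toy rectum voxel doses

        d_min = np.percentile(ptv_dose, 1)         # D_min: dose covering 99% of PTV
        d_max = np.percentile(ptv_dose, 99)        # D_max: dose to the hottest 1%
        v95 = (ptv_dose >= 0.95 * rx).mean() * 100 # % volume getting >= 95% of rx
        v65_rectum = (rectum_dose > 65.0).mean() * 100
        print(f"D_min {d_min:.1f} Gy, D_max {d_max:.1f} Gy, "
              f"V_95% {v95:.1f}%, rectum V_65Gy {v65_rectum:.1f}%")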

  8. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
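
    The Gillespie algorithm at the core of such samplers is short; a minimal sketch for a birth-death process with arbitrary illustration rates (not one of the paper's examples):

        import numpy as np

        # Gillespie SSA for X -> X+1 at rate b, X -> X-1 at rate d*X.
        def gillespie(b, d, x0, t_end, rng):
            t, x, path = 0.0, x0, [(0.0, x0)]
            while t < t_end:
                rates = np.array([b, d * x])
                total = rates.sum()
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)     # time to next event
                if rng.random() < rates[0] / total:   # which event fires
                    x += 1
                else:
                    x -= 1
                path.append((t, x))
            return path

        rng = np.random.default_rng(5)
        path = gillespie(b=2.0, d=0.1, x0=5, t_end=50.0, rng=rng)
        print(f"{len(path)} events, final state X = {path[-1][1]}")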

  9. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    Science.gov (United States)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the influence factors of power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing and GDP of the secondary industry as the variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.

  10. Algorithm simulating the atom displacement processes induced by the gamma rays on the base of Monte Carlo method

    International Nuclear Information System (INIS)

    Cruz, C. M.; Pinera, I; Abreu, Y.; Leyva, A.

    2007-01-01

    The present work concerns the implementation of a Monte Carlo based calculation algorithm describing the occurrence of atom displacements induced by gamma radiation interactions in a given target material. Atom displacement processes were considered only on the basis of single elastic scattering interactions between fast secondary electrons and matrix atoms, which are ejected from their crystalline sites at recoil energies higher than a given threshold energy. Secondary electron transport was described using typical approaches in this field, in which consecutive small-angle scattering and very low energy transfer events are treated as a continuous, quasi-classical evolution of the electron state along a path length delimited by two discrete events, with large scattering angles and high energy losses, occurring at random. A limiting scattering angle was introduced and calculated according to Molière-Bethe-Goudsmit-Saunderson electron multiple scattering theory, which allows separating single-scattering from multiple-scattering processes; from this a modified McKinley-Feshbach electron elastic scattering cross section arises. This distribution was statistically sampled and simulated in the framework of the Monte Carlo method to model discrete single electron scattering processes, particularly those leading to atom displacement events. The possibility of adding this algorithm to existing open Monte Carlo code systems is analysed, in order to improve their capabilities. (Author)
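
    The displacement-counting step can be sketched by inverse-transform sampling a Rutherford-like 1/T^2 recoil spectrum and applying a threshold; the energy bounds and threshold below are invented for illustration, and the paper's modified McKinley-Feshbach form is not reproduced:

        import numpy as np

        # Sample recoil energies T from p(T) ~ 1/T^2 on [t_min, t_max] via the
        # inverse CDF, then count recoils above a displacement threshold E_d.
        rng = np.random.default_rng(6)
        t_min, t_max, e_d = 1.0, 200.0, 25.0     # eV (assumed values)

        u = rng.random(1_000_000)
        T = t_min * t_max / (t_max - u * (t_max - t_min))   # inverse transform
        displaced = (T > e_d).mean()
        print(f"fraction of recoils above E_d = {e_d} eV: {displaced:.4f}")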

  11. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
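
    A one-dimensional caricature of the two-step scheme, with a dense Chebyshev node set standing in for the sparse grid and a golden-ratio sequence standing in for the quasi-Monte Carlo set; model, prior and observation are toys:

        import numpy as np

        def forward(k):               # stands in for an expensive forward model
            return np.exp(-0.5 * k) + 0.1 * k**2

        # Step 1: polynomial surrogate built from a few "expensive" evaluations
        nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)     # Chebyshev on [-1,1]
        k_nodes = 2.0 + nodes                                # map to [1, 3]
        coef = np.polyfit(k_nodes, forward(k_nodes), deg=8)

        obs, sigma = 0.75, 0.02                              # toy data and noise
        def log_post(k):                                     # flat prior on [1, 3]
            return -0.5 * ((np.polyval(coef, k) - obs) / sigma) ** 2

        # Step 2: quasi-random samples evaluated on the cheap surrogate
        n = 2**14
        i = np.arange(1, n + 1)
        k_samples = 1.0 + 2.0 * ((i * 0.6180339887498949) % 1.0)
        w = np.exp(log_post(k_samples))
        post_mean = np.sum(w * k_samples) / np.sum(w)
        print(f"surrogate-based posterior mean of k: {post_mean:.3f}")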

  12. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    International Nuclear Information System (INIS)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg

    2015-01-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user friendly graphical interface was developed allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed a good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.
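
    The KERMA-approximation track-length estimator is easy to show on a 1-D water slab for primary photons only; the attenuation data below are rough values for ~100 keV photons, used solely for illustration:

        import numpy as np

        # Track-length estimator: each history scores (weight x track length)
        # per voxel; under the KERMA approximation, dose = fluence x E x mu_en/rho.
        rng = np.random.default_rng(7)
        mu = 0.17           # linear attenuation coefficient, 1/cm (approximate)
        mu_en_rho = 0.025   # mass energy-absorption coefficient, cm^2/g (approx.)
        E_MeV, n_hist = 0.1, 200_000
        edges = np.linspace(0.0, 20.0, 41)            # 0.5 cm voxels
        track = np.zeros(edges.size - 1)

        depth = rng.exponential(1.0 / mu, n_hist)     # distance to 1st interaction
        for lo, hi, j in zip(edges[:-1], edges[1:], range(track.size)):
            # track length inside voxel j for a forward-only toy beam
            track[j] = np.clip(depth, lo, hi).sum() - lo * n_hist
        track /= n_hist                               # mean track per history, cm

        # fluence ~ track length / voxel volume (unit beam area); dose in MeV/g
        dose = (track / 0.5) * E_MeV * mu_en_rho
        print("relative depth dose:", np.round(dose / dose[0], 3)[:8])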

  13. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation- and safety-related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivity to nuclear data demonstrated in certain M-C codes, using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of the nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX’s ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
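
    The sampling loop itself can be shown on a toy two-group model: draw correlated cross-section perturbation factors from a multigroup covariance, re-evaluate the output for each sample, and read off the spread. Covariance values and the k_inf formula are illustrative, not evaluated data:

        import numpy as np

        # Stochastic sampling of nuclear data on a toy two-group k_inf model.
        rng = np.random.default_rng(8)
        nominal = np.array([0.010, 0.200])        # nu-Sigma_f, groups 1 and 2 (toy)
        rel_cov = np.array([[0.0004, 0.0001],     # 2%/1% rel. std., 0.5 corr. (toy)
                            [0.0001, 0.0001]])
        sig_a = np.array([0.008, 0.120])          # absorption, held fixed (toy)
        phi = np.array([0.3, 0.7])                # group flux weights (toy)

        factors = rng.multivariate_normal(np.ones(2), rel_cov, size=5000)
        nsf = nominal * factors                   # perturbed nu-Sigma_f per sample
        k = (nsf * phi).sum(axis=1) / (sig_a * phi).sum(axis=1)
        print(f"k_inf = {k.mean():.5f} +/- {k.std():.5f} "
              f"({1e5 * k.std() / k.mean():.0f} pcm relative spread)")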

  14. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation- and safety-related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivity to nuclear data demonstrated in certain M-C codes, using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of the nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX’s ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  15. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach to modelling the bidirectional scattering-surface reflectance distribution function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of the imitation of human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  16. Feasibility Study of Core Design with a Monte Carlo Code for APR1400 Initial core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinsun; Chang, Do Ik; Seong, Kibong [KEPCO NF, Daejeon (Korea, Republic of)

    2014-10-15

    The Monte Carlo calculation is becoming more popular and useful nowadays due to rapid progress in computing power and parallel calculation techniques. Recently, there have been many attempts to analyze a commercial core with a Monte Carlo transport code using this enhanced computing capability. In this paper, a Monte Carlo calculation of the APR1400 initial core has been performed and the results are compared with those of a conventional deterministic code to assess the feasibility of core design using a Monte Carlo code. SERPENT, a 3D continuous-energy Monte Carlo reactor physics burnup calculation code, is used for this purpose, with the KARMA-ASTRA code system as the deterministic code for comparison. A preliminary investigation of the feasibility of commercial core design with a Monte Carlo code was performed in this study. Simplified geometry modeling was used for the reactor core surroundings, and the reactor coolant model is based on a two-region model. The reactivity difference at the HZP ARO condition between the Monte Carlo code and the deterministic code is consistent, and the reactivity difference during depletion can be reduced by adopting a realistic moderator temperature. The reactivity difference calculated at the HFP, BOC, ARO equilibrium condition was 180 ± 9 pcm with the axial moderator temperature of the deterministic code. Computing time will be a significant burden for the application of Monte Carlo codes to commercial core design, even with parallel computing, because numerous core simulations are required for an actual loading pattern search. One remedy would be a combination of the Monte Carlo code and the deterministic code to generate the physics data. A comparison of physics parameters with sophisticated moderator temperature modeling and depletion will be performed in a further study.

  17. Transcriptome-based identification of pro- and antioxidative gene expression in kidney cortex of nitric oxide-depleted rats

    NARCIS (Netherlands)

    Wesseling, Sebastiaan; Joles, Jaap A.; van Goor, Harry; Bluyssen, Hans A.; Kemmeren, Patrick; Holstege, Frank C.; Koomans, Hein A.; Braam, Branko

    2007-01-01

    Nitric oxide (NO) depletion in rats induces severe endothelial dysfunction within 4 days. Subsequently, hypertension and renal injury develop, which are ameliorated by alpha-tocopherol (VitE) cotreatment. The hypothesis of the present study was that NO synthase (NOS) inhibition induces a renal

  18. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    Science.gov (United States)

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two

  19. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  20. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    International Nuclear Information System (INIS)

    Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang

    2014-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of the phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights so as to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the d_max dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
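
    Dropping the non-negativity and symmetry details, the commissioning step is a regularised least-squares fit of the PSL weights; a minimal sketch with toy dose kernels and a Tikhonov second-difference smoother standing in for the paper's augmented Lagrangian treatment:

        import numpy as np

        # Fit PSL weights w by minimising ||A w - m||^2 + lam ||L w||^2, where
        # column j of A is the pre-computed water dose of PSL j and m is the
        # measured dose. A, m and lam are toy stand-ins for illustration.
        rng = np.random.default_rng(9)
        n_meas, n_psl = 200, 40
        A = np.abs(rng.normal(1.0, 0.3, (n_meas, n_psl)))   # per-PSL kernels (toy)
        w_true = 1.0 + 0.2 * np.sin(np.linspace(0, 3, n_psl))
        m = A @ w_true + rng.normal(0.0, 0.01, n_meas)      # "measured" dose (toy)

        L = np.diff(np.eye(n_psl), 2, axis=0)               # 2nd-difference smoother
        lam = 0.1
        # closed-form normal equations of the Tikhonov-regularised problem
        w = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ m)
        w = np.clip(w, 0.0, None)                           # crude non-negativity
        print(f"max weight error vs truth: {np.abs(w - w_true).max():.4f}")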

  1. A Monte Carlo based development of a cavity theory for solid state detectors irradiated in electron beams

    International Nuclear Information System (INIS)

    Mobit, P.

    2002-01-01

    Recent Monte Carlo simulations have shown that the assumption in the small cavity theory (and its extension by Spencer-Attix) that the cavity does not perturb the electron fluence is seriously flawed. For depths beyond d_max, not only is there a significant difference between the energy spectra in the medium and in the solid cavity materials, but there is also a significant difference in the number of low-energy electrons which cannot travel across the solid cavity and hence deposit their dose in it (i.e. stoppers, electrons whose residual range is less than the cavity thickness). The number of these low-energy electrons that are unable to travel across the solid-state cavity increases with depth and with the effective thickness of the detector. This also invalidates the assumption in the small cavity theory that most of the dose deposited in a small cavity is delivered by crossers. Based on Monte Carlo simulations, a new cavity theory for solid state detectors irradiated in electron beams has been proposed as: D_med(p) = D_det(p) × s^(S-A)_(med,det) × γ_e(p) × S_T, where D_med(p) is the dose to the medium at point p, D_det(p) is the average detector dose at the same point, s^(S-A)_(med,det) is the Spencer-Attix mass collision stopping power ratio of the medium to the detector material, γ_e(p) is the electron fluence perturbation correction factor and S_T is a stopper-to-crosser correction factor that corrects for the dependence of the stopper-to-crosser ratio on depth and effective cavity size. Monte Carlo simulations have been computed for all the terms in this equation. The new cavity theory has been tested against the Spencer-Attix cavity equation as the small-cavity limiting case, and also against Monte Carlo simulations. The agreement between this new cavity theory and Monte Carlo simulations is within 0.3%. (author)

  2. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smartphone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with physical measurements. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.

  3. Cherenkov radiation-based three-dimensional position-sensitive PET detector: A Monte Carlo study.

    Science.gov (United States)

    Ota, Ryosuke; Yamada, Ryoko; Moriya, Takahiro; Hasegawa, Tomoyuki

    2018-05-01

    Cherenkov radiation has recently received attention due to its prompt emission, which has the potential to improve the timing performance of radiation detectors dedicated to positron emission tomography (PET). In this study, a Cherenkov-based three-dimensional (3D) position-sensitive radiation detector is proposed, composed of a monolithic lead fluoride (PbF2) crystal and a photodetector array whose signals can be read out independently. Monte Carlo simulations were performed to estimate the performance of the proposed detector. The position and time resolution were evaluated under various practical conditions. The radiator size and various properties of the photodetector, e.g., readout pitch and single photon timing resolution (SPTR), were parameterized. The single photon time response of the photodetector was assumed to be a single Gaussian for simplicity, and the photodetection efficiency was taken to be an ideal 100% for all wavelengths. Compton scattering was included in the simulations but only partly analyzed. To estimate the position at which a γ-ray interacted in the Cherenkov radiator, the center-of-gravity (COG) method was employed. In addition, to estimate the depth of interaction (DOI), principal component analysis (PCA), a multivariate analysis method used to identify patterns in data, was employed; the time-space distribution of the Cherenkov photons was quantified to perform the PCA. To evaluate the coincidence time resolution (CTR), the time difference of two independent γ-ray events was calculated, with the detection time defined as the first photon time after the SPTR of the photodetector was taken into account. The position resolution on the photodetector plane could be estimated with high accuracy using a small number of Cherenkov photons. Moreover, PCA showed an ability to estimate the DOI. The position resolution depends heavily on the pitch of the photodetector array and the radiator
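
    The two position estimators are easy to show in miniature: a centre-of-gravity over the photon counts for the transverse coordinate, and a principal-component decomposition of a per-photon time-space cloud as the DOI-sensitive feature. Pitch, counts and the photon cloud below are assumed, with no optical transport modelled:

        import numpy as np

        rng = np.random.default_rng(10)
        pitch = 3.0                                    # detector pitch, mm (assumed)
        xc = np.arange(8) * pitch - 10.5               # channel centres, 8x1 row
        counts = np.array([1, 3, 9, 22, 18, 7, 2, 1])  # toy Cherenkov photon counts

        x_cog = (counts * xc).sum() / counts.sum()     # centre-of-gravity estimate
        print(f"COG position estimate: {x_cog:.2f} mm")

        # PCA on per-photon (x, t) pairs: the leading component captures the
        # correlated time-space spread that carries DOI information
        x_ph = rng.normal(x_cog, 2.0, 400)
        t_ph = 0.008 * x_ph + rng.normal(0.0, 0.02, 400)   # toy correlation (a.u.)
        data = np.column_stack([x_ph, t_ph])
        data -= data.mean(axis=0)
        _, s, vt = np.linalg.svd(data, full_matrices=False)
        print(f"principal axis (x, t): {vt[0].round(4)}, "
              f"explained variance: {s[0]**2 / (s**2).sum():.2%}")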

  4. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  5. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  6. Monte Carlo based electron treatment planning and cutout output factor calculations

    Science.gov (United States)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and the larger errors in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as to calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.

  7. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature-reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations, a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
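
    The CSAF derivation described above reduces to dividing upper percentiles of the simulated population distribution by its median. A hedged sketch, with lognormal placeholder inputs rather than the paper's PBK kinetic constants:

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100_000

        # Hypothetical per-individual kinetic factors (e.g. oxidation, sulfation activity).
        activation = rng.lognormal(mean=0.0, sigma=0.4, size=N)
        sulfation = rng.lognormal(mean=0.0, sigma=0.5, size=N)
        oxidation = rng.lognormal(mean=0.0, sigma=0.3, size=N)

        # Toy surrogate for the predicted metabolite formation per individual.
        formation = activation * sulfation / oxidation

        p50, p90, p99 = np.percentile(formation, [50, 90, 99])
        print(f"CSAF(90th) = {p90 / p50:.2f}")   # cf. 3.2 reported in the abstract
        print(f"CSAF(99th) = {p99 / p50:.2f}")   # cf. 6.4 reported in the abstract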

  8. A Newton-based Jacobian-free approach for neutronic-Monte Carlo/thermal-hydraulic static coupled analysis

    International Nuclear Information System (INIS)

    Mylonakis, Antonios G.; Varvayanni, M.; Catsaros, N.

    2017-01-01

    Highlights: •A Newton-based Jacobian-free Monte Carlo/thermal-hydraulic coupling approach is introduced. •OpenMC is coupled with COBRA-EN with a Newton-based approach. •The introduced coupling approach is tested in numerical experiments. •The performance of the new approach is compared with the traditional “serial” coupling approach. -- Abstract: In the field of nuclear reactor analysis, multi-physics calculations that account for the coupled nature of the neutronic and thermal-hydraulic phenomena are of major importance for both reactor safety and design. So far, in the context of Monte Carlo neutronic analysis, a “serial” algorithm has mainly been used for coupling with thermal-hydraulics. The main motivation of this work is the interest in an algorithm that could maintain the distinct treatment of the involved fields within a tight coupling context, which could translate into higher convergence rates and more stable behaviour. This work investigates the possibility of replacing the usually used “serial” iteration with an approximate Newton algorithm. The selected algorithm, called Approximate Block Newton, is actually a version of the Jacobian-free Newton Krylov method suitably modified for coupling mono-disciplinary solvers. Within this Newton scheme the linearised system is solved with a Krylov solver in order to avoid the creation of the Jacobian matrix. A coupling algorithm between Monte Carlo neutronics and thermal-hydraulics based on the above-mentioned methodology is developed and its performance is analysed. More specifically, OpenMC, a Monte Carlo neutronics code, and COBRA-EN, a thermal-hydraulics code for sub-channel and core analysis, are merged in a coupling scheme using the Approximate Block Newton method, aiming to examine the performance of this scheme and compare it with that of the “traditional” serial iterative scheme. First results show a clear improvement of the convergence, especially in problems where significant
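
    A minimal sketch of the Jacobian-free Newton-Krylov mechanics referred to above, using a toy two-field residual in place of the OpenMC/COBRA-EN coupling; the Jacobian-vector product is approximated by finite differences, so no Jacobian matrix is ever formed:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u):
            """Toy coupled residual F(u) = 0 mimicking power <-> temperature feedback."""
            p, t = u
            return np.array([p - 1.0 / (1.0 + 0.1 * t),   # "neutronics": power vs temperature
                             t - 5.0 * p])                # "thermal-hydraulics"

        def jfnk_solve(u0, tol=1e-10, eps=1e-7, max_newton=20):
            u = u0.astype(float)
            for _ in range(max_newton):
                F = residual(u)
                if np.linalg.norm(F) < tol:
                    break
                # Matrix-free Jacobian-vector product: J v ~ (F(u + eps*v) - F(u)) / eps.
                Jv = LinearOperator((u.size, u.size), dtype=float,
                                    matvec=lambda v: (residual(u + eps * v) - F) / eps)
                du, info = gmres(Jv, -F)          # Krylov solve of the linearised system
                assert info == 0, "GMRES did not converge"
                u = u + du
            return u

        print(jfnk_solve(np.array([1.0, 0.0])))   # fixed point of the toy coupling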

  9. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans

    2015-01-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature-reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations, a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling

  10. Optimization of mass of plastic scintillator film for flow-cell based tritium monitoring: a Monte Carlo study

    International Nuclear Information System (INIS)

    Roy, Arup Singha; Palani Selvam, T.; Raman, Anand; Raja, V.; Chaudhury, Probal

    2014-01-01

    Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chambers, proportional counters and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined in order to achieve maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose.

  11. Monte Carlo application based on GEANT4 toolkit to simulate a laser–plasma electron beam line for radiobiological studies

    Energy Technology Data Exchange (ETDEWEB)

    Lamia, D., E-mail: debora.lamia@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Russo, G., E-mail: giorgio.russo@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Casarino, C.; Gagliano, L.; Candiano, G.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Labate, L. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Baffigi, F.; Fulgentini, L.; Giulietti, A.; Koester, P.; Palla, D. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); Gizzi, L.A. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Gilardi, M.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR, Segrate (Italy); University of Milano-Bicocca, Milano (Italy)

    2015-06-21

    We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage depth dose curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on the GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of the GEANT4 application by comparing experimental data with simulations. • Dosimetric characterization of the acceleration system.

  12. Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.

    Science.gov (United States)

    Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N

    2016-01-01

    Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health [1,2]. With the national dialogue around Medicaid expansion [12], the role of EHR data has become even more important. For their potential to be fully realized and contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, by using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina [11]. The Albuquerque data, which include patient addresses, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.

  13. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The mean relative deviation from the experimental data decreased from 18% to 4% after the parameters were optimized.

  14. The motion of discs and spherical fuel particles in combustion burners based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Granada, E.; Patino, D.; Porteiro, J.; Collazo, J.; Miguez, J.L.; Moran, J. [University of Vigo, E.T.S. Ingenieros Industriales, Lagoas-Marcosende s/n, 36200-Vigo (Spain)

    2010-04-15

    The position of pellet fuel particles in a burner largely determines their combustion behaviour. This paper addresses the simulated motion of circles and spheres, equivalent to pellets, and their final position in a packed bed confined inside rigid cylindrical walls and subject to a gravitational field. A simplified Monte Carlo statistical technique is described and applied, using the standard Metropolis method to simulate movement. This simplification makes the method easier to understand when applied to solid fuels in granular form, provided that they are only under gravitational forces. Unlike other authors, who contrasted a single parameter, we contrasted three: radial, bulk and local porosities, obtained via Voronoi tessellation. Our simulations reveal a structural order near the walls, which declines towards the centre of the container; no pattern was found in the Voronoi local porosity. Results with this simplified method are in agreement with more complex previously published studies. (author)
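
    A minimal sketch of the Metropolis scheme the abstract describes: discs settling under gravity inside rigid walls, with overlapping or wall-crossing moves rejected. Disc size, container dimensions and the effective "temperature" are illustrative choices:

        import numpy as np

        rng = np.random.default_rng(3)
        R, W, H, N, T = 0.5, 10.0, 50.0, 60, 0.1   # disc radius, box, disc count, temperature

        pos = np.column_stack([rng.uniform(R, W - R, N), rng.uniform(R, H - R, N)])

        def overlaps(i, trial):
            d2 = np.sum((np.delete(pos, i, axis=0) - trial) ** 2, axis=1)
            return np.any(d2 < (2 * R) ** 2)

        accepted = 0
        for _ in range(200_000):
            i = rng.integers(N)
            trial = pos[i] + rng.normal(scale=0.2, size=2)
            if not (R <= trial[0] <= W - R and trial[1] >= R) or overlaps(i, trial):
                continue                                   # reject: wall crossing or overlap
            dE = trial[1] - pos[i][1]                      # E = m*g*y with m*g = 1
            if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis criterion
                pos[i] = trial
                accepted += 1

        print(f"accepted {accepted} moves; mean bed height = {pos[:, 1].mean():.2f}")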

  15. PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry

    International Nuclear Information System (INIS)

    Leal, A.; Sanchez-Doblado, F.; Perucha, M.; Rincon, M.; Carrasco, E.; Bernal, C.

    2001-01-01

    A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations in which the measurement conditions (and hence the input parameters) change continuously, such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small-field profiles. As a comparison, a high-resolution scan for narrow beams with no iterative process is presented. The model was deployed on networked PCs without any resident software; the only requirements for these PCs were a small temporary Linux partition on the hard disks and a network connection to our server PC. (orig.)

  16. Experimental validation of a rapid Monte Carlo based micro-CT simulator

    International Nuclear Information System (INIS)

    Colijn, A P; Zbijewski, W; Sasov, A; Beekman, F J

    2004-01-01

    We describe a newly developed, accelerated Monte Carlo simulator of a small-animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of the simulated projections, compared to the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, the effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of percent in specific areas of the body and therefore scatter can significantly affect quantitative accuracy in small-animal CT imaging.

  17. The motion of discs and spherical fuel particles in combustion burners based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Granada, E.; Patino, D.; Porteiro, J.; Collazo, J.; Miguez, J.L.; Moran, J.

    2010-01-01

    The position of pellet fuel particles in a burner largely determines their combustion behaviour. This paper addresses the simulated motion of circles and spheres, equivalent to pellets, and their final position in a packed bed confined inside rigid cylindrical walls and subject to a gravitational field. A simplified Monte Carlo statistical technique is described and applied, using the standard Metropolis method to simulate movement. This simplification makes the method easier to understand when applied to solid fuels in granular form, provided that they are only under gravitational forces. Unlike other authors, who contrasted a single parameter, we contrasted three: radial, bulk and local porosities, obtained via Voronoi tessellation. Our simulations reveal a structural order near the walls, which declines towards the centre of the container; no pattern was found in the Voronoi local porosity. Results with this simplified method are in agreement with more complex previously published studies.

  18. In Silico Generation of Peptides by Replica Exchange Monte Carlo: Docking-Based Optimization of Maltose-Binding-Protein Ligands.

    Directory of Open Access Journals (Sweden)

    Anna Russo

    Full Text Available Short peptides can be designed in silico and synthesized through automated techniques, making them advantageous and versatile protein binders. A number of docking-based algorithms allow for a computational screening of peptides as binders. Here we developed ex novo peptides targeting the maltose site of the Maltose Binding Protein, the prototypical system for the study of protein-ligand recognition. We used a Monte Carlo based protocol to computationally evolve a set of octapeptides starting from a polyalanine sequence. We screened the candidate peptides in silico and characterized their binding abilities by surface plasmon resonance, fluorescence and electrospray ionization mass spectrometry assays. These experiments showed the designed binders to recognize their target with micromolar affinity. We finally discuss the obtained results in the light of further improvements in the ex novo optimization of peptide-based binders.
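
    A hedged sketch of the replica-exchange Monte Carlo machinery such a protocol relies on, with a rugged one-dimensional toy landscape standing in for the peptide scoring function; the swap acceptance rule is the standard Metropolis criterion between neighbouring temperatures:

        import numpy as np

        rng = np.random.default_rng(11)

        def energy(x):
            return 0.1 * x**2 + 2.0 * np.cos(2.0 * x)      # rugged toy landscape

        betas = 1.0 / np.array([0.2, 0.5, 1.0, 2.0])       # inverse temperatures, coldest first
        x = rng.normal(size=betas.size)                    # one state per replica

        for sweep in range(5000):
            # Local Metropolis move within each replica.
            for k, beta in enumerate(betas):
                trial = x[k] + rng.normal(scale=0.5)
                if rng.random() < np.exp(-beta * (energy(trial) - energy(x[k]))):
                    x[k] = trial
            # Attempt one swap between a random pair of neighbouring temperatures.
            k = rng.integers(betas.size - 1)
            delta = (betas[k] - betas[k + 1]) * (energy(x[k]) - energy(x[k + 1]))
            if rng.random() < np.exp(delta):               # swap acceptance criterion
                x[k], x[k + 1] = x[k + 1], x[k]

        print("coldest-replica state:", x[0], "energy:", energy(x[0]))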

  19. Depleted Reactor Analysis With MCNP-4B

    International Nuclear Information System (INIS)

    Caner, M.; Silverman, L.; Bettan, M.

    2004-01-01

    Monte Carlo neutronics calculations are mostly done for fresh reactor cores. There is ongoing activity today in the development of Monte Carlo plus burnup code systems, made possible by the fast gains in computer processor speeds. In this work we investigate the use of MCNP-4B for the calculation of a depleted core of the Soreq reactor (IRR-1). The number densities as a function of burnup were taken from WIMS-D/4 cell code calculations. This particular code coupling has been implemented before. The Monte Carlo code MCNP-4B calculates the coupled transport of neutrons and photons for complicated geometries. We have done neutronics calculations of the IRR-1 core with the WIMS and CITATION codes in the past. We have also developed an MCNP model of the IRR-1 standard fuel for a criticality safety calculation of a spent fuel storage pool.

  20. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results using ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library was written to provide an end-user interface to the model.

  1. Methodology for sodium fire vulnerability assessment of sodium cooled fast reactor based on the Monte-Carlo principle

    International Nuclear Information System (INIS)

    Song, Wei; Wu, Yuanyu; Hu, Wenjun; Zuo, Jiaxu

    2015-01-01

    Highlights: • The Monte-Carlo principle coupled with a fire dynamics code is adopted to perform sodium fire vulnerability assessment. • The method can be used to calculate the failure probability of sodium fire scenarios. • A calculation example and results are given to illustrate the feasibility of the methodology. • Some critical parameters and experience are shared. - Abstract: Sodium fire is a typical and distinctive hazard in sodium cooled fast reactors, and it is significant for nuclear safety. In this paper, a method of sodium fire vulnerability assessment based on the Monte-Carlo principle is introduced, which can be used to calculate the probability of every failure mode in sodium fire scenarios. The sodium fire scenario vulnerability assessment of the primary cold trap room of the China Experimental Fast Reactor was then performed to illustrate the feasibility of the methodology. The calculation result of the example shows that the conditional failure probability of the key cable is 23.6% in the sodium fire scenario caused by continuous sodium leakage following isolation device failure, while the wall temperature, the room pressure and the aerosol discharge mass all remain below the safety limits.
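
    A hedged sketch of the Monte Carlo vulnerability logic described above: sample uncertain scenario parameters, evaluate a consequence model (here a trivially simplified surrogate rather than the coupled fire-dynamics code), and count exceedances of a failure criterion:

        import numpy as np

        rng = np.random.default_rng(5)
        N = 200_000

        leak_rate = rng.lognormal(mean=0.0, sigma=0.5, size=N)   # kg/s, hypothetical
        burn_frac = rng.uniform(0.3, 0.9, size=N)                # fraction of sodium burning
        duration = rng.uniform(60.0, 600.0, size=N)              # s, leak duration

        # Toy surrogate: cable temperature rise proportional to released energy.
        cable_temp = 40.0 + 0.5 * leak_rate * burn_frac * duration   # deg C

        CABLE_LIMIT = 180.0                   # assumed cable failure temperature
        p_fail = np.mean(cable_temp > CABLE_LIMIT)
        se = np.sqrt(p_fail * (1.0 - p_fail) / N)
        print(f"conditional cable failure probability = {p_fail:.3f} +/- {se:.3f}")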

  2. Methodology for sodium fire vulnerability assessment of sodium cooled fast reactor based on the Monte-Carlo principle

    Energy Technology Data Exchange (ETDEWEB)

    Song, Wei [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China); Wu, Yuanyu [ITER Organization, Route de Vinon-sur-Verdon, 13115 Saint-Paul-lès-Durance (France); Hu, Wenjun [China Institute of Atomic Energy, P. O. Box 275(34), Beijing (China); Zuo, Jiaxu, E-mail: zuojiaxu@chinansc.cn [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China)

    2015-11-15

    Highlights: • The Monte-Carlo principle coupled with a fire dynamics code is adopted to perform sodium fire vulnerability assessment. • The method can be used to calculate the failure probability of sodium fire scenarios. • A calculation example and results are given to illustrate the feasibility of the methodology. • Some critical parameters and experience are shared. - Abstract: Sodium fire is a typical and distinctive hazard in sodium cooled fast reactors, and it is significant for nuclear safety. In this paper, a method of sodium fire vulnerability assessment based on the Monte-Carlo principle is introduced, which can be used to calculate the probability of every failure mode in sodium fire scenarios. The sodium fire scenario vulnerability assessment of the primary cold trap room of the China Experimental Fast Reactor was then performed to illustrate the feasibility of the methodology. The calculation result of the example shows that the conditional failure probability of the key cable is 23.6% in the sodium fire scenario caused by continuous sodium leakage following isolation device failure, while the wall temperature, the room pressure and the aerosol discharge mass all remain below the safety limits.

  3. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    International Nuclear Information System (INIS)

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-01-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm

  4. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  5. Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine

    International Nuclear Information System (INIS)

    Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.

    2013-01-01

    Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to the leaf motion, having the capability of shaping a maximum square field size of 35 × 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimensions 50 × 50 × 40 cm³ with an array of voxels (4 cm × 0.3 cm × 0.6 cm = 0.72 cm³) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the leaf ends in the closed condition was observed to be 8.6%, which is less than the values reported for other MLCs designed for medical linear

  6. Investigation of p-type depletion doping for InGaN/GaN-based light-emitting diodes

    Science.gov (United States)

    Zhang, Yiping; Zhang, Zi-Hui; Tan, Swee Tiam; Hernandez-Martinez, Pedro Ludwig; Zhu, Binbin; Lu, Shunpeng; Kang, Xue Jun; Sun, Xiao Wei; Demir, Hilmi Volkan

    2017-01-01

    Due to limited hole injection, p-type doping is essential to improve the performance of InGaN/GaN multiple quantum well light-emitting diodes (LEDs). In this work, we propose and demonstrate a depletion-region Mg-doping method. We systematically analyze the effectiveness of different Mg-doping profiles ranging from the electron blocking layer to the active region. Numerical computations show that the Mg-doping decreases the valence band barrier for holes and thus enhances hole transport. The proposed depletion-region Mg-doping approach also increases the barrier height for electrons, which leads to reduced electron overflow, while increasing the hole concentration in the p-GaN layer. Experimentally measured external quantum efficiency indicates that the Mg-doping position is vitally important. Doping in or adjacent to the quantum well degrades the LED performance due to Mg diffusion, which increases the corresponding nonradiative recombination, as is well supported by the measured carrier lifetimes. The experimental results are well reproduced numerically by modifying the nonradiative recombination lifetimes, which further validates the effectiveness of our approach.

  7. Depleted fully monolithic CMOS pixel detectors using a column based readout architecture for the ATLAS Inner Tracker upgrade

    Science.gov (United States)

    Wang, T.; Barbero, M.; Berdalovic, I.; Bespin, C.; Bhat, S.; Breugnon, P.; Caicedo, I.; Cardella, R.; Chen, Z.; Degerli, Y.; Egidos, N.; Godiot, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Krüger, H.; Kugathasan, T.; Hügging, F.; Marin Tobon, C. A.; Moustakas, K.; Pangaud, P.; Schwemling, P.; Pernegger, H.; Pohl, D.-L.; Rozanov, A.; Rymaszewski, P.; Snoeys, W.; Wermes, N.

    2018-03-01

    Depleted monolithic active pixel sensors (DMAPS), which exploit high-voltage and/or high-resistivity add-ons of modern CMOS technologies to achieve substantial depletion in the sensing volume, have proven to offer radiation tolerance meeting the requirements of ATLAS in the high-luminosity LHC era. DMAPS integrating fast readout architectures are currently being developed as promising candidates for the outer pixel layers of the future ATLAS Inner Tracker, which will be installed during the phase II upgrade of ATLAS around the year 2025. In this work, two DMAPS prototype designs, named LF-Monopix and TJ-Monopix, are presented. LF-Monopix was fabricated in the LFoundry 150 nm CMOS technology, and TJ-Monopix has been designed in the TowerJazz 180 nm CMOS technology. Both chips employ the same readout architecture, i.e. the column drain architecture, whereas different sensor implementation concepts are pursued. The paper describes the two prototypes jointly, so that their technical differences and challenges can be addressed in direct comparison. First measurement results for LF-Monopix are also shown, demonstrating for the first time a fully functional fast readout DMAPS prototype implemented in the LFoundry technology.

  8. Deuterium-depleted water

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Steflea, Dumitru; Saros-Rogobete, Irina; Titescu, Gheorghe; Tamaian, Radu

    2001-01-01

    Deuterium-depleted water is water with an isotopic content below 145 ppm D/(D+H), the natural isotopic content of water. Deuterium-depleted water is produced by vacuum distillation in columns equipped with structured packing made from phosphor bronze or stainless steel. The deuterium-depleted water, the production technique and the structured packing are patented by the National Institute of Research - Development for Cryogenics and Isotopic Technologies at Rm. Valcea. Research over the last few years has shown that deuterium-depleted water is a biologically active product that could have many applications in medicine and agriculture. (authors)

  9. Accuracy assessment of a new Monte Carlo based burnup computer code

    International Nuclear Information System (INIS)

    El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.

    2012-01-01

    Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using a VVER-1000 LEU benchmark assembly. ► Differences from the benchmark values were found to be ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at the lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows straightforward and accurate calculations to be performed without having to use the calculated group fluxes for transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library was performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.

  10. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
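
    A hedged sketch of the least-squares Monte Carlo idea the abstract credits with avoiding inner scenarios: simulate risk-factor paths once, then obtain the conditional expected exposure at each date by regression on the current state. The Vasicek rate model, swap payoff and flat hazard rate below are illustrative stand-ins:

        import numpy as np

        rng = np.random.default_rng(9)
        N_PATHS, N_STEPS, DT = 20_000, 40, 0.25          # 10-year horizon, quarterly
        KAPPA, THETA, SIG, R0 = 0.3, 0.03, 0.01, 0.02    # Vasicek short-rate parameters
        K, NOTIONAL = 0.03, 100.0                        # payer-swap fixed rate, notional
        HAZARD, RECOVERY = 0.02, 0.4                     # flat default intensity, recovery

        # Simulate short-rate paths (Euler scheme).
        r = np.full((N_PATHS, N_STEPS + 1), R0)
        for t in range(N_STEPS):
            dw = rng.normal(scale=np.sqrt(DT), size=N_PATHS)
            r[:, t + 1] = r[:, t] + KAPPA * (THETA - r[:, t]) * DT + SIG * dw

        # Realised discounted swap cashflows and path-wise remaining value per date.
        cash = NOTIONAL * (r[:, 1:] - K) * DT                      # float minus fixed
        disc = np.exp(-np.cumsum(r[:, 1:] * DT, axis=1))
        value = (cash * disc)[:, ::-1].cumsum(axis=1)[:, ::-1] / disc

        # LSMC step: expected positive exposure by regression on r_t,
        # instead of a nested "inner scenario" simulation at every date.
        epe = np.empty(N_STEPS)
        for t in range(N_STEPS):
            coef = np.polyfit(r[:, t], value[:, t], deg=3)
            epe[t] = np.maximum(np.polyval(coef, r[:, t]), 0.0).mean()

        # Unilateral CVA = (1 - R) * sum_k EPE(t_k) * marginal default probability.
        times = DT * np.arange(1, N_STEPS + 1)
        pd_marg = np.exp(-HAZARD * (times - DT)) - np.exp(-HAZARD * times)
        print("unilateral CVA ~", (1.0 - RECOVERY) * np.sum(epe * pd_marg))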

  11. Stability of nanocrystalline Ni-based alloys: coupling Monte Carlo and molecular dynamics simulations

    Science.gov (United States)

    Waseda, O.; Goldenstein, H.; Silva, G. F. B. Lenz e.; Neiva, A.; Chantrenne, P.; Morthomas, J.; Perez, M.; Becquart, C. S.; Veiga, R. G. A.

    2017-10-01

    The thermal stability of nanocrystalline Ni due to small additions of Mo or W (up to 1 at%) was investigated in computer simulations by means of a combined Monte Carlo (MC)/molecular dynamics (MD) two-step approach. In the first step, energy-biased on-lattice MC revealed segregation of the alloying elements to grain boundaries. However, the condition for the thermodynamic stability of these nanocrystalline Ni alloys (zero grain boundary energy) was not fulfilled. Subsequently, MD simulations were carried out for up to 0.5 μs at 1000 K. At this temperature, grain growth was hindered for minimum global concentrations of 0.5 at% W and 0.7 at% Mo, thus preserving most of the nanocrystalline structure. This is in clear contrast to a pure Ni model system, for which the transformation into a monocrystal was observed in MD simulations within 0.2 μs at the same temperature. These results suggest that grain boundary segregation of low-solubility alloying elements in low-alloyed systems can produce high-temperature metastable nanocrystalline materials. MD simulations carried out at 1200 K for 1 at% Mo/W showed significant grain boundary migration accompanied by some degree of solute diffusion, providing additional evidence that solute drag mostly contributed to the nanostructure stability observed at the lower temperature.

  12. Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues

    International Nuclear Information System (INIS)

    Fernandez-Varea, Jose M; Carrasco, Pablo; Panettieri, Vanessa; Brualla, Lorenzo

    2007-01-01

    Water/medium stopping-power ratios, s_w,m, have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 × 10 cm². Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of s_w,m calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, s_w,m changes by approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of s_w,m, reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung

  13. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-01-01

    The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  14. CO Depletion: A Microscopic Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cazaux, S. [Faculty of Aerospace Engineering, Delft University of Technology, Delft (Netherlands); Martín-Doménech, R.; Caro, G. M. Muñoz; Díaz, C. González [Centro de Astrobiología (INTA-CSIC), Ctra. de Ajalvir, km 4, Torrejón de Ardoz, E-28850 Madrid (Spain); Chen, Y. J. [Department of Physics, National Central University, Jhongli City, 32054, Taoyuan County, Taiwan (China)

    2017-11-10

    In regions where stars form, variations in density and temperature can cause gas to freeze out onto dust grains, forming ice mantles, which influences the chemical composition of a cloud. The aim of this paper is to understand in detail the depletion of CO onto (and desorption from) interstellar dust grains. Experimental simulations were performed under two different (astrophysically relevant) conditions. In parallel, kinetic Monte Carlo simulations were used to mimic the experimental conditions. In our experiments, CO molecules accrete onto water ice at temperatures below 27 K, with a deposition rate that does not depend on the substrate temperature. During the warm-up phase, the desorption processes do exhibit subtle differences, indicating the presence of weakly bound CO molecules and therefore highlighting a low diffusion efficiency. IR measurements following the ice thickness during the TPD confirm that diffusion occurs at temperatures close to desorption. Applied to astrophysical conditions in a pre-stellar core, the binding energies of CO molecules, ranging between 300 and 850 K, depend on the conditions under which the CO was deposited. Because of this wide range of binding energies, the depletion of CO as a function of A_V is much less important than initially thought. The weakly bound molecules, easily released into the gas phase through evaporation, change the balance between accretion and desorption, resulting in a larger abundance of CO at high extinctions. In addition, weakly bound CO molecules are also more mobile, and this could increase the reactivity within interstellar ices.
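
    A minimal sketch of first-order thermal desorption with a distribution of binding energies, mimicking the picture of weakly and strongly bound CO during a temperature-programmed desorption (TPD) ramp; the attempt frequency and heating rate are typical assumed values, not the paper's fitted parameters:

        import numpy as np

        rng = np.random.default_rng(13)
        N = 50_000
        E_b = rng.uniform(300.0, 850.0, N)      # binding energies in K (range from the abstract)

        NU = 1e12                               # attempt frequency, s^-1 (assumed)
        RAMP = 0.1                              # heating rate, K/s (assumed)
        T0, DT = 10.0, 1.0                      # start temperature (K), time step (s)

        alive = np.ones(N, dtype=bool)
        for step in range(2000):
            T = T0 + RAMP * step * DT
            # First-order desorption probability per molecule in this time step.
            p = 1.0 - np.exp(-NU * np.exp(-E_b / T) * DT)
            alive &= ~(alive & (rng.random(N) < p))
            if step % 200 == 0:
                print(f"T = {T:5.1f} K   fraction still adsorbed = {alive.mean():.3f}")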

  15. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy

    International Nuclear Information System (INIS)

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-01-01

    We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC was 12.30 to 16.00 times faster than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)

  16. The enhancements and testing for the MCNPX depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; Hendricks, J. S.; Anghaie, S.

    2008-01-01

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model true system physics and better track the evolution of the temporal nuclide inventory by simulating the actual physical process. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte Carlo-linked depletion capability in a single Monte Carlo code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. We describe here the depletion methodology, dating from the original linking of MONTEBURNS and MCNP to the first public release of the integrated capability (MCNPX 2.6.B, June 2006), that has been reported previously. We then further detail the many depletion capability enhancements since then, leading to the present capability. The H.B. Robinson benchmark calculation results are also reported. The new MCNPX depletion capability enhancements include: (1) allowing the modeling of as large a system as computer memory capacity permits; (2) tracking every fission product available in ENDF/B-VII.0; (3) enabling depletion in repeated-structures geometries such as repeated arrays of fuel pins; (4) including metastable isotopes in burnup; and (5) manually changing the concentrations of key isotopes during different time steps to simulate changing reactor control conditions, such as dilution of poisons to maintain criticality during burnup. These enhancements allow better detail to model the true system physics and also improve the robustness of the capability. The H.B. Robinson benchmark calculation was completed in order to determine the accuracy of the depletion solution. Temporal nuclide computations of key actinides and fission products are compared to the results of other
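
    For orientation, the depletion step that Monte Carlo-linked burnup codes perform between transport solutions amounts to solving dN/dt = AN for the nuclide vector over a time step. A minimal sketch with a matrix exponential and an invented three-nuclide chain in place of the full CINDER90 system:

        import numpy as np
        from scipy.linalg import expm

        PHI = 1e14                 # assumed one-group flux, n/cm^2/s
        BARN = 1e-24               # cm^2

        # Chain: fuel --(capture)--> product --(beta decay)--> daughter
        sigma_c = 10.0 * BARN      # hypothetical capture cross section
        lam = 1e-5                 # hypothetical decay constant, 1/s

        # Depletion (Bateman) matrix: diagonal terms lose, off-diagonal terms gain.
        A = np.array([[-sigma_c * PHI, 0.0, 0.0],
                      [ sigma_c * PHI, -lam, 0.0],
                      [ 0.0,            lam, 0.0]])

        N0 = np.array([1e22, 0.0, 0.0])      # initial number densities, 1/cm^3
        dt = 30 * 86400.0                    # one 30-day depletion step, s

        N = expm(A * dt) @ N0
        print("densities after one step:", N)
        print("atoms conserved:", np.isclose(N.sum(), N0.sum()))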

  17. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or “tool of last resort” and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  18. Monte-Carlo Modeling of Parameters of a Subcritical Cascade Reactor Based on MSBR and LMFBR Technologies

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H

    2001-01-01

    Parameters are investigated for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1} and in the fast booster zone is 5.12·10^{15} cm^{-2}·s^{-1} at k_{eff}=0.98 and a proton beam current I=2.1 mA.

  19. Analysis and Assessment of Operation Risk for Hybrid AC/DC Power System based on the Monte Carlo Method

    Science.gov (United States)

    Hu, Xiaojing; Li, Qiang; Zhang, Hao; Guo, Ziming; Zhao, Kun; Li, Xinpeng

    2018-06-01

    Based on the Monte Carlo method, an improved risk assessment method for a hybrid AC/DC power system with a VSC station is proposed, considering the operation status of generators, converter stations, AC lines and DC lines. According to the sequential AC/DC power flow algorithm, node voltages and line active powers are solved, and the operation risk indices for node voltage over-limit and line active power over-limit are then calculated. Finally, an improved two-area IEEE RTS-96 system is taken as a case to analyze and assess its operation risk. The results show that the proposed model and method can intuitively and directly reflect the weak nodes and weak lines of the system, which can provide a reference for the dispatching department.
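
    A hedged sketch of the Monte Carlo risk-assessment loop the abstract outlines: sample component availabilities, evaluate a (here drastically simplified) network model, and accumulate over-limit and shortfall indices. The two-line toy grid and outage rates are invented, not the RTS-96 case:

        import numpy as np

        rng = np.random.default_rng(17)
        N = 100_000

        P_LINE_OUT = 0.02                      # forced-outage rate per line (assumed)
        P_GEN_OUT = 0.05                       # generator forced-outage rate (assumed)
        LINE_RATING = 100.0                    # MW per line (assumed)

        overload = shortfall = 0
        for _ in range(N):
            load = rng.normal(150.0, 30.0)               # MW, uncertain demand
            lines_up = rng.random(2) >= P_LINE_OUT       # two identical parallel lines
            gen_up = rng.random() >= P_GEN_OUT
            if not gen_up or not lines_up.any():
                shortfall += 1                           # no source or no path
                continue
            flow = load / lines_up.sum()                 # equal split on identical lines
            if flow > LINE_RATING:
                overload += 1                            # line active power over-limit

        print(f"P(line over-limit) = {overload / N:.4f}")
        print(f"P(load shortfall)  = {shortfall / N:.4f}")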

  20. Analysis and Assessment of Operation Risk for Hybrid AC/DC Power System based on the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Hu Xiaojing

    2018-01-01

    Full Text Available Based on the Monte Carlo method, an improved risk assessment method for a hybrid AC/DC power system with a VSC station is proposed, considering the operation status of generators, converter stations, AC lines and DC lines. According to the sequential AC/DC power flow algorithm, node voltages and line active powers are solved, and the operation risk indices for node voltage over-limit and line active power over-limit are then calculated. Finally, an improved two-area IEEE RTS-96 system is taken as a case to analyze and assess its operation risk. The results show that the proposed model and method can intuitively and directly reflect the weak nodes and weak lines of the system, which can provide a reference for the dispatching department.

  1. Monte-Carlo modeling of parameters of a subcritical cascade reactor based on MSBR and LMFBR technologies

    International Nuclear Information System (INIS)

    Bznuni, S.A.; Zhamkochyan, V.M.; Khudaverdyan, A.G.; Barashenkov, V.S.; Sosnin, A.N.; Polanski, A.

    2001-01-01

    Parameters are investigated for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_eff = 0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^14 cm^-2·s^-1 and in the fast booster zone is 5.12 × 10^15 cm^-2·s^-1 at k_eff = 0.98 and a proton beam current I = 2.1 mA. (author)

  2. Monte Carlo investigation of design aspects of an indigenously developed 120 Ci ⁶⁰Co based industrial radiography device

    International Nuclear Information System (INIS)

    Palam Selvam, T.; Vishwakarma, R.S.; Sahoo, Dhiren K.; Sharma, Manoj K.; Srivastava, Piyush

    2016-01-01

    Industrial radiography is an indispensable, versatile and well-established non-destructive testing (NDT) technique. Industrial radiography is carried out using industrial gamma radiography exposure devices (IGRED) housing ⁶⁰Co and ¹⁹²Ir radioactive sources, and industrial X-ray machines/accelerators. An IGRED is an assembly of components which includes the source housing, exposure mechanism, source drive system, pigtail, and source conduit. In the present study, we investigated the shielding design aspects of a 120 Ci ⁶⁰Co-based IGRED using Monte Carlo methods (MCNP version 3.1). The design details were provided by the Board of Radiation and Isotope Technology (BRIT). As the objective is to finalize a suitable design for the device, we also included additional designs. The work also included measurements around the device using TLDs

  3. Introducing ab initio based neural networks for transition-rate prediction in kinetic Monte Carlo simulations

    Science.gov (United States)

    Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär

    2017-02-01

    The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments with respect to a previous interatomic-potential model, especially concerning the experiment time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
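
    A hedged sketch of how network-predicted barriers plug into a rejection-free (residence-time) kinetic Monte Carlo step; the tiny fixed-weight MLP and the environment descriptor are invented stand-ins for a network trained on DFT migration barriers:

        import numpy as np

        rng = np.random.default_rng(19)
        KB, T, NU0 = 8.617e-5, 600.0, 1e13        # eV/K, temperature (K), attempt frequency (1/s)

        # Placeholder MLP: environment descriptor (8 features) -> migration barrier (eV).
        W1, b1 = rng.normal(scale=0.3, size=(16, 8)), np.zeros(16)
        W2, b2 = rng.normal(scale=0.3, size=(1, 16)), np.array([0.8])

        def predict_barrier(env):
            h = np.tanh(W1 @ env + b1)
            return float(np.clip(W2 @ h + b2, 0.1, None))   # keep barriers positive

        def kmc_step(envs, t):
            """One residence-time (BKL) step over the candidate vacancy jumps."""
            rates = np.array([NU0 * np.exp(-predict_barrier(e) / (KB * T)) for e in envs])
            total = rates.sum()
            chosen = rng.choice(len(rates), p=rates / total)   # pick a jump ~ its rate
            t += -np.log(rng.random()) / total                 # advance physical time
            return chosen, t

        envs = rng.normal(size=(6, 8))            # six candidate jumps this step
        jump, t = kmc_step(envs, 0.0)
        print(f"selected jump {jump}, elapsed time {t:.3e} s")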

  4. Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, Yubin; Yuan, Zhen; Jiang, Huabei

    2016-01-01

    Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors' two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, and ex vivo and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media, and using high-absorption media for which the diffusion approximation is not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed from the property profiles, where the authors found that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes the errors ranged from 0 for relatively small targets to 26% for relatively large targets, whereas for the recovered optical properties the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular, their

  5. Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, C. H.; Han, B. S.; Park, H. J.; Park, D. K. [Seoul National Univ., Seoul (Korea, Republic of)

    2007-01-15

    The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module), which will be installed in ITER (International Thermonuclear Experimental Reactor). The project is intended to analyze the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, we conduct numerical experiments for analyzing the neutronic design of the Korean HCML TBM and the DEMO fusion blanket, and for improving their nuclear performance. The results of the numerical experiments performed in this project will be further utilized for design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations evaluating the TBR (Tritium Breeding Ratio) and EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in the ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat to 0.02% of the initial amount 10 years after plasma shutdown.

  6. Monte Carlo-based dose reconstruction in a rat model for scattered ionizing radiation investigations.

    Science.gov (United States)

    Kirkby, Charles; Ghasroddashti, Esmaeel; Kovalchuk, Anna; Kolb, Bryan; Kovalchuk, Olga

    2013-09-01

    In radiation biology, rats are often irradiated, but the precise dose distributions are often lacking, particularly in areas that receive scatter radiation. We used a non-dedicated set of resources to calculate detailed dose distributions, including doses to peripheral organs well outside of the primary field, in common rat exposure settings. We conducted a detailed dose reconstruction in a rat through an analog of the conventional human treatment planning process. The process consisted of: (i) characterizing the source properties of an X-ray irradiator system, (ii) acquiring a computed tomography (CT) scan of a rat model, and (iii) using a Monte Carlo (MC) dose calculation engine to generate the dose distribution within the rat model. We considered cranial and liver irradiation scenarios in which the rest of the body was protected by a lead shield. The organs of interest were the brain, liver and gonads. The study also included paired scenarios in which the dose to an adjacent, shielded rat was determined as a potential control for the analysis of bystander effects. We established the precise doses and dose distributions delivered to the peripheral organs in single and paired rats. Mean doses to non-targeted organs in irradiated rats ranged from 0.03% to 0.1% of the reference platform dose. Mean doses to the adjacent rat's peripheral organs were consistent to within 10% of those of the directly irradiated rat. This work provided details of dose distributions in rat models under common irradiation conditions and established an effective scenario for delivering only scattered radiation consistent with that in a directly irradiated rat.

  7. GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Heymann, Frank; Siebenmorgen, Ralf

    2012-01-01

    A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon-counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at arbitrary frequencies and viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate feature in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fitted by the AGN model and a cirrus component to account for the far-infrared emission.
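
    For reference, the Henyey-Greenstein phase function admits a closed-form inversion for the scattering angle, which is what makes it convenient in MC codes. A minimal sketch of the standard textbook sampling formula (not code from the paper) is:

```python
import random

def sample_hg_cos_theta(g, rng=random):
    """Draw cos(theta) from the Henyey-Greenstein phase function.

    g is the scattering asymmetry parameter in (-1, 1); g = 0 reduces
    to isotropic scattering.
    """
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    # Closed-form inverse of the HG cumulative distribution.
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)
```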

  8. Ozone Depletion Caused by Rocket Engine Emissions: A Fundamental Limit on the Scale and Viability of Space-Based Geoengineering Schemes

    Science.gov (United States)

    Ross, M. N.; Toohey, D.

    2008-12-01

    Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~1 kiloton of payload is launched into Earth orbit annually by the global space industry. Stratospheric ozone depletion from present-day launches is a small fraction of the ~4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order-of-magnitude increases in the amount of material launched into Earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate the global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~decades) climate control and the loss of short-term (~years) deep ozone loss. (3) The trade can be properly evaluated only if our

  9. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    International Nuclear Information System (INIS)

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-01-01

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), the average range difference (ARD) and the average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to be generally around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than the currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  10. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), the average range difference (ARD) and the average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to be generally around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than the currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  11. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    Energy Technology Data Exchange (ETDEWEB)

    Piao, J [PLA General Hospital, Beijing (China); PLA 302 Hospital, Beijing (China); Xu, S [PLA General Hospital, Beijing (China); Tsinghua University, Beijing (China); Wu, Z; Liu, Y [Tsinghua University, Beijing (China); Li, Y [Beihang University, Beijing (China); Qu, B [PLA General Hospital, Beijing (China); Duan, X [PLA 302 Hospital, Beijing (China)

    2016-06-15

    Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system, with the aim of developing a third-party tool for dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver. 4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). A γ analysis with combined 3 mm/3% criteria was introduced to quantitatively evaluate the differences in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM; a reasonably accurate Monte Carlo beam model was thus established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs for lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was poor. It is feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can serve as a reference third-party tool, playing an important role in the dose verification of patient plans. This work was supported in part by the grant
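
    The γ analysis used here combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D global-gamma sketch with the 3 mm/3% criteria (a simplified exhaustive search over the grid, not the authors' implementation) could look like:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dta_mm=3.0, dd_frac=0.03):
    """Simplified global 1D gamma analysis (e.g. 3 mm / 3% criteria).

    ref_dose, eval_dose: numpy arrays of dose samples on the same grid.
    positions: numpy array of coordinates in mm.
    Returns the fraction of reference points with gamma <= 1 (passing rate).
    """
    dmax = ref_dose.max()                  # global dose normalization
    gammas = np.empty(ref_dose.size)
    for i, (x, d) in enumerate(zip(positions, ref_dose)):
        dist2 = ((positions - x) / dta_mm) ** 2
        dose2 = ((eval_dose - d) / (dd_frac * dmax)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return np.mean(gammas <= 1.0)
```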

  12. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)


  13. Kinetics of depletion interactions

    NARCIS (Netherlands)

    Vliegenthart, G.A.; Schoot, van der P.P.A.M.

    2003-01-01

    Depletion interactions between colloidal particles dispersed in a fluid medium are effective interactions induced by the presence of other types of colloid. They are not instantaneous but build up in time. We show by means of Brownian dynamics simulations that the static (mean-field) depletion force

  14. Management of depleted uranium

    International Nuclear Information System (INIS)

    2001-01-01

    Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)

  15. Advanced TEM Characterization for the Development of 28-14nm nodes based on fully-depleted Silicon-on-Insulator Technology

    International Nuclear Information System (INIS)

    Servanton, G; Clement, L; Lepinay, K; Lorut, F; Pantel, R; Pofelski, A; Bicais, N

    2013-01-01

    The growing demand for wireless multimedia applications (smartphones, tablets, digital cameras) requires the development of devices combining high-speed performance and low power consumption. A recent technological breakthrough offering a good compromise between these two antagonistic requirements has been proposed: the 28-14nm CMOS transistor generations based on fully-depleted Silicon-on-Insulator (FD-SOI) technology, fabricated on a thin Si film of 5-6nm. In this paper, we review the TEM characterization challenges that are essential for the development of extremely power-efficient System on Chip (SoC)

  16. Carlos Romero

    Directory of Open Access Journals (Sweden)

    2008-05-01

    Full Text Available Interview (in Spanish). Carlos Romero, a political scientist, is a professor and researcher at the Institute of Political Studies of the Faculty of Legal and Political Sciences of the Universidad Central de Venezuela, where he has served as coordinator of the doctoral program, deputy director, and director of the Center for Postgraduate Studies. He has published eight books on political analysis and international relations, one of the most recent being Jugando con el globo. La política exter...

  17. Monte Carlo-based development of a shield and total background estimation for the COBRA experiment

    International Nuclear Information System (INIS)

    Heidrich, Nadine

    2014-11-01

    The COBRA experiment aims at measuring the neutrinoless double beta decay and thus at determining the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be of the order of 10^-3 counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13824 CdZnTe detectors of 6 cm^3 each. For the shield development, single-layer and multi-layer shields were investigated and a design was optimized with respect to high-energy muon-induced neutrons. The best design was found to be the combination of 10 cm of boron-doped polyethylene as the outermost layer, 20 cm of lead, and 10 cm of copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2·10^-6 counts/kg/keV/yr. Additionally, the shield, with a thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources, including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contribution to the total background rate from the individual detector parts and background sources was investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles, amounting to 99% in total. Regarding surface events a contribution of 26

  18. Monte Carlo-based development of a shield and total background estimation for the COBRA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Nadine

    2014-11-15

    The COBRA experiment aims at measuring the neutrinoless double beta decay and thus at determining the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be of the order of 10^-3 counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13824 CdZnTe detectors of 6 cm^3 each. For the shield development, single-layer and multi-layer shields were investigated and a design was optimized with respect to high-energy muon-induced neutrons. The best design was found to be the combination of 10 cm of boron-doped polyethylene as the outermost layer, 20 cm of lead, and 10 cm of copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2·10^-6 counts/kg/keV/yr. Additionally, the shield, with a thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources, including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contribution to the total background rate from the individual detector parts and background sources was investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles, amounting to 99% in total. Regarding surface events a

  19. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations

    International Nuclear Information System (INIS)

    Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2009-01-01

    The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and a filtration description that are generated wholly from measured values and can be used in place of proprietary manufacturer data for scanner-specific MDCT MC simulations. The required measurements include the half-value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum whose calculated HVLs are approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: one using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme, and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations against measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each available kVp and bowtie filter combination. CTDI experiments were performed for both head (16 cm diameter) and body (32 cm diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
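
    The core of step (a) is finding the thickness of aluminum that halves the transmitted signal for a candidate spectrum. A minimal sketch is shown below; it approximates exposure by energy fluence, which is a simplification (the full method would weight by air-kerma response), and all inputs are caller-supplied tabulated values, not data from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def hvl_mm_al(energies_kev, fluence, mu_al_per_mm):
    """Half-value layer (mm Al) of a candidate spectrum.

    energies_kev: photon energies; fluence: relative spectrum weights;
    mu_al_per_mm: linear attenuation coefficients of Al at those energies.
    Exposure is approximated here by energy fluence (a simplification).
    """
    w = fluence * energies_kev
    def transmission_minus_half(t_mm):
        return np.sum(w * np.exp(-mu_al_per_mm * t_mm)) / np.sum(w) - 0.5
    # Root-find the thickness where transmission drops to 50%.
    return brentq(transmission_minus_half, 1e-3, 50.0)
```

    An equivalent spectrum would then be obtained by adjusting the free parameters of a candidate spectrum until the computed HVL matches the measured HVL1 (and, for the two-HVL variant, HVL2 as well).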

  20. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered-subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128^3 or 256^3). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128^3 voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
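
    The reconstruction itself follows the standard OSEM update. A minimal dense-matrix sketch is given below; in the MC-based variant described above, the forward and back projections through the system matrix would instead be evaluated by GPU photon tracking. The system matrix and counts are caller-supplied, hypothetical inputs.

```python
import numpy as np

def osem(A, counts, n_subsets=10, n_iters=6):
    """Ordered-subset EM reconstruction with a precomputed system matrix.

    A: (n_bins, n_voxels) system matrix mapping activity to projections.
    counts: measured projection data of length n_bins.
    """
    x = np.ones(A.shape[1])                       # uniform initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:
            As = A[idx]
            proj = As @ x                          # forward projection
            ratio = counts[idx] / np.maximum(proj, 1e-12)
            sens = np.maximum(As.T @ np.ones(len(idx)), 1e-12)
            x *= (As.T @ ratio) / sens             # multiplicative update
    return x
```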

  1. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    Science.gov (United States)

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra than existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons according to the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows the two sources to share the free parameters of the filter shape and to be related to each other through the photon interactions in the filter. We introduced two additional parameters for the primary photon source to describe the particle fluence in penumbral regions. All parameters are optimized against dose curves calculated in water using a pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of beam characteristics and dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and the reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the depth-dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth-dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4

  2. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
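
    The essence of Girsanov-based importance sampling is to simulate under a controlled (drift-shifted) measure and reweight each realization by the Radon-Nikodym derivative. A minimal discrete-time sketch for a first-passage probability is shown below; the constant control u is a hand-picked assumed value, not one of the optimized controls developed in the paper.

```python
import numpy as np

def first_passage_prob_is(n_paths=20000, n_steps=100, dt=0.01,
                          barrier=2.5, u=0.8, seed=0):
    """Importance-sampling estimate of P(max_t X_t > barrier) for a
    Brownian path X, simulated with an added constant drift u (a
    discrete-time analogue of Girsanov's transformation). Each hit is
    weighted by dP/dQ accumulated up to the hitting time.
    """
    rng = np.random.default_rng(seed)
    sq_dt = np.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, log_lr = 0.0, 0.0
        for _ in range(n_steps):
            dw = rng.normal(0.0, sq_dt)
            x += u * dt + dw                       # controlled dynamics
            log_lr += -u * dw - 0.5 * u * u * dt   # log dP/dQ increment
            if x > barrier:
                total += np.exp(log_lr)            # weighted rare-event hit
                break
    return total / n_paths

print(first_passage_prob_is())
```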

  3. Monte Carlo simulation of explosive detection system based on a Deuterium-Deuterium (D-D) neutron generator.

    Science.gov (United States)

    Bergaoui, K; Reguigui, N; Gary, C K; Brown, C; Cremer, J T; Vainionpaa, J H; Piestrup, M A

    2014-12-01

    An explosive detection system based on a Deuterium-Deuterium (D-D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting the prompt gamma emission (10.82 MeV) following radiative neutron capture by 14N nuclei. The explosive detection system was built around a fully high-voltage-shielded, axial D-D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10^10 fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma-ray shielding, respectively. The shape and thickness of the moderators and shields were optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, the simulation of the response functions of NaI, BGO, and LaBr3-based γ-ray detectors to different explosives is described. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. DInSAR-Based Detection of Land Subsidence and Correlation with Groundwater Depletion in Konya Plain, Turkey

    Directory of Open Access Journals (Sweden)

    Fabiana Caló

    2017-01-01

    Full Text Available In areas where groundwater overexploitation occurs, land subsidence triggered by aquifer compaction is observed, resulting in high socio-economic impacts for the affected communities. In this paper, we focus on the Konya region, one of the leading economic centers in the agricultural and industrial sectors in Turkey. We present a multi-source data approach aimed at investigating the complex and fragile environment of this area, which is heavily affected by groundwater drawdown and ground subsidence. In particular, in order to analyze the spatial and temporal pattern of the subsidence process, we use the Small BAseline Subset DInSAR technique to process two datasets of ENVISAT SAR images spanning the 2002–2010 period. The produced ground deformation maps and associated time series allow us to detect a wide land subsidence extending for about 1200 km² and to measure vertical displacements reaching up to 10 cm in the observed time interval. DInSAR results, complemented with climatic, stratigraphic and piezometric data as well as with land-cover change information, allow us to give more insight into the impact of climate change and human activities on groundwater resource depletion and land subsidence.

  5. A calculational procedure for neutronic and depletion analysis of Molten-Salt reactors based on SCALE6/TRITON

    International Nuclear Information System (INIS)

    Sheu, R.J.; Chang, J.S.; Liu, Y.-W. H.

    2011-01-01

    Molten-Salt Reactors (MSRs) represent one of the selected categories in the GEN-IV program. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which makes online refueling and salt processing possible. However, this operating characteristic also complicates the modeling and simulation of reactor core behaviour with conventional neutronic codes. The TRITON sequence in the SCALE6 code system has been designed to provide the combined capabilities of problem-dependent cross-section processing and rigorous treatment of neutron transport, coupled with ORIGEN-S depletion calculations. In order to accommodate the simulation of a dynamic refueling and processing scheme, an in-house program, REFRESH, together with a run script were developed to carry out a series of stepwise TRITON calculations, making it easier to analyze the neutronic properties and performance of an MSR core design. As a demonstration and cross-check, we have applied this method to re-examine the conceptual design of the Molten Salt Actinide Recycler & Transmuter (MOSART). This paper summarizes the development of the method and preliminary results of its application to MOSART. (author)
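
    The stepwise scheme can be pictured as a loop that alternates depletion steps with composition adjustments for online salt processing and refueling. The toy sketch below stands in for the TRITON/REFRESH machinery with a one-nuclide analytic burnup step; the cross section, flux, removal fraction, and feed rate are all assumed illustrative values.

```python
import math

# Toy stand-in for one depletion step: burn 235U in a constant one-group
# flux and accumulate a lumped fission product (all values assumed).
SIGMA_F = 585e-24   # cm^2, approximate thermal fission cross section of 235U
FLUX = 3e13         # n/cm^2/s, assumed core-average flux
FP_REMOVAL = 0.99   # lumped fission-product fraction stripped per step

def depletion_step(n_u235, n_fp, seconds):
    burned = n_u235 * (1.0 - math.exp(-SIGMA_F * FLUX * seconds))
    return n_u235 - burned, n_fp + burned

def run_cycle(n_u235, n_steps=12, days_per_step=30.0, feed_per_step=0.0):
    n_fp = 0.0
    for _ in range(n_steps):
        n_u235, n_fp = depletion_step(n_u235, n_fp, days_per_step * 86400.0)
        n_fp *= (1.0 - FP_REMOVAL)   # online salt processing between steps
        n_u235 += feed_per_step      # online refueling (atoms/cm^3 per step)
    return n_u235, n_fp

print(run_cycle(1.0e20, feed_per_step=1.0e17))
```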

  6. Halo Star Lithium Depletion

    International Nuclear Information System (INIS)

    Pinsonneault, M. H.; Walker, T. P.; Steigman, G.; Narayanan, Vijay K.

    1999-01-01

    The depletion of lithium during the pre-main-sequence and main-sequence phases of stellar evolution plays a crucial role in the comparison of the predictions of big bang nucleosynthesis with the abundances observed in halo stars. Previous work has indicated a wide range of possible depletion factors, ranging from minimal in standard (nonrotating) stellar models to as much as an order of magnitude in models that include rotational mixing. Recent progress in the study of the angular momentum evolution of low-mass stars permits the construction of theoretical models capable of reproducing the angular momentum evolution of low-mass open cluster stars. The distribution of initial angular momenta can be inferred from stellar rotation data in young open clusters. In this paper we report on the application of these models to the study of lithium depletion in main-sequence halo stars. A range of initial angular momenta produces a range of lithium depletion factors on the main sequence. Using the distribution of initial conditions inferred from young open clusters leads to a well-defined halo lithium plateau with modest scatter and a small population of outliers. The mass-dependent angular momentum loss law inferred from open cluster studies produces a nearly flat plateau, unlike previous models that exhibited a downward curvature for hotter temperatures in the 7Li-Teff plane. The overall depletion factor for the plateau stars is sensitive primarily to the solar initial angular momentum used in the calibration for the mixing diffusion coefficients. Uncertainties remain in the treatment of the internal angular momentum transport in the models, and the potential impact of these uncertainties on our results is discussed. The 6Li/7Li depletion ratio is also examined. We find that the dispersion in the plateau and the 6Li/7Li depletion ratio scale with the absolute 7Li depletion in the plateau, and we use observational data to set bounds on the 7Li depletion in main-sequence halo

  7. CRDIAC: Coupled Reactor Depletion Instrument with Automated Control

    International Nuclear Information System (INIS)

    Logan, Steven K.

    2012-01-01

    When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics modeling codes model how particles interact in the system but do not model this over time, so another code is used in conjunction with the nuclear physics code to accomplish this. In our work, the Monte Carlo N-Particle (MCNP) code and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen: MCNP produces the reaction rates of the different isotopes present, and MRTAU uses cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. Between these two codes, the information must be altered and edited for use; for this, a Python 2.7 script was developed to aid the user in getting the information into the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Control (CRDIAC). As is the case for any newly developed methodology for modeling physical phenomena, CRDIAC needed to be verified against similar methodology and validated against experimental data, in our case from AFIP-3. AFIP-3 was a reduced-enrichment plate-type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in the concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for end-of-life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, verifying and validating CRDIAC. CRDIAC provides an alternative to ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its depletion
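
    Conceptually, the depletion half of such a coupling solves dN/dt = A·N between transport solutions, with the matrix A assembled from decay constants and flux-weighted one-group cross sections returned by the transport code. A minimal two-nuclide sketch using a matrix exponential is given below; the cross sections and flux are illustrative assumed numbers, and this is not MRTAU's actual solver.

```python
import numpy as np
from scipy.linalg import expm

# Assumed one-group data (illustrative, not from the paper).
SIGMA_A_U235 = 680e-24   # cm^2, absorption cross section of 235U
SIGMA_A_U238 = 2.7e-24   # cm^2, absorption cross section of 238U
FLUX = 3e13              # n/cm^2/s, flux from a hypothetical MCNP tally

def depletion_matrix():
    # Toy chain: each nuclide is only destroyed by absorption, so the
    # transmutation matrix is diagonal.
    return np.diag([-SIGMA_A_U235 * FLUX, -SIGMA_A_U238 * FLUX])

def deplete(n0, days):
    """Advance number densities over one depletion step via exp(A t)."""
    return expm(depletion_matrix() * days * 86400.0) @ n0

n_end = deplete(np.array([1.0e21, 4.0e22]), 30.0)  # atoms/cm^3 after 30 days
print(n_end)
```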

  8. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: Convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  9. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  10. Dose optimization based on linear programming implemented in a system for treatment planning in Monte Carlo

    International Nuclear Information System (INIS)

    Ureba, A.; Palma, B. A.; Leal, A.

    2011-01-01

    To develop a more time-efficient optimization method, based on linear programming and designed to implement a multi-objective penalty function, which also permits a simultaneous integrated boost solution considering two target volumes simultaneously.

  11. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
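
    A minimal sketch of the resulting sampler is a random-walk Metropolis algorithm whose likelihood calls the fast surrogate, with the surrogate's modeling error folded into the data variance. This simple Gaussian treatment is in the spirit of, but not identical to, the probabilistic error model described above; the surrogate itself is a caller-supplied callable.

```python
import numpy as np

def metropolis(surrogate, data, sigma_data, sigma_model,
               n_samples, step, m0, seed=0):
    """Random-walk Metropolis sampling with a fast surrogate forward model.

    surrogate: callable mapping a model vector to predicted data.
    The modeling error enters by adding sigma_model**2 to the data variance.
    """
    rng = np.random.default_rng(seed)
    var = sigma_data ** 2 + sigma_model ** 2

    def log_like(m):
        r = surrogate(m) - data
        return -0.5 * np.sum(r * r) / var

    m, ll = m0.copy(), log_like(m0)
    chain = []
    for _ in range(n_samples):
        prop = m + step * rng.standard_normal(m.size)  # random-walk proposal
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:        # accept/reject
            m, ll = prop, ll_prop
        chain.append(m.copy())
    return np.array(chain)
```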

  12. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    Science.gov (United States)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of the warranty's marketing potential, the manufacturer needs to master techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation to predict the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower costs and increase profit.
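
    As an illustration of the final step, the sketch below estimates the expected per-unit cost of a combined free-replacement/pro-rata policy from simulated Weibull times to failure. It considers only a single failure per unit for simplicity, and the policy windows and Weibull parameters are made-up values standing in for the network-predicted reliability characteristics.

```python
import numpy as np

def expected_warranty_cost(shape, scale, w_free, w_total, unit_cost,
                           n_sims=100000, seed=0):
    """Monte Carlo estimate of per-unit warranty cost for a combined
    free-replacement / pro-rata policy (single failure considered)."""
    rng = np.random.default_rng(seed)
    t = scale * rng.weibull(shape, n_sims)    # simulated times to failure
    cost = np.zeros(n_sims)
    cost[t < w_free] = unit_cost              # free replacement window
    pro_rata = (t >= w_free) & (t < w_total)  # linear rebate window
    cost[pro_rata] = unit_cost * (w_total - t[pro_rata]) / (w_total - w_free)
    return cost.mean()                        # failures past w_total cost 0

# Example: bulbs with assumed Weibull(2.0, 1500 h) lifetimes, free
# replacement for 500 h, pro-rata rebate up to 1000 h, unit cost 1.0.
print(expected_warranty_cost(2.0, 1500.0, 500.0, 1000.0, 1.0))
```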

  13. Abstract ID: 240 A probabilistic-based nuclear reaction model for Monte Carlo ion transport in particle therapy.

    Science.gov (United States)

    Maria Jose, Gonzalez Torres; Jürgen, Henniger

    2018-01-01

    In order to expand the Monte Carlo transport program AMOS to particle therapy applications, an ion module is being developed in the radiation physics group (ASP) at TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of elastic scattering is based on the Binary Collision Approximation and that of inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic model developed in the group. The model uses probability density functions to sample the occurrence of a nuclear reaction, given the initial energy of the projectile particle, as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached, and the nuclear reaction is then simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017.
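
    The sampling step described above can be illustrated with inverse-CDF sampling from a tabulated density over reaction energies. The table below is an arbitrary stand-in, since the actual AMOS distributions are not given here.

```python
import numpy as np

def sample_reaction_energy(e_grid, pdf, n, rng=None):
    """Sample the energy at which a nuclear reaction occurs from a
    tabulated probability density, via inverse-CDF sampling."""
    rng = rng or np.random.default_rng()
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]                       # normalize to a proper CDF
    return np.interp(rng.random(n), cdf, e_grid)

# Example with an assumed, made-up density over 0-200 MeV.
e = np.linspace(0.0, 200.0, 401)
pdf = e * np.exp(-e / 60.0)                   # arbitrary illustrative shape
energies = sample_reaction_energy(e, pdf, 5)
print(energies)
```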

  14. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    International Nuclear Information System (INIS)

    Shypailo, R J; Ellis, K J

    2011-01-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K into actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo N-Particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders, representing ages 3.5 to 18.5 years and body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical to improving the accuracy of the WBC calibration.

  15. Monte-Carlo Simulation and Automated Test Bench for Developing a Multichannel NIR-Based Vital-Signs Monitor.

    Science.gov (United States)

    Bruser, Christoph; Strutz, Nils; Winter, Stefan; Leonhardt, Steffen; Walter, Marian

    2015-06-01

    Unobtrusive, long-term monitoring of cardiac (and respiratory) rhythms using only non-invasive vibration sensors mounted in beds promises to unlock new applications in home and low-acuity monitoring. This paper presents a novel concept for such a system based on an array of near-infrared (NIR) sensors placed underneath a regular bed mattress. We focus on modeling and analyzing the underlying measurement principle with the help of a 2D model of a polyurethane foam mattress and Monte Carlo simulations of the opto-mechanical interaction responsible for signal genesis. Furthermore, a test rig to automatically and repeatably impress mechanical vibrations onto a mattress is introduced and used to identify the properties of a prototype implementation of the proposed measurement principle. Results show that NIR-based sensing is capable of registering minuscule deformations of the mattress with high spatial specificity. As a final outlook, proof-of-concept measurements with the sensor array are presented, demonstrating that cardiorespiratory movements of the body can be detected and that automatic heart rate estimation at competitive error levels is feasible with the proposed approach.

  16. Advanced Monte Carlo procedure for the IFMIF d-Li neutron source term based on evaluated cross section data

    International Nuclear Information System (INIS)

    Simakov, S.P.; Fischer, U.; Moellendorff, U. von; Schmuck, I.; Konobeev, A.Yu.; Korovin, Yu.A.; Pereslavtsev, P.

    2002-01-01

    A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d + 6,7Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious along with the newly evaluated d + 6,7Li data is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved with more advanced and validated data

  17. Advanced Monte Carlo procedure for the IFMIF d-Li neutron source term based on evaluated cross section data

    CERN Document Server

    Simakov, S P; Moellendorff, U V; Schmuck, I; Konobeev, A Y; Korovin, Y A; Pereslavtsev, P

    2002-01-01

    A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d + 6,7Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious along with the newly evaluated d + 6,7Li data is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved with more advanced and validated data.

  18. Simulating polarized light scattering in terrestrial snow based on bicontinuous random medium and Monte Carlo ray tracing

    International Nuclear Information System (INIS)

    Xiong, Chuan; Shi, Jiancheng

    2014-01-01

    To date, light scattering models of snow have taken little account of real snow microstructure. The ideal spherical or other single-shaped particle assumptions in previous snow light scattering models can cause errors in the light scattering modeling of snow, and further errors in remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, with which the real snow microstructure is considered. The accurate specific surface area of the bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) is predicted, and this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is validated and proved accurate, and can be further applied in polarized remote sensing. -- Highlights: • Bicontinuous random media were used to model real snow microstructure. • A photon tracing technique with polarization-status tracking was applied. • The SSA–albedo relationship of snow is close to that of a sphere-based medium. • Validation of albedo and BRDF showed good results. • Validation of polarized reflectance showed good agreement with experimental data

  19. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)

    2015-12-15

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates the command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
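
    The compartment-weighting step described above (simulate each compartment once, then weight by pharmacokinetics and aggregate) can be illustrated with a minimal sketch. This is not TestDose code; the array shapes and the mono-exponential kinetics are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch: each compartment's projection is simulated once, then scaled
# by the pharmacokinetic activity at the requested time point and summed.
rng = np.random.default_rng(0)
n_compartments, ny, nx = 3, 64, 64
unit_projections = rng.random((n_compartments, ny, nx))  # one simulation each (placeholder)

def image_at_time(t_h, a0, half_lives_h):
    """Weight each compartment's unit projection by its activity at t_h and sum."""
    lam = np.log(2.0) / np.asarray(half_lives_h)
    activities = np.asarray(a0) * np.exp(-lam * t_h)   # assumed mono-exponential kinetics
    return np.tensordot(activities, unit_projections, axes=1)

img = image_at_time(t_h=4.0, a0=[100.0, 40.0, 10.0], half_lives_h=[6.0, 12.0, 2.0])
print(img.shape)  # (64, 64)
```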

  20. Proposal of flexible atomic and molecular process management for Monte Carlo impurity transport code based on object oriented method

    International Nuclear Information System (INIS)

    Asano, K.; Ohno, N.; Takamura, S.

    2001-01-01

    Monte Carlo simulation codes for impurity transport have been developed by several groups, mainly for fusion-related edge plasmas. The state of an impurity particle is determined by atomic and molecular processes in the plasma, such as ionization and charge exchange. A large number of atomic and molecular processes have to be considered because the edge plasma contains not only impurity atoms but also impurity molecules, mainly related to chemical erosion of carbon materials, and their cross sections have been given experimentally and theoretically. We need to reveal which processes are essential under a given edge plasma condition. A Monte Carlo simulation code that takes such various atomic and molecular processes into account is therefore necessary to investigate the behavior of impurity particles in plasmas. Usually, an impurity transport simulation code is written around some specific atomic and molecular processes, so that the introduction of a new process forces complicated programming work. In order to evaluate various proposed atomic and molecular processes, a flexible management of atomic and molecular reactions should be established. We have developed an impurity transport simulation code based on the object-oriented method. By employing object-oriented programming, we can handle each particle as an 'object' that encapsulates both its data and its procedures. A user (not necessarily a programmer) can define the properties of each particle species and the related atomic and molecular processes, and each 'object' is then defined by analyzing this information. According to the relations among plasma particle species, objects are connected with each other and change their state by themselves. Dynamic allocation of these objects to program memory is employed to adapt to an arbitrary number of species and atomic/molecular reactions. Thus we can treat arbitrary species and processes, starting from, for instance, methane and acetylene. Such a software procedure would also be useful for industrial plasma applications.
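
    A minimal sketch of the object-oriented idea described above: species and reactions become plain objects, so adding a process is a data change rather than new control flow. The class names, species and rate values below are hypothetical, not taken from the code described in the abstract.

```python
import random

# Minimal sketch: processes are data attached to species, so adding a new
# atomic/molecular reaction means adding an entry, not rewriting control flow.
class Process:
    def __init__(self, name, product, rate):
        self.name, self.product, self.rate = name, product, rate

class Particle:
    def __init__(self, species, processes):
        self.species = species
        self.processes = processes   # mapping: species -> available processes

    def step(self, dt):
        """Attempt at most one reaction in dt with probability rate*dt."""
        for proc in self.processes.get(self.species, []):
            if random.random() < proc.rate * dt:
                self.species = proc.product
                return proc.name
        return None

processes = {
    "CH4": [Process("ionization", "CH4+", rate=1e3),
            Process("dissociation", "CH3", rate=5e2)],
    "CH3": [Process("ionization", "CH3+", rate=8e2)],
}
p = Particle("CH4", processes)
print(p.step(dt=1e-4), p.species)
```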

  1. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have been appearing which couple a Monte Carlo code, simulating the neutron transport, to deterministic methods that handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The drawback is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and give details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, we discuss the implementation of this method in the TRIPOLI-4 code, as well as the precise calculation scheme used to obtain an important speed-up of the depletion calculation. We first validate and optimize the perturbed depletion scheme on the depletion of a PWR-like fuel cell. The technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author)
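
    The correlated sampling idea above can be illustrated on the simplest possible case: a purely absorbing 1D slab, where the same random free flights sampled with the unperturbed cross section are reweighted to estimate a perturbed transmission. This is a minimal sketch under stated assumptions, not the TRIPOLI-4 implementation; the cross sections and slab width are placeholders.

```python
import numpy as np

# Correlated sampling sketch for a purely absorbing slab: histories are drawn
# with the unperturbed cross section sigma; the perturbed transmission is
# estimated by reweighting the same histories with the likelihood ratio
# exp(-(sigma_p - sigma) * width) for tracks that cross the slab.
rng = np.random.default_rng(0)
sigma, sigma_p, width = 1.0, 1.1, 2.0   # unperturbed / perturbed [1/cm], slab [cm]
n = 100_000

flights = rng.exponential(1.0 / sigma, size=n)   # free flights sampled with sigma
transmitted = flights > width
w_ratio = np.exp(-(sigma_p - sigma) * width)     # perturbed/unperturbed weight

t_unpert = transmitted.mean()                    # estimates exp(-sigma * width)
t_pert_correlated = t_unpert * w_ratio           # estimates exp(-sigma_p * width)
print(t_unpert, t_pert_correlated, np.exp(-sigma_p * width))  # compare to exact
```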

  2. Monte Carlo based performance assessment of different animal PET architectures using pixellated CZT detectors

    International Nuclear Information System (INIS)

    Visvikis, D.; Lefevre, T.; Lamare, F.; Kontaxakis, G.; Santos, A.; Darambara, D.

    2006-01-01

    The majority of present positron emission tomography (PET) animal systems are based on the coupling of high-density scintillators and light detectors. A disadvantage of these detector configurations is the compromise between image resolution, sensitivity and energy resolution. In addition, current combined imaging devices are based on simply placing different apparatus back-to-back in axial alignment, without any significant level of software or hardware integration. The use of semiconductor CdZnTe (CZT) detectors is a promising alternative to scintillators for gamma-ray imaging systems. At the same time, CZT detectors have the properties necessary for the construction of a truly integrated imaging device (PET/SPECT/CT). The aim of this study was to assess the performance of different small animal PET scanner architectures based on CZT pixellated detectors and to compare their performance with that of state-of-the-art existing PET animal scanners. Different scanner architectures were modelled using GATE (Geant4 Application for Tomographic Emission). Particular scanner design characteristics included an overall cylindrical scanner format of 8 and 24 cm in axial and transaxial field of view, respectively, and a temporal coincidence window of 8 ns. Different individual detector modules were investigated, considering pixel pitches down to 0.625 mm and detector thicknesses from 1 to 5 mm. Modified NEMA NU2-2001 protocols were used in order to simulate performance under mouse, rat and monkey imaging conditions. These protocols allowed us to directly compare the performance of the proposed geometries with the latest generation of current small animal systems. The results attained demonstrate the potential for a higher NECR with CZT-based scanners in comparison to scintillator-based animal systems.
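
    For reference, the noise-equivalent count rate used to compare such architectures is commonly written in the NEMA form below, where T, S and R are the true, scattered and random coincidence rates and f is the fraction of the field of view subtended by the object; the exact variant used in the study is not stated in the abstract.

```latex
\mathrm{NECR} = \frac{T^{2}}{T + S + 2 f R}
```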

  3. Pattern-oriented Agent-based Monte Carlo simulation of Cellular Redox Environment

    DEFF Research Database (Denmark)

    Tang, Jiaowei; Holcombe, Mike; Boonen, Harrie C.M.

    … (CYS/CYSS) and mitochondrial redox couples. Evidence suggests that both intracellular and extracellular redox can affect overall cell redox state. How redox is communicated between extracellular and intracellular environments is still a matter of debate. Some researchers conclude, based on experimental data, …

  4. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    International Nuclear Information System (INIS)

    Li, Y; Tian, Z; Jiang, S; Jia, X; Song, T; Wu, Z; Liu, Y

    2015-01-01

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after the optimization problem is solved. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, particles were sampled from all spots together with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory-writing conflict problem. Results: We validated the proposed MC-based optimization scheme on one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical

  5. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y [Tsinghua University, Beijing, Beijing (China); UT Southwestern Medical Center, Dallas, TX (United States); Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Song, T [Southern Medical University, Guangzhou, Guangdong (China); UT Southwestern Medical Center, Dallas, TX (United States); Wu, Z; Liu, Y [Tsinghua University, Beijing, Beijing (China)

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after the optimization problem is solved. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, particles were sampled from all spots together with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory-writing conflict problem. Results: We validated the proposed MC-based optimization scheme on one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical

  6. Report: GPU Based Massive Parallel Kawasaki Kinetics In Monte Carlo Modelling of Lipid Microdomains

    OpenAIRE

    Lis, M.; Pintal, L.

    2013-01-01

    This paper introduces a novel method for simulating lipid biomembranes based on the Metropolis–Hastings algorithm and the computational power of graphics processing units. The method gives up to a 55-fold computational speedup in comparison to classical computations. An extensive study of the algorithm's correctness is provided. Analyses of the simulation results and of results obtained with classical simulation methodologies are presented.

  7. Monte carlo efficiency calibration of a neutron generator-based total-body irradiator

    Science.gov (United States)

    The increasing prevalence of obesity world-wide has focused attention on the need for accurate body composition assessments, especially of large subjects. However, many body composition measurement systems are calibrated against a single-sized phantom, often based on the standard Reference Man mode...

  8. Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2016-01-01

    We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,

  9. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    Energy Technology Data Exchange (ETDEWEB)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    2017-07-01

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
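
    A minimal sketch of the low-weight merging idea described above, under assumptions of our own (one scalar size coordinate, a weight- and mass-conserving pairwise merge): when nucleation must add a particle to a full, constant-number ensemble, the two lowest-weight simulation particles are merged to free a slot.

```python
import numpy as np

# Minimal "low-weight merging" sketch for a constant-number weighted-particle
# scheme. The merge conserves total weight and total weighted size; the freed
# slot receives the freshly nucleated particle. Values are placeholders.
def merge_lowest_and_insert(sizes, weights, new_size, new_weight):
    i, j = np.argsort(weights)[:2]           # two lowest-weight particles
    w = weights[i] + weights[j]
    merged_size = (weights[i] * sizes[i] + weights[j] * sizes[j]) / w
    sizes[i], weights[i] = merged_size, w    # merged pair occupies slot i
    sizes[j], weights[j] = new_size, new_weight  # reuse slot j for the nucleus
    return sizes, weights

sizes = np.array([1.0, 2.0, 5.0, 10.0])
weights = np.array([0.1, 4.0, 0.2, 3.0])
print(merge_lowest_and_insert(sizes, weights, new_size=0.5, new_weight=1.0))
```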

  10. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  11. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Y; Li, Y; Tian, Z; Gu, X; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study is to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plans achieved satisfactory PTV dose coverage, after re-computing doses using the MC method it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plans, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.

  12. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Stabin, M [Vanderbilt Univ Medical Ctr, Nashville, TN (United States)

    2015-06-15

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478)

  13. Recurrence formulas for evaluating expansion series of depletion functions

    International Nuclear Information System (INIS)

    Vukadin, Z.

    1991-01-01

    A high-accuracy analytical method for solving the depletion equations for chains of radioactive nuclides is based on the formulation of depletion functions. When the arguments of a depletion function are too close to each other, series expansions of the depletion function have to be used. However, the high-accuracy series expressions for depletion functions of high index become too complicated. Recursion relations are derived which enable an efficient high-accuracy evaluation of the depletion functions with high indices. (orig.)
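
    For context, the near-degenerate regime that these series expansions address can be seen in the classic Bateman solution; the notation below is generic, not necessarily Vukadin's. For a linear chain N₁ → N₂ → … with decay constants λᵢ and Nᵢ(0) = 0 for i > 1,

```latex
N_n(t) = N_1(0) \left( \prod_{i=1}^{n-1} \lambda_i \right)
         \sum_{j=1}^{n} \frac{e^{-\lambda_j t}}{\prod_{k \neq j} \left( \lambda_k - \lambda_j \right)} ,
```

    which suffers catastrophic cancellation when some λₖ − λⱼ → 0; depletion functions and their series expansions are designed precisely for this near-degenerate regime.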

  14. Sequential Monte Carlo filter for state estimation of LiFePO4 batteries based on an online updated model

    Science.gov (United States)

    Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.

    2014-02-01

    Battery state monitoring is one of the key techniques in battery management systems, e.g. in electric vehicles. An accurate estimation can help to improve system performance and to prolong the battery's remaining useful life. The main challenges for state estimation for LiFePO4 batteries are the flat characteristic of the open-circuit voltage over the battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to better handling of the hysteresis problem, the results show the benefits of the proposed method over estimation with an Extended Kalman filter.
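
    The sequential Monte Carlo (particle filter) core of such an estimator can be sketched in a few lines. The coulomb-counting process model, the flat placeholder OCV curve and all numerical values below are illustrative assumptions, not the paper's cell model.

```python
import numpy as np

# Minimal particle-filter sketch for SOC estimation: predict by coulomb
# counting with process noise, weight by the voltage-measurement likelihood,
# then resample. Everything numeric here is a placeholder.
rng = np.random.default_rng(1)
n_particles, dt, capacity_as = 500, 1.0, 3600.0   # capacity in ampere-seconds

def ocv(soc):                 # assumed (deliberately flat) OCV curve
    return 3.2 + 0.1 * soc

particles = rng.uniform(0.4, 0.6, n_particles)    # initial SOC ensemble
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(particles, weights, current_a, v_meas, sigma_v=0.01):
    # Predict: coulomb counting plus process noise
    particles = particles - current_a * dt / capacity_as \
                + rng.normal(0.0, 1e-4, particles.size)
    # Update: Gaussian likelihood of the measured terminal voltage
    weights = weights * np.exp(-0.5 * ((v_meas - ocv(particles)) / sigma_v) ** 2)
    weights /= weights.sum()
    # Resample (systematic resampling would be the usual refinement)
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles, weights = pf_step(particles, weights, current_a=1.0, v_meas=3.25)
print(float(np.mean(particles)))   # SOC estimate
```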

  15. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    International Nuclear Information System (INIS)

    Charles A. Wemple; Joshua J. Cogliati

    2005-01-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN

  16. Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production

    CERN Document Server

    Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S

    2004-01-01

    High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...

  17. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    International Nuclear Information System (INIS)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-01-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer's specifications, the beam data commissioning needed for this model includes several in-air and in-water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. A gamma index criterion of 2%/2 mm was used to evaluate the accuracy of the MC calculations. MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm². Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criterion for these fields, respectively. For smaller fields (12x12 and 6x6 mm²) only 92% of the data meet the criterion. Total scatter factors show good agreement, except for the smallest field, which shows an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm². Special care must be taken for smaller fields.
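
    For reference, the 2%/2 mm comparison above is the standard gamma-index test (Low et al.); with distance criterion Δd_M = 2 mm and dose criterion ΔD_M = 2%, a calculated distribution passes at a measured point r_m if

```latex
\gamma(\vec r_m) = \min_{\vec r_c}
\sqrt{\frac{\lVert \vec r_c - \vec r_m \rVert^{2}}{\Delta d_M^{2}}
    + \frac{\left[ D_c(\vec r_c) - D_m(\vec r_m) \right]^{2}}{\Delta D_M^{2}}} \;\le\; 1 .
```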

  18. Atomic structure of Mg-based metallic glass investigated with neutron diffraction, reverse Monte Carlo modeling and electron microscopy

    Directory of Open Access Journals (Sweden)

    Rafał Babilas

    2017-05-01

    The structure of a multicomponent metallic glass, Mg65Cu20Y10Ni5, was investigated by the combined methods of neutron diffraction (ND), reverse Monte Carlo modeling (RMC) and high-resolution transmission electron microscopy (HRTEM). The RMC method, based on the results of the ND measurements, was used to develop a realistic structure model of the quaternary alloy in a glassy state. The calculated model consists of a random packing structure of atoms in which some ordered regions can be indicated. The amorphous structure was also described by peak values of partial pair correlation functions and coordination numbers, which illustrated some types of cluster packing. The N = 9 clusters correspond to tri-capped trigonal prisms, which are one of Bernal's canonical clusters, and atomic clusters with N = 6 and N = 12 are suitable for octahedral and icosahedral atomic configurations. The nanocrystalline character of the alloy after annealing was also studied by HRTEM. Selected HRTEM images of the nanocrystalline regions were also processed by inverse Fourier transform analysis. The high-angle annular dark-field (HAADF) technique was used to determine phase separation in the studied glass after heat treatment. The HAADF mode allows for the observation of randomly distributed, dark contrast regions of about 4–6 nm. The interplanar spacing identified for the orthorhombic Mg2Cu crystalline phase is similar to the value of the first coordination shell radius from the short-range order.

  19. A high-quality multilayer structure characterization method based on X-ray fluorescence and Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell' Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell' Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)

    2014-11-08

    X-ray fluorescence (XRF) is a well-known nondestructive technique. It is also applied to multilayer characterization, since it can estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, which makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach yields a very good estimation of the sample contents both in terms of chemical elements and material thicknesses, and in this sense extends the possibilities of XRF measurements. Some examples are examined and discussed. (orig.)

  20. Size dependent thermal hysteresis in spin crossover nanoparticles reflected within a Monte Carlo based Ising-like model

    International Nuclear Information System (INIS)

    Atitoaie, Alexandru; Tanasa, Radu; Enachescu, Cristian

    2012-01-01

    Spin crossover compounds are photo-magnetic bistable molecular magnets with two states in thermodynamic competition: the diamagnetic low-spin state and the paramagnetic high-spin state. The thermal transition between the two states is often accompanied by a wide hysteresis, a premise for possible applications of these materials as recording media. In this paper we study the influence of system size on the thermal hysteresis loops using Monte Carlo simulations based on Arrhenius dynamics applied to an Ising-like model with long- and short-range interactions. We show that, using appropriate boundary conditions, it is possible to reproduce the drop of hysteresis width with decreasing particle size, the hysteresis shift towards lower temperatures and the incomplete transition, as in the available experimental data. The case of larger systems composed of several sublattices is treated as well, reproducing the experimentally observed shrinkage of the hysteresis loop width. - Highlights: ► A study concerning size effects in spin crossover nanoparticle hysteresis is presented. ► An Ising-like model with short- and long-range interactions and Arrhenius dynamics is employed. ► In open-boundary systems the hysteresis width decreases with particle size. ► With an appropriate environment, the hysteresis loop is shifted towards lower temperature and the transition is incomplete.
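
    The model family described above can be illustrated with a minimal sketch: an Ising-like lattice where the temperature-dependent field (Δ − k_BT ln g)/2 encodes the high-spin/low-spin degeneracy ratio g, with Arrhenius-type flip rates. All parameter values and the specific rate form are assumptions made for illustration, not the paper's calibrated model.

```python
import numpy as np

# Ising-like spin-crossover sketch: s = -1 (low spin), +1 (high spin) on a
# small 2D lattice with nearest-neighbour coupling J and an entropic field.
# Parameters (J, delta, g, Ea) are illustrative placeholders.
rng = np.random.default_rng(0)
L, J, delta, g, kb = 16, 0.15, 2.5, 150.0, 1.0
spins = -np.ones((L, L))                  # start fully low-spin

def sweep(spins, T, attempt_rate=1.0, ea=2.0):
    """One lattice sweep with Arrhenius-type acceptance exp(-(Ea + dE/2)/kT)."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] \
           + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        h_eff = (delta - kb * T * np.log(g)) / 2.0   # ligand field vs. entropy
        dE = 2.0 * spins[i, j] * (J * nb - h_eff)    # energy change of a flip
        rate = attempt_rate * np.exp(-(ea + dE / 2.0) / (kb * T))
        if rng.random() < min(1.0, rate):
            spins[i, j] *= -1
    return spins

for T in np.linspace(0.2, 1.2, 6):        # crude heating branch
    spins = sweep(spins, T)
    print(f"T={T:.2f}  HS fraction={(spins > 0).mean():.2f}")
```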

  1. Monte Carlo simulation of neutron irradiation facility developed for accelerator based in vivo neutron activation measurements in human hand bones

    International Nuclear Information System (INIS)

    Aslam; Prestwich, W.V.; McNeill, F.E.; Waker, A.J.

    2006-01-01

    The neutron irradiation facility developed at the McMaster University 3 MV Van de Graaff accelerator was employed to assess the in vivo elemental content of aluminum and manganese in human hands. These measurements were carried out to monitor long-term exposure to these potentially toxic trace elements through hand bone levels. The dose equivalent delivered to a patient during the irradiation procedure is the limiting factor for IVNAA measurements. This article describes a method to estimate the average radiation dose equivalent delivered to the patient's hand during irradiation. The computational method described in this work augments the dose measurements carried out earlier [Arnold et al., 2002. Med. Phys. 29(11), 2718-2724]. The method employs a Monte Carlo simulation of the hand irradiation facility using MCNP4B. Based on the estimated dose equivalents received by the patient's hand, the proposed irradiation procedure for the IVNAA measurement of manganese in human hands [Arnold et al., 2002. Med. Phys. 29(11), 2718-2724] with normal (1 ppm) and elevated manganese content can be carried out with a reasonably low dose of 31 mSv to the hand. Sixty-three percent of the total dose equivalent is delivered by the non-useful fast group (>10 keV); filtering this neutron group from the beam will further decrease the dose equivalent to the patient's hand

  2. Hydroxyethylamine derivatives as HIV-1 protease inhibitors: a predictive QSAR modelling study based on Monte Carlo optimization.

    Science.gov (United States)

    Bhargava, S; Adhikari, N; Amin, S A; Das, K; Gayen, S; Jha, T

    2017-12-01

    Application of HIV-1 protease inhibitors (as an anti-HIV regimen) may serve as an attractive strategy for anti-HIV drug development. Several investigations suggest that there is a crucial need to develop novel protease inhibitors with higher potency and reduced toxicity. A Monte Carlo optimized QSAR study was performed on 200 hydroxyethylamine derivatives with antiprotease activity. Twenty-one QSAR models with good statistical quality were developed from three different splits with various combinations of SMILES- and GRAPH-based descriptors. The best models from the different splits were selected on the basis of statistically validated characteristics of the test set and have the following statistical parameters: r² = 0.806, Q² = 0.788 (split 1); r² = 0.842, Q² = 0.826 (split 2); r² = 0.774, Q² = 0.755 (split 3). The structural attributes obtained from the best models were analysed to understand the structural requirements of the selected series for HIV-1 protease inhibitory activity. On the basis of the obtained structural attributes, 11 new compounds were designed, out of which five were found to have better activity than the most active compound in the series.

  3. Analysis of the neutrons dispersion in a semi-infinite medium based in transport theory and the Monte Carlo method

    International Nuclear Information System (INIS)

    Arreola V, G.; Vazquez R, R.; Guzman A, J. R.

    2012-10-01

    This work presents a comparative analysis of results for neutron scattering in a non-multiplying semi-infinite medium. One boundary of this medium is located at the origin of coordinates, where a neutron source in beam form (i.e., μ₀ = 1) is placed. Neutron scattering is studied with the Monte Carlo statistical method and with one-dimensional, one-energy-group transport theory. Transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Scattering in light water and in heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water and at less than ten transport mean free paths for light water; the differences between the two methods are larger for light water. A second notable result is that the two distributions behave similarly at small transport mean free paths, whereas at large distances transport theory tends to an asymptotic value while the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)

  4. A high-quality multilayer structure characterization method based on X-ray fluorescence and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Brunetti, Antonio; Golosio, Bruno; Melis, Maria Grazia; Mura, Stefania

    2015-01-01

    X-ray fluorescence (XRF) is a well-known nondestructive technique. It is also applied to multilayer characterization, since it can estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, which makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach yields a very good estimation of the sample contents both in terms of chemical elements and material thicknesses, and in this sense extends the possibilities of XRF measurements. Some examples are examined and discussed. (orig.)

  5. Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method

    DEFF Research Database (Denmark)

    Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo

    2017-01-01

    The focus on reducing buildings' energy consumption is gradually increasing, and optimizing a building's performance and maximizing its potential leads to great challenges between architects and engineers. In this study, we collaborate with a group of architects on a design project for a new office building located in Aarhus, Denmark. Building geometry, floor plans and employee schedules were obtained from the architects and form the basis for this study. This study aims to simplify the iterative design process that is based on the traditional trial-and-error method in the late design phases...

  6. Core physics design calculation of mini-type fast reactor based on Monte Carlo method

    International Nuclear Information System (INIS)

    He Keyu; Han Weishi

    2007-01-01

    An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR can satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirement for long-term operation. (authors)

  7. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.
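
    The effect being corrected can be shown on the simplest case, a two-step cascade γ₁ → γ₂: counts are lost from the γ₁ full-energy peak whenever γ₂ is detected in true coincidence, with the loss governed by the total efficiency at the γ₂ energy. The sketch below covers this textbook special case only, with placeholder numbers; the paper's algorithm generalizes it to arbitrary decay schemes built from ENSDF data.

```python
# Summing-out correction for a two-step cascade g1 -> g2: the observed g1
# full-energy peak is N_obs = N_true * (1 - eps_t), where eps_t is the total
# detection efficiency at the g2 energy. Numbers below are placeholders.
def summing_out_factor(eps_total_coincident: float) -> float:
    """Multiplicative loss factor for the g1 full-energy peak."""
    return 1.0 - eps_total_coincident

def corrected_counts(observed: float, eps_total_coincident: float) -> float:
    return observed / summing_out_factor(eps_total_coincident)

print(corrected_counts(observed=10_000, eps_total_coincident=0.08))  # ~10870
```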

  8. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  9. Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A KMC-based parallel algorithm is proposed to simulate the evolution of dust particles. In the parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are proposed to balance the load of every process and to reduce the communication expense among processes. The experimental results show that the simulation of diffusion, sedimentation and resuspension of dust particles in a virtual campus is realized, and that the simulation time is shortened by the parallel algorithm, which makes up for the shortcomings of serial computing and makes the simulation of large-scale virtual environments possible.
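
    The serial building block that such a parallel scheme distributes is the standard kinetic Monte Carlo step: choose an event with probability proportional to its rate and advance the clock by an exponential increment. A minimal Gillespie-style sketch with placeholder rates:

```python
import numpy as np

# Minimal kinetic Monte Carlo (KMC) step: event k is chosen with probability
# r_k / sum(r), and time advances by an Exp(sum(r)) increment.
rng = np.random.default_rng(7)

def kmc_step(rates, t):
    """Return (chosen event index, new time)."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    k = rng.choice(rates.size, p=rates / total)
    dt = rng.exponential(1.0 / total)
    return k, t + dt

# Placeholder event rates: [diffusion, sedimentation, resuspension]
event, t = kmc_step(rates=[5.0, 1.0, 0.2], t=0.0)
print(event, t)
```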

  10. Monte Carlo based validation of the ENDF/MC²-II/SDX cell homogenization path

    International Nuclear Information System (INIS)

    Wade, D.C.

    1978-11-01

    The results are summarized of a program of validation of the unit cell homogenization prescriptions and codes used for the analysis of Zero Power Reactor (ZPR) fast breeder reactor critical experiments. The ZPR drawer loading patterns comprise both plate type and pin-calandria type unit cells. A prescription is used to convert the three-dimensional physical geometry of the drawer loadings into one-dimensional calculational models. The ETOE-II/MC²-II/SDX code sequence is used to transform ENDF/B basic nuclear data into unit cell average broad group cross sections based on the 1D models. Cell average, broad group anisotropic diffusion coefficients are generated using the methods of Benoist or of Gelbard. The resulting broad (approx. 10 to 30) group parameters are used in multigroup diffusion and Sₙ transport calculations of full core XY or RZ models which employ smeared atom densities to represent the contents of the unit cells

  11. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits, including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.

  12. Hydrologic transport of depleted uranium associated with open air dynamic range testing at Los Alamos National Laboratory, New Mexico, and Eglin Air Force Base, Florida

    Energy Technology Data Exchange (ETDEWEB)

    Becker, N.M. [Los Alamos National Lab., NM (United States); Vanta, E.B. [Wright Laboratory Armament Directorate, Eglin Air Force Base, FL (United States)

    1995-05-01

    Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980s at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments.

  13. Hydrologic transport of depleted uranium associated with open air dynamic range testing at Los Alamos National Laboratory, New Mexico, and Eglin Air Force Base, Florida

    International Nuclear Information System (INIS)

    Becker, N.M.; Vanta, E.B.

    1995-01-01

    Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980's at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments

  14. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    International Nuclear Information System (INIS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-01-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular

  15. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well-established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between four and 92 (Be, C, H₂O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly due to the fact that theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as −2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable the use of a wider range of experimental results. (Author)
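
    For orientation, the traditional water relation that the paper improves upon is commonly quoted from ICRU Report 35 as a quadratic in the practical range R_p; the paper's new relation is material-dependent and differs from it by the percentages given above.

```latex
E_0 \;[\mathrm{MeV}] \;\approx\; 0.22 \;+\; 1.98\,R_p \;+\; 0.0025\,R_p^{2},
\qquad R_p \ \text{in cm of water}.
```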

  16. Estimation of the dose deposited by electron beams in radiotherapy in voxelised phantoms using the Monte Carlo simulation platform GATE based on GEANT4 in a grid environment

    International Nuclear Information System (INIS)

    Perrot, Y.

    2011-01-01

    Radiation therapy treatment planning requires accurate determination of the absorbed dose in the patient. Monte Carlo simulation is the most accurate method for solving the problem of particle transport in matter. This thesis is the first study dealing with the validation of the Monte Carlo simulation platform GATE (GEANT4 Application for Tomographic Emission), based on the GEANT4 (Geometry And Tracking) libraries, for the computation of the absorbed dose deposited by electron beams. It aims to demonstrate that GATE/GEANT4 calculations can meet treatment planning requirements in situations where analytical algorithms are not satisfactory. The goal is to prove that GATE/GEANT4 is useful for treatment planning with electrons and competes with well-validated Monte Carlo codes. This is demonstrated by GATE/GEANT4 simulations of realistic electron beams and electron sources used for external radiation therapy or targeted radiation therapy. The computed absorbed dose distributions are in agreement with experimental measurements and/or calculations from other Monte Carlo codes. Furthermore, guidelines are proposed to fix the physics parameters of the GATE/GEANT4 simulations in order to ensure the accuracy of absorbed dose calculations according to radiation therapy requirements. (author)

  17. Simulation of Ni-63 based nuclear micro battery using Monte Carlo modeling

    International Nuclear Information System (INIS)

    Kim, Tae Ho; Kim, Ji Hyun

    2013-01-01

    Radioisotope batteries have energy densities 100-10000 times greater than chemical batteries. Li-ion batteries also have fundamental problems, such as short lifetimes and the need for a recharging system. In addition, existing batteries are difficult to operate inside the human body, in national defense equipment or in space environments. With the development of semiconductor processes and materials technology, micro devices have become much more integrated. It is expected that, based on new semiconductor technology, the conversion efficiency of betavoltaic batteries will be greatly increased. Furthermore, beta particles cannot penetrate the skin of the human body, so such a battery is safer than a Li battery, which carries a risk of explosion. In other words, interest in radioisotope batteries has increased because they are applicable as power sources for artificial internal organs without recharge or replacement, micro sensors for arctic and other special environments, small military equipment and the space industry. However, there are not enough data on the beta particle fluence from radioisotope sources for nuclear batteries. The beta particle fluence directly influences battery efficiency, and it is strongly affected by the radioisotope source thickness because of the self-absorption effect. Therefore, in this article, we present a basic design of a Ni-63 nuclear battery and simulation data on the beta particle fluence for various thicknesses of the radioisotope source and designs of the battery

  18. Full Monte Carlo-Based Biologic Treatment Plan Optimization System for Intensity Modulated Carbon Ion Therapy on Graphics Processing Unit.

    Science.gov (United States)

    Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun

    2018-01-01

    One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
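
    The gradient-based optimization step can be sketched as projected gradient descent on spot weights against a prescribed biological effect. The dose-influence matrix, the plain linear-quadratic effect, and all parameter values below are invented placeholders; the record's system uses the repair-misrepair-fixation model and MC-computed spot data.

```python
# A minimal sketch: match a prescribed per-voxel biological effect by
# adjusting non-negative spot weights with projected gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_spots = 200, 30
D = rng.uniform(0.0, 0.1, size=(n_vox, n_spots))  # dose per unit spot weight
alpha, beta = 0.2, 0.02                           # assumed LQ parameters
e_presc = np.full(n_vox, 2.0)                     # prescribed effect per voxel

w = np.ones(n_spots)                              # spot weights
for _ in range(500):
    d = D @ w
    e = alpha * d + beta * d**2                   # delivered biological effect
    grad = 2.0 * (D.T @ ((e - e_presc) * (alpha + 2.0 * beta * d)))
    w = np.maximum(w - 0.1 * grad, 0.0)           # project onto w >= 0
residual = alpha * (D @ w) + beta * (D @ w) ** 2 - e_presc
print("RMS effect error: %.4f" % np.sqrt(np.mean(residual**2)))
```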

  19. Use of Monte Carlo analysis in a risk-based prioritization of toxic constituents in house dust.

    Science.gov (United States)

    Ginsberg, Gary L; Belleggia, Giuliana

    2017-12-01

    Many chemicals have been detected in house dust, with exposures of potential health concern to the general public and particularly to young children. House dust is also an indicator of chemicals present in consumer products and the built environment that may constitute a health risk. The current analysis compiles a database of recent house dust concentrations from the United States and Canada, focusing upon semi-volatile constituents. Seven constituents from the phthalate and flame retardant categories were selected for risk-based screening and prioritization: diethylhexyl phthalate (DEHP), butyl benzyl phthalate (BBzP), diisononyl phthalate (DINP), a pentabrominated diphenyl ether congener (BDE-99), hexabromocyclododecane (HBCDD), tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) and tris(2-chloroethyl) phosphate (TCEP). Monte Carlo analysis was used to represent the variability in house dust concentration as well as the uncertainty in the toxicology database in the estimation of children's exposure and risk. Constituents were prioritized based upon the percentage of the distribution of risk results for cancer and non-cancer endpoints that exceeded a hazard quotient (HQ) of 1. The greatest percent HQ exceedances were for DEHP (cancer and non-cancer), BDE-99 (non-cancer) and TDCIPP (cancer). Current uses and the potential for reducing levels of these constituents in house dust are discussed. Exposure and risk for other phthalates and flame retardants in house dust may increase if they are used to substitute for these prioritized constituents. Therefore, alternative assessment and green chemistry solutions are important elements in decreasing children's exposure to chemicals of concern in the indoor environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
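
    The screening metric described here, the fraction of Monte Carlo iterations whose hazard quotient exceeds 1, can be sketched in a few lines. The lognormal dust-concentration and reference-dose distributions, the intake rate and the body weight below are invented placeholders, not the study's inputs.

```python
# A minimal sketch of Monte Carlo hazard-quotient screening for one
# hypothetical dust constituent.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
conc = rng.lognormal(mean=np.log(500.0), sigma=1.0, size=n)  # ug/g in dust
rfd = rng.lognormal(mean=np.log(0.02), sigma=0.5, size=n)    # mg/kg-day, uncertain
ingestion_g_per_day = 0.05    # dust ingested by a child (assumed)
body_weight_kg = 12.0         # (assumed)

dose = conc * 1e-3 * ingestion_g_per_day / body_weight_kg    # mg/kg-day
hq = dose / rfd                                              # hazard quotient
print("iterations with HQ > 1: %.1f%%" % (100 * np.mean(hq > 1)))
```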

  20. Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.

    Science.gov (United States)

    Shan, Qing; Chu, Shengnan; Jia, Wenbao

    2015-11-01

    Coal is one of the most popular fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In a prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzer, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, appropriate thermal and fast neutron fluxes are beneficial in improving the measurement accuracy of heavy metals and in ensuring that the measurement accuracy of the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (d-T) neutron generator is determined, appropriate thermal and fast neutrons can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) Transport Code and the Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in a PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. The optimization targets are twofold: (1) the ratio of thermal to fast neutrons is 1:1 and (2) the total neutron flux from the optimized neutron source in the sample increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Maximum optimization of the targets is achieved when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation

    International Nuclear Information System (INIS)

    Surti, S; Karp, J S; Muehllehner, G

    2004-01-01

    The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are the lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels
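
    The NEC gain claimed here follows directly from how scatter and random fractions enter the noise-equivalent count rate. A minimal sketch, using one common convention for NEC (some authors weight randoms differently); the count rates are invented, not the simulated scanner data.

```python
# A minimal sketch of the noise-equivalent count rate comparison.
def nec(trues, scatters, randoms):
    """NEC = T^2 / (T + S + R), one common convention."""
    return trues**2 / (trues + scatters + randoms)

# Hypothetical rates in kcps: a low-scatter LaBr3-like system versus a
# higher-sensitivity high-Z system with larger scatter/random fractions.
print("LaBr3-like system: %.1f kcps" % nec(100.0, 20.0, 40.0))
print("high-Z system    : %.1f kcps" % nec(120.0, 45.0, 90.0))
```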

  2. Evaluation of B10 depletion in the WH PWR

    International Nuclear Information System (INIS)

    Park, Sang Won; Woo, Hae Suk; Kim, Sun Doo; Chae, Hee Dong; Myung, Sun Yup; Jang, Ju Kyung

    2001-01-01

    This paper presents the methodology to evaluate the B-10 depletion behavior in the pressurized water reactor. A B-10 depletion evaluation is then performed based on the prediction program and the measured B-10 data. The result shows that B-10 depletion during normal operation is not negligible. Therefore, adjustments for this depletion effect should be made to calculate the estimated critical position (ECP) and to determine the boron concentration required to maintain the specified shutdown margin
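
    The underlying physics is first-order burnout of B-10 under a neutron flux, N(t) = N0 exp(-sigma*phi*t). A minimal sketch with a round thermal cross-section for B-10 and an assumed average flux; the flux value is a placeholder, not plant data from the record.

```python
# A minimal sketch of B-10 burnout under a constant assumed thermal flux.
import math

SIGMA_B10 = 3840e-24        # B-10 (n,alpha) thermal cross-section, cm^2
PHI = 5e12                  # assumed average flux seen by the boron, n/cm^2/s
SECONDS_PER_MONTH = 30 * 24 * 3600.0

for months in range(0, 19, 6):
    remaining = math.exp(-SIGMA_B10 * PHI * months * SECONDS_PER_MONTH)
    print(f"after {months:2d} months: {100 * remaining:5.1f}% of initial B-10")
```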

  3. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
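
    The tissue-composition ratio estimation described above rests on a two-component assumption: each breast voxel is an adipose/glandular mixture, so the reconstructed attenuation coefficient pins down the mixing fraction. A minimal sketch; the attenuation values at an effective energy are illustrative, not the paper's calibration.

```python
# A minimal sketch of per-voxel glandular-fraction estimation from a
# reconstructed attenuation coefficient, mu = f*mu_gland + (1-f)*mu_adipose.
import numpy as np

mu_adipose, mu_gland = 0.045, 0.060      # 1/mm, assumed effective-energy values

def glandular_fraction(mu_recon):
    f = (np.asarray(mu_recon, dtype=float) - mu_adipose) / (mu_gland - mu_adipose)
    return np.clip(f, 0.0, 1.0)          # keep the composition ratio physical

mu_vox = np.array([0.044, 0.050, 0.057, 0.063])
print(glandular_fraction(mu_vox))        # -> [0. 0.333 0.8 1.]
```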

  4. Revisiting Antarctic Ozone Depletion

    Science.gov (United States)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, some details still need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion. In particular, the sensitivity with respect to the particle type has been investigated. If temperatures are artificially forced to allow only cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box-model simulations, which indicates the importance of mixing processes.

  5. Satellite based radar interferometry to estimate large-scale soil water depletion from clay shrinkage: possibilities and limitations

    NARCIS (Netherlands)

    Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as

  6. Physics of fully depleted CCDs

    International Nuclear Information System (INIS)

    Holland, S E; Bebek, C J; Kolbe, W F; Lee, J S

    2014-01-01

    In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photo-generated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully depleted substrates arising from resistivity variations inherent to the growth of the high-resistivity silicon used to fabricate the CCDs

  7. A Monte Carlo based decision-support tool for assessing generation portfolios in future carbon constrained electricity industries

    International Nuclear Information System (INIS)

    Vithayasrichareon, Peerapat; MacGill, Iain F.

    2012-01-01

    This paper presents a novel decision-support tool for assessing future generation portfolios in an increasingly uncertain electricity industry. The tool combines optimal generation mix concepts with Monte Carlo simulation and portfolio analysis techniques to determine expected overall industry costs, associated cost uncertainty, and expected CO2 emissions for different generation portfolio mixes. The tool can incorporate complex and correlated probability distributions for estimated future fossil-fuel costs, carbon prices, plant investment costs, and demand, including price elasticity impacts. The intent of this tool is to facilitate risk-weighted generation investment and associated policy decision-making given uncertainties facing the electricity industry. Applications of this tool are demonstrated through a case study of an electricity industry with coal, CCGT, and OCGT facing future uncertainties. Results highlight some significant generation investment challenges, including the impacts of uncertain and correlated carbon and fossil-fuel prices, the role of future demand changes in response to electricity prices, and the impact of construction cost uncertainties on capital intensive generation. The tool can incorporate virtually any type of input probability distribution, and support sophisticated risk assessments of different portfolios, including downside economic risks. It can also assess portfolios against multi-criterion objectives such as greenhouse emissions as well as overall industry costs. - Highlights: ► Present a decision support tool to assist generation investment and policy making under uncertainty. ► Generation portfolios are assessed based on their expected costs, risks, and CO2 emissions. ► There is a tradeoff among expected cost, risks, and CO2 emissions of generation portfolios. ► Investment challenges include economic impact of uncertainties and the effect of price elasticity. ► CO2 emissions reduction depends on the mix of
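
    The core Monte Carlo step, sampling correlated fuel and carbon prices and rolling them up into portfolio cost and emissions, can be sketched briefly. All prices, covariances, heat rates, emission factors and portfolio shares below are invented placeholders, not the case-study inputs.

```python
# A minimal sketch of Monte Carlo costing of one generation portfolio
# under correlated gas and carbon price uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
mean_log = [np.log(8.0), np.log(30.0)]            # $/GJ gas, $/tCO2 carbon
cov_log = [[0.20, 0.10], [0.10, 0.25]]            # assumed log-space covariance
z = rng.multivariate_normal(mean_log, cov_log, size=n)
gas, carbon = np.exp(z[:, 0]), np.exp(z[:, 1])

# (other costs $/MWh, GJ of gas per MWh, tCO2 per MWh) -- all assumed
tech = {"coal": (27.0, 0.0, 0.90),
        "ccgt": (10.0, 7.0, 0.37),
        "ocgt": (12.0, 11.0, 0.55)}
share = {"coal": 0.5, "ccgt": 0.4, "ocgt": 0.1}   # portfolio energy shares

cost = sum(share[k] * (f + g * gas + e * carbon) for k, (f, g, e) in tech.items())
co2 = sum(share[k] * tech[k][2] for k in tech)
print("expected cost %.1f $/MWh, 95th percentile %.1f $/MWh"
      % (cost.mean(), np.quantile(cost, 0.95)))
print("portfolio emission intensity %.2f tCO2/MWh" % co2)
```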

  8. An automated process for building reliable and optimal in vitro/in vivo correlation models based on Monte Carlo simulations.

    Science.gov (United States)

    Sutton, Steven C; Hu, Mingxiu

    2006-05-05

    Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model-building process consists of five steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. To illustrate this approach, several real data sets representing a broad range of release profiles are used to demonstrate the advantages of this automated process over the traditional approach. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC model selection criterion generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
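
    The AIC-based screening step can be sketched by fitting a few candidate dissolution models to one profile and ranking them. The time points, the measured fractions and the starting guesses are invented; the record's pipeline also runs the convolution and prediction-error checks not shown here.

```python
# A minimal sketch of model screening by AIC for one dissolution profile.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 12], dtype=float)          # hours
frac = np.array([0.18, 0.31, 0.50, 0.72, 0.84, 0.90, 0.96])  # fraction dissolved

models = {
    "first-order":    (lambda t, k: 1 - np.exp(-k * t),                [0.2]),
    "Weibull":        (lambda t, a, b: 1 - np.exp(-((t / a) ** b)),    [2.0, 1.0]),
    "Hixson-Crowell": (lambda t, k: 1 - np.maximum(1 - k * t, 0) ** 3, [0.05]),
}
for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, t, frac, p0=p0, maxfev=5000)
    rss = float(np.sum((frac - f(t, *popt)) ** 2))
    aic = len(t) * np.log(rss / len(t)) + 2 * len(popt)      # Gaussian-error AIC
    print(f"{name:>14}: AIC = {aic:7.2f}")
```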

  9. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry.

    Science.gov (United States)

    Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems for complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head-and-neck clinical cases, previously planned with the Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was demonstrated through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom for obtaining a more reliable DVH on the patient CT was discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre
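
    The gamma passing rate quoted here is defined by the standard gamma index. A minimal brute-force 1D sketch with the global 2%/2 mm criterion; clinical tools evaluate 2D film planes or 3D grids, and the profiles below are synthetic.

```python
# A minimal sketch of a global 1D gamma-index evaluation (2%/2 mm).
import numpy as np

def gamma_1d(x, d_ref, d_eval, dd=0.02, dta=2.0):
    """Per-point gamma of d_ref vs d_eval, normalized to max(d_ref)."""
    norm = dd * d_ref.max()
    g = np.empty_like(d_ref)
    for i in range(x.size):
        dist2 = ((x - x[i]) / dta) ** 2 + ((d_eval - d_ref[i]) / norm) ** 2
        g[i] = np.sqrt(dist2.min())
    return g

x = np.linspace(0.0, 100.0, 201)                      # mm
d_ref = np.exp(-(((x - 50.0) / 15.0) ** 2))
d_eval = 1.01 * np.exp(-(((x - 50.5) / 15.0) ** 2))   # shifted, rescaled copy
g = gamma_1d(x, d_ref, d_eval)
print("gamma passing rate (2%%/2 mm): %.1f%%" % (100 * np.mean(g <= 1.0)))
```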

  10. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  11. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  12. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    International Nuclear Information System (INIS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was

  13. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasselmann, S., E-mail: s.kasselmann@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Schitthelm, O. [Forschungszentrum Jülich, 52425 Jülich (Germany); Tantillo, F. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Scholthaus, S.; Rössel, C. [Forschungszentrum Jülich, 52425 Jülich (Germany); Allelein, H.-J. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany)

    2016-09-15

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains and therefore speeds up the calculation scheme. Highest priority has been given to the existence of a generic software interface as well as to easy handling by making use of XML files for the user input. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach.
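
    The graph-theoretic idea, visiting nuclides in an order where every parent is processed before its daughters, can be sketched with Python's standard-library topological sorter. The three-nuclide chain and its rate constants below are invented, and a production solver would use analytical Bateman or matrix-exponential steps rather than the explicit Euler used here for brevity.

```python
# A minimal sketch: topologically order an acyclic transmutation chain,
# then integrate each nuclide knowing its parents are already ordered.
from graphlib import TopologicalSorter

parents = {"A": [], "B": ["A"], "C": ["B"]}        # node -> its parents
order = list(TopologicalSorter(parents).static_order())   # ['A', 'B', 'C']

lam = {"A": 1e-5, "B": 5e-6, "C": 1e-7}   # effective removal constants, 1/s
N = {"A": 1.0, "B": 0.0, "C": 0.0}        # relative number densities
dt, steps = 3600.0, 24                    # one day in hourly Euler steps
for _ in range(steps):
    feed = {n: sum(lam[p] * N[p] for p in parents[n]) for n in order}
    for n in order:
        N[n] += dt * (feed[n] - lam[n] * N[n])
print("processing order:", order)
print({k: round(v, 6) for k, v in N.items()})
```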

  14. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    International Nuclear Information System (INIS)

    Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.

    2014-01-01

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains. This makes it possible to always deal with the smallest nuclide system for the problem of interest. Highest priority has been given to the existence of a generic software interface as well as to easy handling by making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)

  15. Study on shielding design method of radiation streaming in a tokamak-type DT fusion reactor based on Monte Carlo calculation

    International Nuclear Information System (INIS)

    Sato, Satoshi

    2003-09-01

    In a tokamak-type DT nuclear fusion reactor, there are various types of slits and ducts in the blanket and the vacuum vessel. The helium production at the rewelding locations of the blanket and the vacuum vessel and the nuclear properties of the super-conductive TF coil, e.g. the nuclear heating rate in the coil winding pack, are enhanced by the radiation streaming through these slits and ducts, and they are a critical concern in the shielding design. The decay gamma-ray dose rate around the ducts penetrating the blanket and the vacuum vessel is also enhanced by the radiation streaming through the ducts, and it is also a critical concern from the viewpoint of human access to the cryostat during maintenance. In order to evaluate these nuclear properties with good accuracy, a three-dimensional Monte Carlo calculation is required, but it demands a long calculation time. Therefore, the development of an effective, simple design evaluation method for radiation streaming is substantially important. This study aims to establish a systematic evaluation method for the nuclear properties of the blanket, the vacuum vessel and the Toroidal Field (TF) coil, taking into account the radiation streaming through various types of slits and ducts, based on three-dimensional Monte Carlo calculations using the MCNP code, and for the decay gamma-ray dose rates around the ducts. The present thesis describes three topics in five chapters as follows: 1) In Chapter 2, the results calculated by the Monte Carlo code MCNP are compared with those by the Sn code DOT3.5 for the radiation streaming in the tokamak-type nuclear fusion reactor, in order to validate the results of the Sn calculation. From this comparison, the uncertainties of the Sn calculation results coming from the ray effect and from the approximation of the geometry are investigated to determine whether a two-dimensional Sn calculation can be applied instead of the Monte Carlo calculation. Through the study, it can be concluded that the

  16. Accuracy and Radiation Dose of CT-Based Attenuation Correction for Small Animal PET: A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Yang, Ching-Ching; Chan, Kai-Chieh

    2013-06-01

    Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by the scanning procedures was also discussed, in order to minimize the biological damage generated by radiation exposure due to PET/CT scanning. A small animal PET/CT system was modeled based on Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, including the bilinear scaling method, the dual-energy method and the hybrid method, which combines the kVp conversion and the dual-energy method, were investigated comparatively by assessing the accuracy of estimating the linear attenuation coefficient at 511 keV and the bias introduced into PET quantification results due to CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results followed a similar trend as that of the estimation of linear attenuation coefficients, whereas the differences between the three methods are more obvious in the estimation of linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29 and 45.58 mGy for the 50-kVp scan and between 6.61 and 39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
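
    Of the three energy mapping methods named above, the bilinear scaling flavour is the simplest to sketch: one linear segment below water (air/water mixtures) and a steeper one above (water/bone mixtures). The 511 keV coefficients below are illustrative round numbers, not the record's calibration.

```python
# A minimal sketch of bilinear HU -> mu(511 keV) conversion for
# CT-based attenuation correction.
import numpy as np

MU_WATER_511 = 0.096   # 1/cm
MU_BONE_511 = 0.172    # 1/cm, cortical-bone-like (assumed)

def mu_511_from_hu(hu):
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        MU_WATER_511 * (1.0 + hu / 1000.0),                        # air-water
        MU_WATER_511 + hu / 1000.0 * (MU_BONE_511 - MU_WATER_511), # water-bone
    )
    return np.clip(mu, 0.0, None)

print(mu_511_from_hu([-1000, -500, 0, 500, 1000]))
```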

  17. Hidden shift of the ionome of plants exposed to elevated CO₂ depletes minerals at the base of human nutrition.

    Science.gov (United States)

    Loladze, Irakli

    2014-05-07

    Mineral malnutrition stemming from undiversified plant-based diets is a top global challenge. In C3 plants (e.g., rice, wheat), elevated concentrations of atmospheric carbon dioxide (eCO2) reduce protein and nitrogen concentrations, and can increase the total non-structural carbohydrates (TNC; mainly starch, sugars). However, contradictory findings have obscured the effect of eCO2 on the ionome-the mineral and trace-element composition-of plants. Consequently, CO2-induced shifts in plant quality have been ignored in the estimation of the impact of global change on humans. This study shows that eCO2 reduces the overall mineral concentrations (-8%, 95% confidence interval: -9.1 to -6.9, p < 0.00001) and increases TNC:minerals > carbon:minerals in C3 plants. The meta-analysis of 7761 observations, including 2264 observations at state of the art FACE centers, covers 130 species/cultivars. The attained statistical power reveals that the shift is systemic and global. Its potential to exacerbate the prevalence of 'hidden hunger' and obesity is discussed. DOI: http://dx.doi.org/10.7554/eLife.02245.001. Copyright © 2014, Loladze.

  18. Hidden shift of the ionome of plants exposed to elevated CO2 depletes minerals at the base of human nutrition

    Science.gov (United States)

    Loladze, Irakli

    2014-01-01

    Mineral malnutrition stemming from undiversified plant-based diets is a top global challenge. In C3 plants (e.g., rice, wheat), elevated concentrations of atmospheric carbon dioxide (eCO2) reduce protein and nitrogen concentrations, and can increase the total non-structural carbohydrates (TNC; mainly starch, sugars). However, contradictory findings have obscured the effect of eCO2 on the ionome—the mineral and trace-element composition—of plants. Consequently, CO2-induced shifts in plant quality have been ignored in the estimation of the impact of global change on humans. This study shows that eCO2 reduces the overall mineral concentrations (−8%, 95% confidence interval: −9.1 to −6.9, p < 0.00001) and increases TNC:minerals > carbon:minerals in C3 plants. The meta-analysis of 7761 observations, including 2264 observations at state of the art FACE centers, covers 130 species/cultivars. The attained statistical power reveals that the shift is systemic and global. Its potential to exacerbate the prevalence of ‘hidden hunger’ and obesity is discussed. DOI: http://dx.doi.org/10.7554/eLife.02245.001 PMID:24867639

  19. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-d chord-based transport techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
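
    The chord-based technique described here hinges on inverse-CDF sampling of chord-length distributions. A minimal sketch; the tabulated CDF points are invented placeholders, not measured trabecular or marrow-cavity data.

```python
# A minimal sketch of chord-length sampling by inverting an empirical
# cumulative distribution with linear interpolation.
import numpy as np

rng = np.random.default_rng(7)

# Empirical chord-length CDF: (length in um, cumulative probability)
lengths = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
cdf = np.array([0.0, 0.25, 0.55, 0.80, 0.95, 1.0])

def sample_chords(n):
    """Inverse-CDF sampling: map uniform deviates through the CDF table."""
    u = rng.random(n)
    return np.interp(u, cdf, lengths)

chords = sample_chords(100_000)
print("mean sampled chord length: %.1f um" % chords.mean())
```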

  20. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi'an Jiaotong University, Xi'an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
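
    Step 4 of the pipeline, interpolating scatter signals simulated at sparse gantry angles to every projection angle, can be sketched as per-pixel 1D interpolation. The array shapes and the random stand-in for the MC scatter output below are placeholders.

```python
# A minimal sketch of angular interpolation of sparse-view scatter
# estimates to all view angles, independently per detector pixel.
import numpy as np

n_views, n_u, n_v = 360, 64, 48
angles = np.linspace(0.0, 360.0, n_views, endpoint=False)
sparse_idx = np.arange(0, n_views, 12)                    # ~30 simulated views
rng = np.random.default_rng(3)
scatter_sparse = rng.random((sparse_idx.size, n_u, n_v))  # stand-in MC output

flat = scatter_sparse.reshape(sparse_idx.size, -1)
scatter_all = np.empty((n_views, n_u * n_v))
for p in range(flat.shape[1]):                            # per-pixel interpolation
    scatter_all[:, p] = np.interp(angles, angles[sparse_idx], flat[:, p],
                                  period=360.0)
scatter_all = scatter_all.reshape(n_views, n_u, n_v)
print(scatter_all.shape)
```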

  1. SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States); Pinto, M; Dedes, G; Parodi, K [Ludwig-Maximilians-Universitaet Muenchen, Garching / Munich (Germany)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supported transport simulation in voxelized geometry with kinetic energies up to 450 MeV/u. A Class II condensed-history algorithm was employed for charged-particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross-section and yield data from Geant4. They were transported via the condensed-history scheme. goCMC supported scoring various quantities of interest, e.g. physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20∼100 seconds to simulate 10^7 carbon ions on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60∼100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
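
    The Bethe-Bloch stopping power mentioned above can be sketched directly; this is the textbook formula without shell or density corrections, evaluated for a carbon ion in water, and is not a stand-in for the full goCMC physics (straggling, multiple scattering and nuclear products are omitted).

```python
# A minimal sketch of Bethe-Bloch electronic stopping power for carbon
# ions in liquid water (rho = 1 g/cm^3), no corrections.
import numpy as np

ME_C2 = 0.511                      # electron rest energy, MeV
K = 0.307075                       # coefficient, MeV cm^2/mol
Z_PROJ, A_ION = 6, 12.0            # carbon ion charge and mass number
Z_OVER_A, I_WATER = 0.555, 75e-6   # water: Z/A per gram and I in MeV
M_ION = 11177.9                    # carbon-12 rest energy, MeV

def stopping_power(E_per_u):
    """-dE/dx in MeV/cm as a function of kinetic energy in MeV/u."""
    E = E_per_u * A_ION
    gamma = 1.0 + E / M_ION
    beta2 = 1.0 - 1.0 / gamma**2
    wmax = 2.0 * ME_C2 * beta2 * gamma**2          # heavy-projectile limit
    log_term = np.log(2.0 * ME_C2 * beta2 * gamma**2 * wmax / I_WATER**2)
    return K * Z_PROJ**2 * Z_OVER_A / beta2 * (0.5 * log_term - beta2)

for e in (100.0, 250.0, 400.0):                    # MeV/u
    print(f"{e:5.0f} MeV/u: -dE/dx = {stopping_power(e):6.1f} MeV/cm")
```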

  2. MO-FG-202-08: Real-Time Monte Carlo-Based Treatment Dose Reconstruction and Monitoring for Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Gu, X; Tan, J; Hassan-Rezaeian, N; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Graves, Y [University of California, San Diego, La Jolla, CA (United States)

    2016-06-15

    Purpose: This proof-of-concept study is to develop a real-time Monte Carlo (MC) based treatment-dose reconstruction and monitoring system for radiotherapy, especially for treatments with complicated delivery, to catch treatment delivery errors at the earliest possible opportunity and interrupt the treatment only when an unacceptable dosimetric deviation from our expectation occurs. Methods: First, an offline scheme is launched to pre-calculate the expected dose from the treatment plan, used as ground truth for real-time monitoring later. Then an online scheme with three concurrent threads is launched during treatment delivery, to reconstruct and monitor the patient dose in a temporally resolved fashion in real time. Thread T1 acquires the machine status every 20 ms to calculate and accumulate a fluence map (FM). Once our accumulation threshold is reached, T1 transfers the FM to T2 for dose reconstruction and starts to accumulate a new FM. A GPU-based MC dose calculation is performed on T2 when the MC dose engine is ready and a new FM is available. The reconstructed instantaneous dose is directed to T3 for dose accumulation and real-time visualization. Multiple dose metrics (e.g. maximum and mean dose for targets and organs) are calculated from the current accumulated dose and compared with the pre-calculated expected values. Once the discrepancies go beyond our tolerance, an error message will be sent to interrupt the treatment delivery. Results: A VMAT head-and-neck patient case was used to test the performance of our system. Real-time machine status acquisition was simulated here. The differences between the actual dose metrics and the expected ones were 0.06%–0.36%, indicating an accurate delivery. A ∼10 Hz frequency of dose reconstruction and monitoring was achieved, with 287.94 s of online computation time compared to 287.84 s of treatment delivery time. Conclusion: Our study has demonstrated the feasibility of computing a dose distribution in a temporally resolved fashion
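
    The three-thread layout can be sketched as a producer/consumer pipeline with Python's standard queue and threading modules. The fluence-map size, the scalar stand-in for the GPU MC dose engine, and the tolerance below are all invented placeholders.

```python
# A minimal sketch of the T1/T2/T3 pipeline: acquire -> dose -> monitor.
import queue
import threading
import numpy as np

fm_q, dose_q = queue.Queue(), queue.Queue()
DONE = object()

def t1_acquire(n_samples=100, flush_every=20):
    """Accumulate a fluence map (FM) from periodic machine-status samples."""
    fm = np.zeros(16)
    for i in range(n_samples):
        fm[np.random.randint(16)] += 1.0      # stand-in 20 ms status sample
        if (i + 1) % flush_every == 0:
            fm_q.put(fm)                      # hand the FM to T2
            fm = np.zeros(16)
    fm_q.put(DONE)

def t2_dose():
    """Turn each completed FM into a (stand-in) instantaneous dose."""
    while (fm := fm_q.get()) is not DONE:
        dose_q.put(0.01 * fm)                 # placeholder for GPU MC dose
    dose_q.put(DONE)

def t3_monitor(expected_total=1.0, tolerance=0.05):
    """Accumulate dose and compare a metric with the precomputed value."""
    total = np.zeros(16)
    while (d := dose_q.get()) is not DONE:
        total += d
    dev = abs(total.sum() - expected_total) / expected_total
    print("deviation %.1f%% -> %s" % (100 * dev,
          "OK" if dev <= tolerance else "INTERRUPT"))

threads = [threading.Thread(target=f) for f in (t1_acquire, t2_dose, t3_monitor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```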

  3. Depleted uranium: A DOE management guide

    International Nuclear Information System (INIS)

    1995-10-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF6) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in a concrete-based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF6 to an oxide aggregate that is used in concrete to make dry storage casks

  4. Capital expenditure and depletion

    International Nuclear Information System (INIS)

    Rech, O.; Saniere, A.

    2003-01-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  5. Capital expenditure and depletion

    Energy Technology Data Exchange (ETDEWEB)

    Rech, O.; Saniere, A

    2003-07-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  6. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle of ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland, Trinidad Head, California and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although
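
    The quasi-random sampling setup can be sketched with a scrambled Sobol sequence scaled to per-parameter uncertainty ranges. The parameter names and ranges below are invented placeholders; the record used 98 members, while the sketch rounds up to 128, a power of two that Sobol sampling prefers.

```python
# A minimal sketch of quasi-random sampling of uncertain model inputs
# for a Monte Carlo ensemble of chemistry-transport model runs.
import numpy as np
from scipy.stats import qmc

params = {                             # assumed multiplicative scaling ranges
    "NOx_emissions":      (0.7, 1.3),
    "isoprene_emissions": (0.5, 1.5),
    "O3_dry_deposition":  (0.8, 1.2),
    "photolysis_rates":   (0.9, 1.1),
}
low = np.array([lo for lo, hi in params.values()])
high = np.array([hi for lo, hi in params.values()])

sampler = qmc.Sobol(d=len(params), scramble=True, seed=0)
u = sampler.random_base2(m=7)          # 2^7 = 128 quasi-random members
multipliers = qmc.scale(u, low, high)  # one scaling per run and parameter
print(multipliers.shape, multipliers[0].round(3))
```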

  7. Fully Depleted Charge-Coupled Devices

    International Nuclear Information System (INIS)

    Holland, Stephen E.

    2006-01-01

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 μm, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications

  8. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, and the direct and weighted statistical estimation Monte Carlo methods are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculating efficiency
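
    The analog (direct) baseline against which such estimators are compared can be sketched in a few lines: simulate component failure times, apply the system failure criterion, and report the binomial standard error, whose growth for rare failures is what motivates the variance-reduced estimators. The toy 2-out-of-3 system and failure rates below are invented.

```python
# A minimal sketch of direct (analog) Monte Carlo unreliability
# estimation for a hypothetical 2-out-of-3 system.
import numpy as np

rng = np.random.default_rng(5)
n, mission_time = 200_000, 1000.0        # trials, hours
lam = np.array([2e-4, 2e-4, 2e-4])       # component failure rates, 1/h (assumed)

t_fail = rng.exponential(1.0 / lam, size=(n, 3))
failed = t_fail < mission_time           # component failed within the mission
system_down = failed.sum(axis=1) >= 2    # 2-out-of-3 failure criterion

q = system_down.mean()
se = np.sqrt(q * (1 - q) / n)            # binomial standard error
print(f"unreliability = {q:.5f} +/- {se:.5f}")
```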

  9. Posture-specific phantoms representing female and male adults in Monte Carlo-based simulations for radiological protection

    Science.gov (United States)

    Cassola, V. F.; Kramer, R.; Brayner, C.; Khoury, H. J.

    2010-08-01

    Does the posture of a patient have an effect on the organ and tissue absorbed doses caused by x-ray examinations? This study aims to find the answer to this question, based on Monte Carlo (MC) simulations of commonly performed x-ray examinations using adult phantoms modelled to represent humans in the standing as well as in the supine posture. The recently published FASH (female adult mesh) and MASH (male adult mesh) phantoms have the standing posture. In a first step, both phantoms were updated with respect to their anatomy: glandular tissue was separated from adipose tissue in the breasts, visceral fat was separated from subcutaneous fat, cartilage was segmented in the ears, nose and around the thyroid, and the mass of the right lung is now 15% greater than that of the left lung. The updated versions are called FASH2_sta and MASH2_sta (sta = standing). Taking into account the gravitational effects on organ position and fat distribution, supine versions of the FASH2 and MASH2 phantoms have been developed in this study and called FASH2_sup and MASH2_sup. MC simulations of external whole-body exposure to monoenergetic photons and partial-body exposure to x-rays have been made with the standing and supine FASH2 and MASH2 phantoms. For external whole-body exposure in AP and PA projection with photon energies above 30 keV, the effective dose did not change by more than 5% when the posture changed from standing to supine or vice versa. Apart from that, the supine posture is quite rare in occupational radiation protection from whole-body exposure. However, in x-ray diagnosis the supine posture is frequently used for patients submitted to examinations. Changes in organ absorbed doses of up to 60% were found in simulations of chest and abdomen radiographs when the posture changed from standing to supine or vice versa. A further increase of the differences between posture-specific organ and tissue absorbed doses with increasing whole-body mass is to be expected.

  10. Monte Carlo simulation of beam characteristics from small fields based on TrueBeam flattening-filter-free mode

    International Nuclear Information System (INIS)

    Feng, Zhongsu; Yue, Haizhen; Zhang, Yibao; Wu, Hao; Cheng, Jinsheng; Su, Xu

    2016-01-01

    Through Monte Carlo (MC) simulation of the 6 and 10 MV flattening-filter-free (FFF) beams from a Varian TrueBeam accelerator, this study aims to find the best incident electron distribution for further studying the small-field characteristics of these beams. By incorporating the training materials of Varian on the geometry and material parameters of the TrueBeam Linac head, the 6 and 10 MV FFF beams were modelled using the BEAMnrc and DOSXYZnrc codes, where the percentage depth dose (PDD) and off-axis ratio (OAR) curves of fields ranging from 4 × 4 to 40 × 40 cm² were simulated for both energies by adjusting the incident beam energy, radial intensity distribution and angular spread, respectively. The beam quality and relative output factor (ROF) were calculated. The simulations and measurements were compared using the Gamma analysis method provided by the Verisoft program (PTW, Freiburg, Germany), based on which the optimal MC model input parameters were selected and further used to investigate the beam characteristics of small fields. The Full Width at Half Maximum (FWHM), mono-energetic energy and angular spread of the resultant incident Gaussian radial intensity electron distribution were 0.75 mm, 6.1 MeV and 0.9° for the nominal 6 MV FFF beam, and 0.7 mm, 10.8 MeV and 0.3° for the nominal 10 MV FFF beam, respectively. The simulations were generally comparable to the measurements. Gamma criteria of 1 mm/1% (local dose) were met by all PDDs of fields larger than 1 × 1 cm² and by all OARs of fields no larger than 20 × 20 cm²; otherwise, criteria of 1 mm/2% were fulfilled. Our MC-simulated ROFs agreed well with the measured ROFs of the various field sizes (discrepancies of less than 1%), except for the 1 × 1 cm² field. The MC simulation agrees well with the measurement, and the proposed model parameters can be used clinically for further dosimetric studies of the 6 and 10 MV FFF beams.
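
    The Gamma comparison used for these PDD and OAR curves can be sketched in one dimension; the following is a schematic local-dose implementation (not the Verisoft algorithm), assuming nonzero reference doses.

        import numpy as np

        def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=1.0, dd=0.01):
            # schematic 1-D gamma index with a local-dose criterion: for each
            # evaluated point, minimize the combined distance-to-agreement and
            # dose-difference metric over all reference points
            gammas = []
            for x, d in zip(eval_pos, eval_dose):
                space = ((ref_pos - x) / dta) ** 2
                dose = ((ref_dose - d) / (dd * ref_dose)) ** 2   # local normalization
                gammas.append(np.sqrt(space + dose).min())
            return np.array(gammas)

        # 1 mm / 1% criteria: a point passes where gamma <= 1, e.g.
        # g = gamma_1d(x_meas, d_meas, x_sim, d_sim); pass_rate = (g <= 1).mean()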

  11. Bathymetry and composition of Titan's Ontario Lacus derived from Monte Carlo-based waveform inversion of Cassini RADAR altimetry data

    Science.gov (United States)

    Mastrogiuseppe, M.; Hayes, A. G.; Poggiali, V.; Lunine, J. I.; Lorenz, R. D.; Seu, R.; Le Gall, A.; Notarnicola, C.; Mitchell, K. L.; Malaska, M.; Birch, S. P. D.

    2018-01-01

    Recently, the Cassini RADAR was used to sound hydrocarbon lakes and seas on Saturn's moon Titan. Since the initial discovery of echoes from the seabed of Ligeia Mare, the second largest liquid body on Titan, a dedicated radar processing chain has been developed to retrieve liquid depth and microwave absorptivity information from RADAR altimetry of Titan's lakes and seas. Herein, we apply this processing chain to altimetry data acquired over southern Ontario Lacus during Titan fly-by T49 in December 2008. The new signal processing chain adopts super-resolution techniques and dedicated taper functions to reveal the presence of reflections from Ontario's lakebed. Unfortunately, the extracted waveforms from T49 are often distorted due to signal saturation, owing to the extraordinarily strong specular reflections from the smooth lake surface. This distortion is a function of the saturation level and can introduce artifacts, such as signal precursors, which complicate data interpretation. We use a radar altimetry simulator to retrieve information from the saturated bursts and determine the liquid depth and loss tangent of Ontario Lacus. Received waveforms are represented using a two-layer model, where Cassini raw radar data are simulated in order to reproduce the effects of receiver saturation. A Monte Carlo-based approach along with a simulated waveform look-up table is used to retrieve parameters that are given as inputs to a parametric model which constrains the radio absorption of Ontario Lacus and retrieves information about the dielectric properties of the liquid. We retrieve a maximum depth of 50 m along the radar transect and a best-fit specific attenuation of the liquid equal to 0.2 ± 0.09 dB m⁻¹ that, when converted into loss tangent, gives tan δ = (7 ± 3) × 10⁻⁵. When combined with laboratory-measured cryogenic liquid alkane dielectric properties and the variable solubility of nitrogen in ethane-methane mixtures, the best-fit loss tangent is consistent with a
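
    The look-up-table retrieval step can be pictured as a misfit search over simulated two-layer waveforms; a minimal sketch (the names and the L2 misfit are assumptions, and a full retrieval would add Monte Carlo noise realizations of the measured burst to propagate uncertainties):

        import numpy as np

        def best_fit(measured, table):
            # table: dict mapping (depth_m, loss_tangent) -> simulated saturated
            # waveform sampled like `measured`; return the pair minimizing the L2 misfit
            return min(table, key=lambda p: np.sum((table[p] - measured) ** 2))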

  12. Monte Carlo based estimation of organ and effective doses to patients undergoing hysterosalpingography and retrograde urethrography fluoroscopy procedures

    Science.gov (United States)

    Ngaile, J. E.; Msaki, P. K.; Kazema, R. R.

    2018-04-01

    Contrast investigations of hysterosalpingography (HSG) and retrograde urethrography (RUG) fluoroscopy procedures remain the dominant diagnostic tools for the investigation of infertility in females and of urethral strictures in males, respectively, owing to the scarcity and high cost of services of alternative diagnostic technologies. In light of the radiological risks associated with contrast-based investigations of the genitourinary tract, there is a need to assess the magnitude of the radiation burden imparted to patients undergoing HSG and RUG fluoroscopy procedures in Tanzania. The air kerma area product (KAP), fluoroscopy time, number of images, organ dose and effective dose to patients undergoing HSG and RUG procedures were obtained from four hospitals. The KAP was measured using a flat transmission ionization chamber, while the organ and effective doses were estimated using knowledge of the patient characteristics, patient-related exposure parameters, geometry of examination, KAP and Monte Carlo calculations (PCXMC). The median values of KAP for the HSG and RUG were 2.2 Gy cm² and 3.3 Gy cm², respectively. The median organ doses in the present study for the ovaries, urinary bladder and uterus for the HSG procedures were 1.0 mGy, 4.0 mGy and 1.6 mGy, respectively, while those for the urinary bladder and testes for the RUG procedures were 3.4 mGy and 5.9 mGy, respectively. The median values of effective dose for the HSG and RUG procedures were 0.65 mSv and 0.59 mSv, respectively. The median values of effective dose per hospital for the HSG and RUG procedures had ranges of 1.6-2.8 mSv and 1.9-5.6 mSv, respectively, while the overall differences between individual effective doses across the four hospitals varied by factors of up to 22.0 and 46.7, respectively, for the HSG and RUG procedures. The proposed diagnostic reference levels (DRLs) for the HSG and RUG were, for KAP, 2.8 Gy cm² and 3.9 Gy cm²; for fluoroscopy time, 0.8 min and 0.9 min; and for number of images, 5 and 4.
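
    Schematically, the PCXMC-based estimate reduces, per projection, to scaling the measured KAP by a Monte Carlo-derived conversion coefficient; the coefficient below is purely illustrative, not a value from the study.

        # hypothetical illustration: effective dose from measured KAP and a
        # Monte Carlo (PCXMC-type) conversion coefficient for one projection
        kap = 2.2                # Gy cm^2 (median HSG value reported above)
        e_per_kap = 0.3          # mSv per Gy cm^2 -- assumed, exam- and field-specific
        print(f"E = {kap * e_per_kap:.2f} mSv")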

  13. High-resolution three-dimensional imaging of a depleted CMOS sensor using an edge Transient Current Technique based on the Two Photon Absorption process (TPA-eTCT)

    CERN Document Server

    García, Marcos Fernández; Echeverría, Richard Jaramillo; Moll, Michael; Santos, Raúl Montero; Moya, David; Pinto, Rogelio Palomo; Vila, Iván

    2016-01-01

    For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free-carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.

  14. High-resolution three-dimensional imaging of a depleted CMOS sensor using an edge Transient Current Technique based on the Two Photon Absorption process (TPA-eTCT)

    Energy Technology Data Exchange (ETDEWEB)

    García, Marcos Fernández; Sánchez, Javier González; Echeverría, Richard Jaramillo [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain); Moll, Michael [CERN, Organisation européenne pour la recherche nucléaire, CH-1211 Genève 23 (Switzerland); Santos, Raúl Montero [SGIker Laser Facility, UPV/EHU, Sarriena, s/n - 48940 Leioa-Bizkaia (Spain); Moya, David [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain); Pinto, Rogelio Palomo [Departamento de Ingeniería Electrónica, Escuela Superior de Ingenieros Universidad de Sevilla (Spain); Vila, Iván [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain)

    2017-02-11

    For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free-carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.

  15. Ozone-depleting Substances (ODS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...

  16. Risk-based determination of design pressure of LNG fuel storage tanks based on dynamic process simulation combined with Monte Carlo method

    International Nuclear Information System (INIS)

    Noh, Yeelyong; Chang, Kwangpil; Seo, Yutaek; Chang, Daejun

    2014-01-01

    This study proposes a new methodology that combines dynamic process simulation (DPS) and Monte Carlo simulation (MCS) to determine the design pressure of fuel storage tanks on LNG-fueled ships. Because the pressure of such tanks varies with time, DPS is employed to predict the pressure profile. Though equipment failure and subsequent repair affect transient pressure development, it is difficult to implement these features directly in the process simulation due to the randomness of the failure. To predict the pressure behavior realistically, MCS is combined with DPS. In MCS, discrete events are generated to create a lifetime scenario for a system. The combination of MCS with long-term DPS reveals the frequency of the exceedance pressure. The exceedance curve of the pressure provides risk-based information for determining the design pressure based on risk acceptance criteria, which may vary with different points of view. - Highlights: • The realistic operation scenario of the LNG FGS system is estimated by MCS. • In repeated MCS trials, the availability of the FGS system is evaluated. • The realistic pressure profile is obtained by the proposed methodology. • The exceedance curve provides risk-based information for determining design pressure
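
    The structure of the combined method can be sketched as follows; the failure rate, repair time and the linear pressure-rise stand-in for the DPS are all assumptions for illustration only.

        import numpy as np
        rng = np.random.default_rng(1)

        LIFE, LAM, MTTR = 20 * 8760.0, 1e-4, 48.0   # ship life (h), failure rate (1/h), mean repair (h)
        RISE = 0.02                                  # bar per outage hour -- toy stand-in for the DPS

        def peak_pressures(n_scenarios=10_000):
            peaks = []
            for _ in range(n_scenarios):             # one MCS lifetime scenario per trial
                t, peak = 0.0, 0.0
                while True:
                    t += rng.exponential(1 / LAM)    # time of next equipment failure
                    if t >= LIFE:
                        break
                    outage = rng.exponential(MTTR)   # repair duration
                    peak = max(peak, RISE * outage)  # tank pressure rise during the outage
                peaks.append(peak)
            return np.array(peaks)

        p = peak_pressures()
        grid = np.linspace(0.0, p.max(), 50)
        exceedance = [(p > x).mean() for x in grid]  # frequency of exceeding each pressure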

  17. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  18. PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations

    International Nuclear Information System (INIS)

    Tapiovaara, M.; Lakkisto, M.; Servomaa, A.

    1997-02-01

    The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)

  19. SU-E-T-256: Development of a Monte Carlo-Based Dose-Calculation System in a Cloud Environment for IMRT and VMAT Dosimetric Verification

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Y [Tokai University School of Medicine, Isehara, Kanagawa (Japan)

    2015-06-15

    Purpose: Intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) are techniques that are widely used for treating cancer due to better target coverage and critical-structure sparing. The increasing complexity of IMRT and VMAT plans leads to decreases in dose calculation accuracy. Monte Carlo simulations are the most accurate method for the determination of dose distributions in patients. However, the simulation settings for modeling an accurate treatment head are very complex and time consuming. The purpose of this work is to report our implementation of a simple Monte Carlo simulation system in a cloud-computing environment for dosimetric verification of IMRT and VMAT plans. Methods: Monte Carlo simulations of a Varian Clinac linear accelerator were performed using the BEAMnrc code, and dose distributions were calculated using the DOSXYZnrc code. Input files for the simulations were automatically generated from DICOM RT files by the developed web application. We therefore only need to upload the DICOM RT files through the web interface, and the simulations are run in the cloud. The calculated dose distributions were exported to RT Dose files that can be downloaded through the web interface. The accuracy of the calculated dose distributions was verified by dose measurements. Results: IMRT and VMAT simulations were performed, and good agreement was observed between measured and MC doses. Gamma analysis with 3% dose and 3 mm DTA criteria shows a mean gamma passing rate of 95% for the studied cases. Conclusion: A Monte Carlo-based dose calculation system has been successfully implemented in a cloud environment. The developed system can be used for independent dose verification of IMRT and VMAT plans in routine clinical practice. The system will also be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems. This work was supported by JSPS KAKENHI Grant Number 25861057.

  20. SU-E-T-256: Development of a Monte Carlo-Based Dose-Calculation System in a Cloud Environment for IMRT and VMAT Dosimetric Verification

    International Nuclear Information System (INIS)

    Fujita, Y

    2015-01-01

    Purpose: Intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) are techniques that are widely used for treating cancer due to better target coverage and critical-structure sparing. The increasing complexity of IMRT and VMAT plans leads to decreases in dose calculation accuracy. Monte Carlo simulations are the most accurate method for the determination of dose distributions in patients. However, the simulation settings for modeling an accurate treatment head are very complex and time consuming. The purpose of this work is to report our implementation of a simple Monte Carlo simulation system in a cloud-computing environment for dosimetric verification of IMRT and VMAT plans. Methods: Monte Carlo simulations of a Varian Clinac linear accelerator were performed using the BEAMnrc code, and dose distributions were calculated using the DOSXYZnrc code. Input files for the simulations were automatically generated from DICOM RT files by the developed web application. We therefore only need to upload the DICOM RT files through the web interface, and the simulations are run in the cloud. The calculated dose distributions were exported to RT Dose files that can be downloaded through the web interface. The accuracy of the calculated dose distributions was verified by dose measurements. Results: IMRT and VMAT simulations were performed, and good agreement was observed between measured and MC doses. Gamma analysis with 3% dose and 3 mm DTA criteria shows a mean gamma passing rate of 95% for the studied cases. Conclusion: A Monte Carlo-based dose calculation system has been successfully implemented in a cloud environment. The developed system can be used for independent dose verification of IMRT and VMAT plans in routine clinical practice. The system will also be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems. This work was supported by JSPS KAKENHI Grant Number 25861057.

  1. N-(sulfoethyl) iminodiacetic acid-based lanthanide coordination polymers: Synthesis, magnetism and quantum Monte Carlo studies

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Chen Wulin [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Zheng Jun [Center of Modern Experimental Technology, Anhui University, Hefei 230039 (China); Yu Huiyou [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Wang Jianguo, E-mail: jgw@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China)

    2012-08-15

    A series of lanthanide coordination polymers have been obtained through the hydrothermal reaction of N-(sulfoethyl) iminodiacetic acid (H₃SIDA) and Ln(NO₃)₃ (Ln = La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond length and dimensionality of compounds 1-4, revealing that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, the combination of magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd³⁺ ions for anti-anti and syn-anti carboxylate bridges are −1.0 × 10⁻³ and −5.0 × 10⁻³ cm⁻¹, respectively, which reveals a weak antiferromagnetic interaction in 4. - Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl) iminodiacetic acid were obtained under hydrothermal conditions and reveal weak antiferromagnetic coupling between two Gd³⁺ ions by quantum Monte Carlo studies. Highlights: • Four lanthanide coordination polymers of the H₃SIDA ligand were obtained. • Lanthanide ions play an important role in their structural diversity. • Magnetic measurements show that compound 4 features antiferromagnetic properties. • Quantum Monte Carlo studies reveal the coupling parameters of two Gd³⁺ ions.

  2. Validating a virtual source model based on the Monte Carlo method for profiles and percent depth dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications and has received particular attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modelling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator's head is necessary, and the supplier commonly does not provide all the required data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is a highly efficient method for particle generation and can provide accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for profiles and percent depth dose calculations. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. This current model is easy to build and test. (author)
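
    The particle-generation step of a VSM can be sketched as sampling photon origins from a Gaussian focal spot and aiming them into the collimated field; all parameters below are illustrative assumptions, not the validated model.

        import numpy as np
        rng = np.random.default_rng(0)

        def sample_vsm(n, sigma_mm=0.7, ssd_mm=1000.0, field_mm=400.0):
            # origins from a 2-D Gaussian focal spot in the source plane (z = 0)
            origin = np.column_stack([rng.normal(0.0, sigma_mm, n),
                                      rng.normal(0.0, sigma_mm, n),
                                      np.zeros(n)])
            # directions toward points drawn uniformly over the field at z = ssd_mm
            target = np.column_stack([rng.uniform(-field_mm / 2, field_mm / 2, n),
                                      rng.uniform(-field_mm / 2, field_mm / 2, n),
                                      np.full(n, ssd_mm)])
            d = target - origin
            return origin, d / np.linalg.norm(d, axis=1, keepdims=True)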

  3. Consequences of biome depletion

    International Nuclear Information System (INIS)

    Salvucci, Emiliano

    2013-01-01

    The human microbiome is an integral part of the superorganism together with its host, and they have co-evolved since the early days of the human species. The modification of the microbiome as a result of changes in the food and social habits of human beings throughout their life history has led to the emergence of many diseases. In contrast with the Darwinian view of nature as selfishness and competition, new holistic approaches are arising. Under these views, the reconstitution of the microbiome comes out as a fundamental therapy for emerging diseases related to biome depletion.

  4. Development of a Combined In Vitro Physiologically Based Kinetic (PBK) and Monte Carlo Modelling Approach to Predict Interindividual Human Variation in Phenol-Induced Developmental Toxicity.

    Science.gov (United States)

    Strikwold, Marije; Spenkelink, Bert; Woutersen, Ruud A; Rietjens, Ivonne M C M; Punt, Ans

    2017-06-01

    With our recently developed in vitro physiologically based kinetic (PBK) modelling approach, we could extrapolate in vitro toxicity data to human toxicity values applying PBK-based reverse dosimetry. Ideally, information on kinetic differences among human individuals within a population should be considered. In the present study, we demonstrated a modelling approach that integrated in vitro toxicity data, PBK modelling and Monte Carlo simulations to obtain insight into interindividual human kinetic variation and to derive chemical-specific adjustment factors (CSAFs) for phenol-induced developmental toxicity. The present study revealed that UGT1A6 is the primary enzyme responsible for the glucuronidation of phenol in humans, followed by UGT1A9. Monte Carlo simulations were performed taking into account interindividual variation in glucuronidation by these specific UGTs and in the oral absorption coefficient. Linking Monte Carlo simulations with PBK modelling, population variability in the maximum plasma concentration of phenol could be predicted. This approach provided a CSAF for interindividual variation of 2.0, which covers the 99th percentile of the population and is lower than the default safety factor of 3.16 for interindividual human kinetic differences. Dividing the dose-response curve data obtained with in vitro PBK-based reverse dosimetry by the CSAF provided a dose-response curve that reflects the consequences of the interindividual variability in phenol kinetics for the developmental toxicity of phenol. The strength of the presented approach is that it provides insight into the effect of interindividual variation in kinetics on phenol-induced developmental toxicity, based on only in vitro and in silico testing.
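
    The logic of the CSAF derivation can be sketched with a deliberately crude kinetic surrogate: sample interindividual enzyme and absorption parameters, map each individual to a Cmax, and take the 99th-percentile-to-median ratio. The distributions and the one-compartment proxy below are assumptions, not the paper's PBK model.

        import numpy as np
        rng = np.random.default_rng(42)

        N = 100_000
        # assumed lognormal interindividual variability (hypothetical parameters)
        cl = rng.lognormal(np.log(10.0), 0.4, N)    # UGT-mediated clearance, L/h
        ka = rng.lognormal(np.log(1.0), 0.2, N)     # oral absorption coefficient, 1/h

        cmax = ka / cl            # crude surrogate: Cmax rises with ka, falls with CL
        csaf = np.percentile(cmax, 99) / np.median(cmax)
        print(f"CSAF (99th percentile / median) = {csaf:.2f}")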

  5. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • The benchmarks cover a wide range of enrichments, burnups, cooling times and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII-based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty.
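
    The benchmarked quantity, the depletion reactivity decrement, is simple to state; a minimal sketch with hypothetical k-eff values:

        # reactivity decrement (pcm) between a fresh and a depleted lattice
        def reactivity_decrement_pcm(k_fresh, k_depleted):
            rho = lambda k: (k - 1.0) / k
            return (rho(k_fresh) - rho(k_depleted)) * 1e5

        print(reactivity_decrement_pcm(1.15, 1.05))   # hypothetical k-eff values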

  6. State of the art of Monte Carlo technics for reliable activated waste evaluations

    International Nuclear Information System (INIS)

    Culioli, Matthieu; Chapoutier, Nicolas; Barbier, Samuel; Janski, Sylvain

    2016-01-01

    This paper presents the calculation scheme used in many studies to assess the activity inventory of French shutdown reactors (including pressurized water reactors, heavy water reactors, sodium-cooled fast reactors and natural uranium gas-cooled (UNGG) reactors). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods and coupled depletion calculations on 3D meshes. All these techniques offer efficient and reliable evaluations on large-scale models with a high level of detail, reducing the risks of underestimation or excessive conservatism. (authors)

  7. Synthesis of novel fluorene-based two-photon absorbing molecules and their applications in optical data storage, microfabrication, and stimulated emission depletion

    Science.gov (United States)

    Yanez, Ciceron

    2009-12-01

    Two-photon absorption (2PA) has been used for a number of scientific and technological applications, exploiting the fact that the 2PA probability is directly proportional to the square of the incident light intensity (while one-photon absorption bears a linear relation to the incident light intensity). This intrinsic property of 2PA leads to 3D spatial localization, important in fields such as optical data storage, fluorescence microscopy, and 3D microfabrication. The spatial confinement that 2PA enables has been used to induce photochemical and photophysical events in increasingly smaller volumes and has allowed nonlinear, 2PA-based technologies to reach sub-diffraction-limit resolutions. The primary focus of this dissertation is the development of novel, efficient 2PA, fluorene-based molecules to be used either as photoacid generators (PAGs) or as fluorophores. A second aim is to develop more effective methods of synthesizing these compounds. As a third and final objective, the new molecules were used to develop a write-once-read-many (WORM) optical data storage system and stimulated emission depletion probes for bioimaging. In Chapter I, the microwave-assisted synthesis of triarylsulfonium salt photoacid generators (PAGs) from their diphenyliodonium counterparts is reported. The microwave-assisted synthesis of these novel sulfonium salts afforded reaction times 90 to 420 times faster than conventional thermal conditions, with photoacid quantum yields of the new sulfonium PAGs ranging from 0.01 to 0.4. These PAGs were used to develop a fluorescence-readout-based, nonlinear three-dimensional (3D) optical data storage system (Chapter II). In this system, writing was achieved by acid generation upon two-photon absorption (2PA) of a PAG (at 710 or 730 nm). Readout was then performed by interrogating two-photon absorbing dyes, after protonation, at 860 nm. Two-photon recording and readout of voxels was demonstrated in five and eight consecutive, crosstalk-free layers within a

  8. Microscopic to macroscopic depletion model development for FORMOSA-P

    International Nuclear Information System (INIS)

    Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.

    1996-01-01

    Microscopic depletion has been gaining popularity with regard to employment in reactor core nodal calculations, mainly attributed to the superiority of microscopic depletion in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced design efforts to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done for both fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort compared with macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic depletion level accuracy without substantial additional computing resources
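
    The collapse from microscopic to macroscopic data amounts to number-density weighting of the microscopic cross sections tracked during the microscopic depletion; a minimal sketch with illustrative one-group values:

        # collapse microscopic cross sections (barns) and nuclide number densities
        # (atoms/barn-cm) into a one-group macroscopic cross section (1/cm)
        def collapse(number_densities, micro_xs):
            return sum(number_densities[iso] * micro_xs[iso] for iso in number_densities)

        N = {"U235": 7.2e-4, "U238": 2.2e-2, "Pu239": 1.1e-4}   # illustrative only
        sigma_a = {"U235": 60.0, "U238": 0.9, "Pu239": 100.0}   # illustrative only
        print(collapse(N, sigma_a), "cm^-1")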

  9. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma

    Energy Technology Data Exchange (ETDEWEB)

    Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang [Universitaetsklinikum Koeln (Germany). Klinik und Poliklinik fuer Nuklearmedizin; Pietrzyk, Uwe [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Neurowissenschaften und Medizin (INM-4); Wuppertal Univ. (Germany). Fachbereich C - Physik

    2011-07-01

    The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high-energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in the ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: for an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: for an uptake ratio of 20 between nodule and rest of thyroid, up to 29% of the total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this
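
    The quantitative step reduces to count densities in projected ROI masks; a minimal sketch (array names are assumed):

        import numpy as np

        def count_density_ratio(image, roi_nodule, roi_lobe):
            # image: 2-D array of simulated counts; ROIs: boolean masks obtained by
            # perpendicular projection of the phantom (avoids user-drawn bias)
            dens_nodule = image[roi_nodule].sum() / roi_nodule.sum()
            dens_lobe = image[roi_lobe].sum() / roi_lobe.sum()
            return dens_nodule / dens_lobe   # proxy for the nodule/lobe uptake ratio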

  10. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma.

    Science.gov (United States)

    Hammes, Jochen; Pietrzyk, Uwe; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang

    2011-12-01

    The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high-energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms "Adult Female" (AF, 16 ml thyroid) and "Adult Male" (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in the ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: for an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: for an uptake ratio of 20 between nodule and rest of thyroid, up to 29% of the total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this effect is smaller: 9-11% (AF

  11. GATE based Monte Carlo simulation of planar scintigraphy to estimate the nodular dose in radioiodine therapy for autonomous thyroid adenoma

    International Nuclear Information System (INIS)

    Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang; Pietrzyk, Uwe; Wuppertal Univ.

    2011-01-01

    The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high-energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in the ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: for an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: for an uptake ratio of 20 between nodule and rest of thyroid, up to 29% of the total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this effect is smaller: 9

  12. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    Science.gov (United States)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new, simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine, and there is no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample, provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
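
    The sample-construction idea, grow each configuration from scratch and accept ion pairs only while every ion's energy stays within temperature-set bounds, can be sketched as follows (the box size, energy bound and reduced units are assumptions, not the paper's parameters):

        import random, math

        def build_sample(n_pairs, box=10.0, e_max=1.5):
            # one pristine, uncorrelated sample: no Markov chain is involved
            ions = []                                   # (x, y, z, charge)
            def e_of(p, others):
                return sum(p[3] * q[3] / math.dist(p[:3], q[:3]) for q in others)
            while len(ions) < 2 * n_pairs:
                pair = [tuple(random.uniform(0.0, box) for _ in range(3)) + (s,)
                        for s in (+1, -1)]              # neutral +/- ion pair
                trial = ions + pair
                # accept the pair only if every ion's Coulomb energy stays bounded
                if all(abs(e_of(p, [q for q in trial if q is not p])) < e_max
                       for p in trial):
                    ions.extend(pair)
            return ions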

  13. Combining the auxin-inducible degradation system with CRISPR/Cas9-based genome editing for the conditional depletion of endogenous Drosophila melanogaster proteins.

    Science.gov (United States)

    Bence, Melinda; Jankovics, Ferenc; Lukácsovich, Tamás; Erdélyi, Miklós

    2017-04-01

    Inducible protein degradation techniques have considerable advantages over classical genetic approaches, which generate loss-of-function phenotypes at the gene or mRNA level. The plant-derived auxin-inducible degradation system (AID) is a promising technique which enables the degradation of target proteins tagged with the AID motif in nonplant cells. Here, we present a detailed characterization of this method employed during the adult oogenesis of Drosophila. Furthermore, with the help of CRISPR/Cas9-based genome editing, we improve the utility of the AID system for the conditional elimination of endogenously expressed proteins. We demonstrate that the AID system induces efficient and reversible depletion of maternally provided proteins both in the ovary and in the early embryo. Moreover, the AID system provides fine spatiotemporal control of protein degradation and allows for the generation of different levels of protein knockdown in a well-regulated manner. These features of the AID system enable the unraveling of the discrete phenotypes of genes with highly complex functions. We utilized this system to generate a conditional loss-of-function allele which allows for the specific degradation of the Vasa protein without affecting its alternative splice variant (solo) or the vasa intronic gene (vig). With the help of this special allele, we demonstrate that a dramatic decrease of the Vasa protein in the vitellarium does not influence the completion of oogenesis or the establishment of proper anteroposterior and dorsoventral polarity in the developing oocyte. Our study suggests that both the localization and the translation of gurken mRNA in the vitellarium are independent of Vasa.

  14. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding core reactivity characteristics, including fuel depletion effects; core power/flux distributions; and core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013), which documented a benchmark exercise for UO₂ fuel rods. This report documents a complementary benchmark exercise that focused on PuO₂/UO₂ Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the central zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.

  15. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Qin, N; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Peeler, C [UT MD Anderson Cancer Center, Houston, TX (United States); Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using the Monte Carlo Damage Simulation (MCDS) code. Results: Ranges computed by the two codes agreed within 1 mm. The physical dose difference was less than 2.5% at the Bragg peak. RBE-weighted dose agreed within 5% at the Bragg peak. Differences in microdosimetric quantities such as dose-average lineal energy transfer and specific energy were <10%. The simulation time per source particle with FLUKA was 0.0018 s, while gPMC was ∼600 times faster. Conclusion: Physical doses computed by FLUKA and gPMC were in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.

  16. WE-DE-201-05: Evaluation of a Windowless Extrapolation Chamber Design and Monte Carlo Based Corrections for the Calibration of Ophthalmic Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J; Culberson, W; DeWerd, L [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); Soares, C [NIST (retired), Gaithersburg, MD (United States)

    2016-06-15

    Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo-based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a ⁹⁰Sr/⁹⁰Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and the collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle transport code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc-simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experiment results suggest that an entrance window is not needed in order for an extrapolation
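
    The measurement reduction itself is compact: fit the current-versus-gap data, take the slope as the gap approaches zero, and apply a Bragg-Gray conversion; in this sketch the electrode area and stopping-power ratio are assumed values, not the paper's.

        import numpy as np

        W_OVER_E = 33.97       # J/C, mean energy per ion pair in dry air
        RHO_AIR = 1.197e-3     # g/cm^3 (approx., 22 C and 101.3 kPa)
        AREA = 0.1963          # cm^2 collecting-electrode area -- assumed geometry
        S_W_AIR = 1.112        # water/air stopping-power ratio -- assumed value

        def dose_rate_to_water(gaps_cm, currents_A):
            # slope dI/dd as the air gap extrapolates to zero, divided by the air
            # mass per unit gap, gives dose rate to air; scale to water (Bragg-Gray)
            slope = np.polyfit(gaps_cm, currents_A, 1)[0]          # A per cm
            d_air = slope * W_OVER_E / (RHO_AIR * 1e-3 * AREA)     # Gy/s
            return d_air * S_W_AIR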

  17. Riddle of depleted uranium

    International Nuclear Information System (INIS)

    Hussein, A.S.

    2005-01-01

    Depleted Uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. Potentially, DU has both chemical and radiological toxicity, with two important target organs being the kidney and the lungs. DU is made into a metal and, due to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. With the use of DU over recent years, reports have appeared in the press on health hazards that are alleged to be due to DU. In this paper the properties, applications, and potential environmental and health effects of DU are briefly reviewed.

  18. Local condensate depletion at trap center under strong interactions

    Science.gov (United States)

    Yukalov, V. I.; Yukalova, E. P.

    2018-04-01

    Cold trapped Bose-condensed atoms, interacting via hard-sphere repulsive potentials, are considered. Simple mean-field approximations show that the condensate distribution inside a harmonic trap always has the shape of a hump, with the maximum condensate density occurring at the trap center. However, Monte Carlo simulations at high density and strong interactions display condensate depletion at the trap center. An explanation of this effect of local condensate depletion at the trap center is suggested in the framework of the self-consistent theory of Bose-condensed systems. The depletion is shown to be due to the existence of the anomalous average that takes pair correlations into account and appears in systems with broken gauge symmetry.

  19. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  20. The Toxicity of Depleted Uranium

    OpenAIRE

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...

  1. Monte Carlo simulation of VHTR particle fuel with chord length sampling

    International Nuclear Information System (INIS)

    Ji, W.; Martin, W. R.

    2007-01-01

    The Very High Temperature Gas-Cooled Reactor (VHTR) poses a problem for neutronic analysis due to the double heterogeneity introduced by the particle fuel and either the fuel compacts, in the case of the prismatic block reactor, or the fuel pebbles, in the case of the pebble bed reactor. Direct Monte Carlo simulation has been used in recent years to analyze these VHTR configurations but is computationally challenged when space-dependent phenomena are considered, such as depletion or temperature feedback. As an alternative approach, we have considered chord length sampling to reduce the computational burden of the Monte Carlo simulation. We have improved on an existing method called 'limited chord length sampling' and have used it to analyze stochastic media representative of either pebble bed or prismatic VHTR fuel geometries. Based on the assumption that the PDF has an exponential form, a theoretical chord length distribution is derived and shown to be an excellent model for a wide range of packing fractions. This chord length PDF was then used to analyze a stochastic medium that was constructed using the RSA (Random Sequential Addition) algorithm, and the results were compared to a benchmark Monte Carlo simulation of the actual stochastic geometry. The results are promising and suggest that the theoretical chord length PDF can be used instead of a full Monte Carlo random walk simulation in the stochastic medium, saving orders of magnitude in computational time (and memory demand) to perform the simulation. (authors)
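
    The kernel of the method is the sampling step itself: with an exponential chord-length PDF, the distance travelled in matrix material before the next fuel kernel is drawn in one line (the mean chord below is an assumed placeholder that in practice depends on packing fraction and kernel size).

        import random, math

        MEAN_CHORD = 0.12   # cm, matrix mean chord length -- assumed placeholder

        def distance_to_next_kernel():
            # sample from p(l) = exp(-l / MEAN_CHORD) / MEAN_CHORD, replacing an
            # explicit random walk through the stochastic particle geometry
            return -MEAN_CHORD * math.log(1.0 - random.random())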

  2. Coupling effects of depletion interactions in a three-sphere colloidal system

    International Nuclear Information System (INIS)

    Chen Ze-Shun; Dai Gang; Gao Hai-Xia; Xiao Chang-Ming

    2013-01-01

    In a three-sphere system, the middle sphere is acted upon by two opposite depletion forces from the other two spheres. It is found that, in this system, the two depletion forces are coupled with each other and result in a strengthened depletion force. The difference between the depletion forces of the three-sphere system and those of its two corresponding two-sphere systems is therefore introduced to describe the coupling effect of the depletion interactions. The numerical results obtained by Monte Carlo simulations show that this coupling effect is affected by both the concentration of small spheres and the geometrical confinement. Meanwhile, it is also found that the mechanism of the coupling effect and that of the geometry-factor effect on the depletion force are the same. (interdisciplinary physics and related areas of science and technology)

  3. Radiation transport simulation in gamma irradiator systems using EGS4 Monte Carlo code and dose mapping calculations based on point kernel technique

    International Nuclear Information System (INIS)

    Raisali, G.R.

    1992-01-01

    A series of computer codes based on the point kernel technique and the Monte Carlo method have been developed. These codes perform radiation transport calculations for irradiator systems having Cartesian, cylindrical and mixed geometries. For the Monte Carlo calculations, the computer code EGS4 has been applied to a radiation-processing type of problem, accompanied by a specific user code. The set of codes developed includes GCELLS, DOSMAPM and DOSMAPC2, which simulate the radiation transport in gamma irradiator systems having cylindrical, Cartesian and mixed geometries, respectively. The program DOSMAP3, based on the point kernel technique, has also been developed for dose rate mapping calculations in carrier-type gamma irradiators. Another computer program, CYLDETM, has been developed as a user code for EGS4 to simulate dose variations near the interface of heterogeneous media in gamma irradiator systems. In addition, a system of computer codes, PRODMIX, has been developed which calculates the absorbed dose in products with different densities. Validation studies of the calculated results against experimental dosimetry have been performed and good agreement has been obtained.
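
    A point kernel dose-rate calculation is essentially a sum of attenuated inverse-square contributions with a buildup factor; a minimal sketch with a crude linear buildup and illustrative constants (not the codes described above):

        import math

        MU = 0.0633   # 1/cm, water attenuation near the Co-60 average energy

        def buildup(mu_r):
            return 1.0 + mu_r          # crude linear buildup -- illustration only

        def dose_rate(point, source_points, gamma_const=0.351):
            # point-kernel sum over discretized source points (x, y, z, activity);
            # each contributes Gamma * A * B(mu r) * exp(-mu r) / r^2
            total = 0.0
            for sx, sy, sz, act in source_points:
                r = math.dist(point, (sx, sy, sz))     # field point assumed off-source
                total += gamma_const * act * buildup(MU * r) * math.exp(-MU * r) / r**2
            return total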

  4. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
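
    A CPU reference version of the sequential Metropolis sweep that the GPU code parallelizes might look as follows (exact pairwise Coulomb energy, free boundaries; the GPU version additionally exploits SIMD parallelism in the energy sums):

        import numpy as np
        rng = np.random.default_rng(7)

        def energy_of(i, pos, q):
            # exact Coulomb interaction energy of particle i with all others
            d = np.linalg.norm(pos - pos[i], axis=1)
            d[i] = np.inf                       # exclude self-interaction
            return np.sum(q[i] * q / d)

        def metropolis_sweep(pos, q, beta=1.0, step=0.1):
            # sequential updating: one trial move per particle per sweep
            for i in range(len(pos)):
                old = pos[i].copy()
                e_old = energy_of(i, pos, q)
                pos[i] = old + rng.uniform(-step, step, 3)
                d_e = energy_of(i, pos, q) - e_old
                if d_e > 0.0 and rng.random() >= np.exp(-beta * d_e):
                    pos[i] = old                # reject: restore the old position
            return pos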

  5. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations

    Science.gov (United States)

    Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.

    2015-12-01

    The scarcity and high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed at BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use in a rem counter.

  6. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Sunil, C., E-mail: csunil11@gmail.com [Accelerator Radiation Safety Section, Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Tyagi, Mohit [Technical Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Biju, K.; Shanbhag, A.A.; Bandyopadhyay, T. [Accelerator Radiation Safety Section, Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)

    2015-12-11

    The scarcity and high cost of ³He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed at BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am–Be neutron source, shows promise for use in a rem counter.

  7. Assessment study for multi-barrier system used in radioactive borate waste isolation based on Monte Carlo simulations.

    Science.gov (United States)

    Bayoumi, T A; Reda, S M; Saleh, H M

    2012-01-01

    Radioactive waste generated from nuclear applications should be properly isolated by a suitable containment system, such as a multi-barrier container. The present study aims to evaluate the isolation capacity of a new multi-barrier container made from cement and clay and including borate waste materials. These wastes were spiked with 137Cs and 60Co radionuclides to simulate the waste generated from the primary cooling circuit of pressurized water reactors. Leaching of both radionuclides into ground water was followed and calculated over ten years. Monte Carlo (MCNP5) simulations computed the photon flux distribution of the multi-barrier container, including radioactive borate waste of specific activity 11.22 kBq/g for 137Cs and 4.18 kBq/g for 60Co, at periods of 0, 15.1, 30.2 and 302 years. The average total flux for a spherical cell of 100 cm radius was 0.192 photon/cm² at the initial time and 2.73×10⁻⁴ photon/cm² after 302 years. The maximum waste activity keeping the surface radiation dose within the permissible level was calculated and found to be 56 kBq/g, with attenuation factors of 0.73 cm⁻¹ and 0.6 cm⁻¹ for cement and clay, respectively. The average total flux was 1.37×10⁻³ photon/cm² after 302 years. Monte Carlo simulations revealed that the proposed multi-barrier container is safe enough during transportation, evacuation or rearrangement in the disposal site for more than 300 years.
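
    The attenuation factors quoted above translate directly into a narrow-beam shielding estimate. The snippet below is a back-of-the-envelope sketch using those two coefficients; the layer thicknesses are illustrative, and buildup is neglected, so it understates transmission compared with the full MCNP5 model.

```python
import numpy as np

MU = {"cement": 0.73, "clay": 0.60}   # linear attenuation, cm^-1 (from the study)

def transmitted_fraction(layers):
    """Narrow-beam transmission through a list of (material, thickness_cm)."""
    tau = sum(MU[m] * t for m, t in layers)
    return np.exp(-tau)

# e.g. 10 cm of cement followed by 5 cm of clay around the waste form
print(transmitted_fraction([("cement", 10.0), ("clay", 5.0)]))
```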

  8. Gold nanoparticle-based brachytherapy enhancement in choroidal melanoma using a full Monte Carlo model of the human eye.

    Science.gov (United States)

    Asadi, Somayeh; Vaez-zadeh, Mehdi; Masoudi, S Farhad; Rahmani, Faezeh; Knaup, Courtney; Meigooni, Ali S

    2015-09-08

    The effects of gold nanoparticles (GNPs) on 125I brachytherapy dose enhancement in choroidal melanoma are examined using the Monte Carlo simulation technique. Usually, Monte Carlo ophthalmic brachytherapy dosimetry is performed in a water phantom. Here, however, the composition of the human eye has been considered instead of water. Both the human eye and water phantoms have been simulated with the MCNP5 code. These simulations were performed for a fully loaded 16 mm COMS eye plaque containing 13 125I seeds. The dose delivered to the tumor and normal tissues has been calculated in both phantoms with and without GNPs. Normally, the radiation therapy of cancer patients is designed to deliver a required dose to the tumor while sparing the surrounding normal tissues. However, because normal and cancerous cells absorb dose in an almost identical fashion, the normal tissue receives radiation dose throughout the treatment time. The use of GNPs in combination with radiotherapy in the treatment of the tumor decreases the dose absorbed by normal tissues. The results indicate that the dose to the tumor in an eyeball implanted with a COMS plaque increases with increasing GNP concentration inside the target. Therefore, the required irradiation time for tumors in the eye is decreased by adding the GNPs prior to treatment. As a result, the dose to normal tissues decreases when the irradiation time is reduced. Furthermore, a comparison between the simulated data in an eye phantom made of water and an eye phantom made of human eye composition, in the presence of GNPs, shows the significance of utilizing the composition of the eye in ophthalmic brachytherapy dosimetry. Also, defining the eye composition instead of water leads to more accurate calculations of GNP radiation effects in ophthalmic brachytherapy dosimetry.

  9. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results show that in the case of the EK-10 fuel (low burn-up), the first-order sensitivity is enough to reach an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order is needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.
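
    The burn-up dependence of the required perturbation order can be seen even in a one-nuclide toy problem. The sketch below is not the SENS code: it expands the burnout solution N(σ) = N₀·exp(−σφt) in a Taylor series around the nominal cross section and compares truncation orders. The flux, time and cross-section values are illustrative, chosen so that σφt ≈ 1 mimics high burn-up, where low orders visibly fail.

```python
import numpy as np
from math import factorial

N0, phi, t = 1.0, 1e14, 3.15e7          # illustrative flux (n/cm^2/s), time (s)
sigma = 3e-22                            # illustrative one-group sigma (cm^2)

def N(sig):                              # exact burnout solution
    return N0 * np.exp(-sig * phi * t)

def N_taylor(dsig, order):
    """k-th derivative of N with respect to sigma is (-phi*t)^k * N(sigma)."""
    return sum((-phi * t) ** k * N(sigma) * dsig ** k / factorial(k)
               for k in range(order + 1))

dsig = 0.25 * sigma                      # a 25% cross-section perturbation
for order in (1, 2, 5):
    rel_err = abs(N_taylor(dsig, order) / N(sigma + dsig) - 1.0)
    print(f"order {order}: relative error {rel_err:.2%}")
```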

  10. Groundwater Depletion Embedded in International Food Trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-01-01

    Recent hydrological modeling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world's food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world's population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.

  11. Groundwater depletion embedded in international food trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-03-01

    Recent hydrological modelling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world’s food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world’s population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.

  12. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" …
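
    Buffon's needle, mentioned in this blurb, is the canonical historical Monte Carlo experiment and makes a compact worked example. The sketch below simulates needle throws and inverts the crossing probability P = 2L/(πD), valid for needle length L no greater than line spacing D, to estimate π; it is a generic textbook illustration, not code from the book.

```python
import numpy as np

rng = np.random.default_rng(2)

def buffon_pi(n_throws=1_000_000, needle=1.0, spacing=2.0):
    """Estimate pi from Buffon's needle: a needle of length L thrown on lines
    spaced D apart (L <= D) crosses a line with probability 2L/(pi*D)."""
    y = rng.uniform(0.0, spacing / 2.0, n_throws)     # center-to-line distance
    theta = rng.uniform(0.0, np.pi / 2.0, n_throws)   # needle angle
    crossings = np.count_nonzero(y <= (needle / 2.0) * np.sin(theta))
    return 2.0 * needle * n_throws / (spacing * crossings)

print(buffon_pi())
```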

  13. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
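
    Of the algorithms listed above, rejection sampling is the quickest to demonstrate end to end. The sketch below draws from an unnormalized Beta(2,5)-shaped density on [0, 1] using a uniform proposal and a hand-computed envelope constant; the target density is an arbitrary illustrative choice, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):
    """Unnormalized Beta(2,5)-shaped density on [0, 1]; max is ~0.082 at x=0.2."""
    return x * (1.0 - x) ** 4

def rejection_sample(n, envelope=0.1):
    """Accept a uniform proposal x if a uniform height u <= target(x),
    where u ~ U(0, envelope) and envelope bounds the target from above."""
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, 1.0)
        if rng.uniform(0.0, envelope) <= target(x):
            out.append(x)
    return np.array(out)

samples = rejection_sample(10_000)
print(samples.mean())   # compare with the Beta(2,5) mean 2/7 ~ 0.286
```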

  14. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B [Northwestern Memorial Hospital, Chicago, IL (United States); Georgia Institute of Technology, Atlanta, GA (United States)]; Wang, C [Georgia Institute of Technology, Atlanta, GA (United States)]

    2016-06-15

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm to assess the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs for different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the linear-quadratic (L-Q) behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation between simulated DNA damage to cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell
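
    The clustering step described in the Methods section can be reproduced in miniature with an off-the-shelf DBSCAN implementation. The sketch below, assuming scikit-learn is available, clusters synthetic 3D ionization coordinates standing in for Geant4-DNA output; the 3.2 nm neighborhood radius and the two-event minimum are illustrative stand-ins, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)

# Synthetic ionization/excitation positions (nm) along a track: two dense
# clusters plus a sparse background, standing in for Geant4-DNA output.
clusters = [rng.normal(loc=c, scale=2.0, size=(20, 3)) for c in ([0] * 3, [50] * 3)]
background = rng.uniform(-20, 80, size=(60, 3))
events = np.vstack(clusters + [background])

# Events within ~3.2 nm of each other are grouped; clusters with at least
# two events are scored as candidate double-strand breaks (illustrative cuts).
labels = DBSCAN(eps=3.2, min_samples=2).fit_predict(events)
n_dsb = len(set(labels) - {-1})        # label -1 marks unclustered noise
print("candidate DSB clusters:", n_dsb)
```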

  15. MVP/GMVP 2: general purpose Monte Carlo codes for neutron and photon transport calculations based on continuous energy and multigroup methods

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Okumura, Keisuke; Mori, Takamasa; Nakagawa, Masayuki

    2005-06-01

    In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes, MVP and GMVP, have been developed at JAERI. MVP is based on the continuous energy model and GMVP on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector supercomputers. Both codes have sufficient functions for production use, adopting accurate physics models, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, (5) continuous-energy calculation at arbitrary temperatures, (6) estimation of real variances in eigenvalue problems, (7) point detector and surface crossing estimators, (8) statistical geometry model, (9) function of reactor noise analysis (simulation of the Feynman-α experiment), (10) arbitrary shaped lattice boundary, (11) periodic boundary condition, (12) parallelization with standard libraries (MPI, PVM), (13) supporting many platforms, etc. This report describes the physical model and geometry description method used in the codes, the new functions and how to use them. (author)

  16. Monte Carlo simulation of high-flux 14 MeV neutron source based on muon catalyzed fusion using a high-power 50 MW deuteron beam

    Energy Technology Data Exchange (ETDEWEB)

    Vecchi, M [ENEA, Bologna (Italy); Karmanov, F I [Inst. of Nuclear Power Engineering, Obninsk (Russian Federation); Latysheva, L N; Pshenichnov, I A [Russian Academy of Sciences, Moscow (Russian Federation). Inst. for Nuclear Research

    1997-12-31

    The results of Monte Carlo simulations of an intense neutron source based on the muon catalyzed fusion process are presented. A deuteron beam is directed onto a cylindrical carbon target located in a vacuum converter chamber with a strong solenoidal magnetic field. The produced pions, and the muons which originate from pion decay, are guided along the magnetic field to a DT-synthesizer. Pion production in the primary target is simulated by means of intranuclear and internuclear cascade codes developed at INR, Moscow, while the pion and muon transport process is studied using a Monte Carlo code originated at CERN. The main purpose of the work is to calculate the pion and muon utilization efficiency, taking into account the pion absorption in the primary target as well as all other losses of pions and muons in the converter and DT-cell walls. Preliminary estimations demonstrate the possibility of reaching a neutron flux level of 10¹⁴ n/s/cm². (J.U.). 3 tabs., 4 figs., 8 refs.

  17. VIP-Man: An image-based whole-body adult male model constructed from color photographs of the visible human project for multi-particle Monte Carlo calculations

    International Nuclear Information System (INIS)

    Xu, X.G.; Chao, T.C.; Bozkurt, A.

    2000-01-01

    Human anatomical models have been indispensable to radiation protection dosimetry using Monte Carlo calculations. Existing MIRD-based mathematical models are easy to compute and standardize, but they are simplified and crude compared to human anatomy. This article describes the development of an image-based whole-body model, called VIP-Man, using transversal color photographic images obtained from the National Library of Medicine's Visible Human Project, for Monte Carlo organ dose calculations involving photons, electrons, neutrons, and protons. As the first of a series of papers on dose calculations based on VIP-Man, this article provides detailed information about how to construct an image-based model, as well as how to adopt it into the well-tested Monte Carlo codes EGS4, MCNP4B, and MCNPX.

  18. The Toxicity of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Wayne Briner

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses, the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low-dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.

  19. Ego depletion impairs implicit learning.

    Science.gov (United States)

    Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning; however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of the results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  20. Dose enhancement in radiotherapy of small lung tumors using inline magnetic fields: A Monte Carlo based planning study

    Energy Technology Data Exchange (ETDEWEB)

    Oborn, B. M., E-mail: brad.oborn@gmail.com [Illawarra Cancer Care Centre (ICCC), Wollongong, NSW 2500, Australia and Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong, NSW 2500 (Australia); Ge, Y. [Sydney Medical School, University of Sydney, NSW 2006 (Australia); Hardcastle, N. [Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, NSW 2065 (Australia); Metcalfe, P. E. [Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong NSW 2500, Australia and Ingham Institute for Applied Medical Research, Liverpool, NSW 2170 (Australia); Keall, P. J. [Sydney Medical School, University of Sydney, NSW 2006, Australia and Ingham Institute for Applied Medical Research, Liverpool, NSW 2170 (Australia)

    2016-01-15

    Purpose: To report on significant dose enhancement effects caused by magnetic fields aligned parallel to 6 MV photon beam radiotherapy of small lung tumors. Findings are applicable to future inline MRI-guided radiotherapy systems. Methods: A total of eight clinical lung tumor cases were recalculated using Monte Carlo methods, and external magnetic fields of 0.5, 1.0, and 3 T were included to observe the impact on dose to the planning target volume (PTV) and gross tumor volume (GTV). Three plans were 6 MV 3D-CRT plans while six were 6 MV IMRT. The GTVs ranged from 0.8 to 16 cm³, while the PTVs ranged from 1 to 59 cm³. In addition, the dose changes in a 30 cm diameter cylindrical water phantom were investigated for small beams. The central 20 cm of this phantom contained either a water or lung density insert. Results: For single beams, an inline magnetic field of 1 T has a small impact on lung dose distributions by reducing the lateral scatter of secondary electrons, resulting in a small dose increase along the beam. Superposition of multiple small beams leads to significant dose enhancements. Clinically, this process occurs in the lung tissue typically surrounding the GTV, resulting in increases in D98% (PTV). Two isolated tumors with very small PTVs (3 and 6 cm³) showed increases in D98% of 23% and 22%. Larger PTVs of 13, 26, and 59 cm³ had increases of 9%, 6%, and 4%, describing a natural fall-off in enhancement with increasing PTV size. However, three PTVs bounded to the lung wall showed no significant increase, due to the lack of dose enhancement in the denser PTV volume. In general, at 0.5 T, the GTV mean dose enhancement is around 60% lower than that at 1 T, while at 3 T it is 5%-60% higher than at 1 T. Conclusions: Monte Carlo methods have described significant and predictable dose enhancement effects in small lung tumor plans for 6 MV radiotherapy when an external inline magnetic field is included. Results of this study

  1. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams

    International Nuclear Information System (INIS)

    Kuenzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

    2009-01-01

    The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of the simple test cases. For depth dose curves in asymmetric beams, gamma results similar to those for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement with measured values, with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV 10 × 10 cm² field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
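
    The 1D gamma evaluation used in this verification combines a dose-difference criterion with a distance-to-agreement criterion. The sketch below is a minimal global 1D gamma implementation with the 2%/2 mm criteria quoted above; the depth-dose curves are synthetic toys, and a brute-force search over evaluated points stands in for the optimized search a production tool would use.

```python
import numpy as np

def gamma_1d(x, dose_eval, dose_ref, dd=0.02, dta_mm=2.0):
    """Global 1D gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over evaluated points."""
    d_norm = dd * dose_ref.max()                 # global dose normalization
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        diff2 = ((dose_eval - di) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + diff2))
    return gam                                   # gamma <= 1 means pass

x = np.linspace(0.0, 100.0, 201)                 # depth (mm)
ref = np.exp(-x / 80.0)                          # toy depth-dose curves
ev = np.exp(-(x - 0.4) / 80.0)                   # evaluated curve, 0.4 mm shift
print(gamma_1d(x, ev, ref).mean())
```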

  2. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  3. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Lévy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b
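
    Among the sampling techniques listed, inversion is the simplest to show concretely: generate u ~ U(0,1) and solve F(x) = u for the target CDF F. The sketch below applies it to the exponential distribution, a standard textbook case rather than code from this reference.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_exponential(lam, n):
    """Inversion sampling: for the exponential CDF F(x) = 1 - exp(-lam*x),
    solving F(x) = u gives x = -ln(1 - u) / lam."""
    u = rng.uniform(0.0, 1.0, n)
    return -np.log1p(-u) / lam       # log1p(-u) = log(1 - u), numerically safe

print(sample_exponential(2.0, 100_000).mean())   # approaches 1/lam = 0.5
```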

  4. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
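
    The basic idea of chord length sampling, before any boundary correction, is to alternate exponentially distributed flights through the matrix with geometrically sampled chords through spheres, never storing an explicit sphere realization. The toy sketch below illustrates this along a single track, using the standard mean matrix chord 4r(1−f)/(3f) for packing fraction f; it recovers the packing fraction as the in-sphere path fraction but, as the paper notes, ignores boundary effects in finite media. It is a generic illustration, not the authors' proposed correction.

```python
import numpy as np

rng = np.random.default_rng(6)

def cls_track(r_sphere=0.05, packing=0.1, n_flights=100_000):
    """Toy chord-length sampling: alternate exponential matrix flights
    (mean 4r(1-f)/(3f)) with chords through spheres; an isotropic chord
    through a sphere has length 2r*sqrt(1-b^2) with b = sqrt(U)."""
    lam_matrix = 4.0 * r_sphere * (1.0 - packing) / (3.0 * packing)
    matrix_path = rng.exponential(lam_matrix, n_flights).sum()
    b = np.sqrt(rng.uniform(0.0, 1.0, n_flights))   # impact parameter / r
    sphere_path = (2.0 * r_sphere * np.sqrt(1.0 - b**2)).sum()
    # The in-sphere path fraction should approach the packing fraction.
    return sphere_path / (sphere_path + matrix_path)

print(cls_track())   # ~0.1 for packing=0.1
```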

  5. Polarimetric imaging of turbid inhomogeneous slab media based on backscattering using a pencil beam for illumination: Monte Carlo simulation

    Science.gov (United States)

    Otsuki, Soichi

    2018-04-01

    Polarimetric imaging of absorbing, strongly scattering, or birefringent inclusions is investigated in a negligibly absorbing, moderately scattering, and isotropic slab medium. It is proved that the reduced effective scattering Mueller matrix can be exactly calculated from experimental or simulated raw matrices even if the medium is anisotropic and/or heterogeneous, or the outgoing light beam exits obliquely to the normal of the slab surface. The calculation also gives a reasonable approximation of the reduced matrix when a light beam with a finite diameter is used for illumination. The reduced matrix was calculated using a Monte Carlo simulation and was factorized in two dimensions by the Lu-Chipman polar decomposition. The intensity of backscattered light shows clear and modestly clear differences for absorbing and strongly scattering inclusions, respectively, whereas it shows no difference for birefringent inclusions. Conversely, some polarization parameters, for example the selective depolarization coefficients, exhibit only a slight difference for the absorbing inclusions, whereas they show clear differences for the strongly scattering or birefringent inclusions. Moreover, these quantities become larger as the difference in the optical properties of the inclusions relative to the surrounding medium increases. However, it is difficult to recognize inclusions buried deeper than 3 mm below the surface. Thus, the present technique can detect the approximate shape and size of these inclusions and, considering the depth at which they lie, estimate their optical properties. This study reveals the possibility of polarization-sensitive imaging of turbid inhomogeneous media using a pencil beam for illumination.

  6. A novel radiation detector for removing scattered radiation in chest radiography: Monte Carlo simulation-based performance evaluation

    Science.gov (United States)

    Roh, Y. H.; Yoon, Y.; Kim, K.; Kim, J.; Kim, J.; Morishita, J.

    2016-10-01

    Scattered radiation is the main reason for the degradation of image quality and the increased patient exposure dose in diagnostic radiology. In an effort to reduce scattered radiation, a novel structure for an indirect flat panel detector has been proposed. In this study, a performance evaluation of the novel system in terms of image contrast, as well as an estimation of the number of photons incident on the detector and the grid exposure factor, was conducted using Monte Carlo simulations. The image contrast of the proposed system was superior to that of the no-grid system but slightly inferior to that of the parallel-grid system. The number of photons incident on the detector and the grid exposure factor of the novel system were higher than those of the parallel-grid system but lower than those of the no-grid system. The proposed system exhibited the potential for a reduced exposure dose without image quality degradation, and can be further improved by structural optimization considering the manufacturer's specifications for its lead content.

  7. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.

  8. The sensitivity studies of a landmine explosive detection system based on neutron backscattering using Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Khan Hamda

    2017-01-01

    This paper carries out a Monte Carlo simulation of a landmine detection system, using the MCNP5 code, for the detection of concealed explosives such as trinitrotoluene and cyclonite. In portable field detectors, the signal strength of backscattered neutrons and of gamma rays from thermal neutron activation is sensitive to a number of parameters, such as the mass of explosive, depth of concealment, neutron moderation, background soil composition, soil porosity, soil moisture, multiple scattering in the background material, and the configuration of the detection system. In this work, a detection system with BF3 detectors for neutrons and a sodium iodide scintillator for γ-rays is modeled to investigate the neutron signal-to-noise ratio and to obtain an empirical formula for the photon production rate Ri(n,γ) = Sf·Gf·Mf(d,m) from radiative capture reactions in the constituent nuclides of trinitrotoluene. This formula can be used for the efficient detection of explosives in quantities as small as ~200 g of trinitrotoluene concealed at depths down to about 15 cm. The empirical formula can be embedded in a field-programmable gate array on a field-portable explosives sensor for efficient online detection.

  9. Development of PC based Monte Carlo simulations for the calculation of scanner-specific normalized organ doses from CT

    International Nuclear Information System (INIS)

    Jansen, J. T. M.; Shrimpton, P. C.; Zankl, M.

    2009-01-01

    This paper discusses the simulation of contemporary computed tomography (CT) scanners using Monte Carlo calculation methods to derive normalized organ doses, which enable hospital physicists to estimate typical organ and effective doses for CT examinations. The hardware used in a small PC cluster at the Health Protection Agency (HPA) for these calculations is described. Investigations concerning the optimization of software, including the radiation transport codes MCNP5 and MCNPX, and the Intel and PGI FORTRAN compilers, are presented in relation to results and calculation speed. Differences in the approach to modelling the X-ray source are described and their influences analysed. Comparisons with previously published HPA calculations from the early 1990s proved satisfactory for the purposes of quality assurance and are presented in terms of organ dose ratios for whole body exposure and differences in organ location. Influences on normalized effective dose are discussed in relation to the choice of cross-section library, CT scanner technology (contemporary multi-slice versus single-slice), the definition of effective dose (1990 and 2007 versions) and the anthropomorphic phantom (mathematical and voxel). The results illustrate the practical need for the updated scanner-specific dose coefficients presently being calculated at HPA, in order to facilitate improved dosimetry for contemporary CT practice. (authors)

  10. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  11. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  12. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
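
    The per-cycle rendezvous described above is easy to reproduce in miniature. The sketch below is a schematic stand-in using Python's multiprocessing rather than MPI: a dummy transport kernel runs on several workers, and the blocking map call at the end of each cycle is exactly the synchronization point whose relative cost grows with the number of processors.

```python
import numpy as np
from multiprocessing import Pool

def cycle(args):
    """One fission-generation cycle on one worker: transport a batch of
    histories and return a local tally (here, a dummy k-effective estimate)."""
    seed, n_histories = args
    rng = np.random.default_rng(seed)
    return rng.normal(loc=1.0, scale=0.05, size=n_histories).mean()

if __name__ == "__main__":
    n_workers, n_cycles, per_worker = 8, 50, 10_000
    k_cycles = []
    with Pool(n_workers) as pool:
        for c in range(n_cycles):
            # This map call is the rendezvous point: every worker must finish
            # before the global k and the next cycle's source can be formed.
            local_k = pool.map(cycle, [(c * n_workers + w, per_worker)
                                       for w in range(n_workers)])
            k_cycles.append(np.mean(local_k))
    print(np.mean(k_cycles), np.std(k_cycles) / np.sqrt(n_cycles))
```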

  13. Deciphering the Glacial-Interglacial Landscape History in Greenland Based on Markov Chain Monte Carlo Inversion of Existing 10Be-26Al Data

    DEFF Research Database (Denmark)

    Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog

    Deciphering the landscape history in previously glaciated terrains may be difficult, however, due to unknown erosion rates and the presence of inherited nuclides. The potential use of cosmogenic nuclides in landscapes with a complex history of exposure and erosion is therefore often quite limited. In this study, we investigate the landscape history in eastern and western Greenland by applying a novel Markov chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated and measured cosmogenic nuclide concentrations. It is a fundamental assumption of the model approach that the exposure history at the site/location can be divided into two distinct regimes: i) interglacial periods characterized by zero shielding from overlying ice and a uniform interglacial erosion rate...
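
    The inversion strategy sketched above, comparing simulated and measured nuclide concentrations, can be illustrated with a deliberately simplified forward model. The toy below is not the authors' model: it samples an exposure time and erosion rate with random-walk Metropolis against synthetic 10Be and 26Al measurements. The production rates, decay constants and single-episode forward model are textbook-style assumptions, and the paper's two-regime glacial/interglacial history is collapsed to one exposure episode.

```python
import numpy as np

rng = np.random.default_rng(7)

# (production rate atoms/g/yr, decay constant 1/yr); illustrative values
PARAMS = {"Be10": (4.0, np.log(2) / 1.387e6), "Al26": (28.0, np.log(2) / 0.705e6)}
LAM_EFF = 60.0   # attenuation length / rock density (cm), illustrative

def conc(nuclide, t, eps):
    """Surface concentration after exposure time t (yr), erosion eps (cm/yr)."""
    P, lam = PARAMS[nuclide]
    k = lam + eps / LAM_EFF
    return P / k * (1.0 - np.exp(-k * t))

measured = {"Be10": (5.0e4, 5.0e3), "Al26": (3.0e5, 3.0e4)}  # atoms/g, 1-sigma

def log_like(t, eps):
    return -0.5 * sum(((conc(n, t, eps) - m) / s) ** 2
                      for n, (m, s) in measured.items())

# Random-walk Metropolis over (log t, log eps)
chain, x = [], np.array([np.log(1e4), np.log(1e-4)])
ll = log_like(*np.exp(x))
for _ in range(20_000):
    prop = x + rng.normal(0.0, 0.1, 2)
    ll_prop = log_like(*np.exp(prop))
    if np.log(rng.random()) < ll_prop - ll:
        x, ll = prop, ll_prop
    chain.append(np.exp(x))
print(np.median(chain, axis=0))   # posterior medians of (t, eps)
```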

  14. Atomic kinetic Monte Carlo model based on ab initio data: Simulation of microstructural evolution under irradiation of dilute Fe-CuNiMnSi alloys

    International Nuclear Information System (INIS)

    Vincent, E.; Becquart, C.S.; Domain, C.

    2007-01-01

    The embrittlement of pressure vessel steels under irradiation has long been correlated with the presence of Cu solutes. Other solutes such as Ni, Mn and Si are now suspected to contribute to the embrittlement as well. The interactions of these solutes with radiation-induced point defects thus need to be characterized properly in order to understand the elementary mechanisms behind the formation of the clusters formed under irradiation. Ab initio calculations based on density functional theory have been performed to determine the interactions of point defects with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterise an atomic kinetic Monte Carlo model. Some results on irradiation damage in dilute Fe-CuNiMnSi alloys obtained with this model are presented.

  15. Atomic kinetic Monte Carlo model based on ab initio data: Simulation of microstructural evolution under irradiation of dilute Fe CuNiMnSi alloys

    Science.gov (United States)

    Vincent, E.; Becquart, C. S.; Domain, C.

    2007-02-01

    The embrittlement of pressure vessel steels under irradiation has long been correlated with the presence of Cu solutes. Other solutes such as Ni, Mn and Si are now suspected to contribute to the embrittlement as well. The interactions of these solutes with radiation-induced point defects thus need to be characterized properly in order to understand the elementary mechanisms behind the formation of the clusters formed under irradiation. Ab initio calculations based on density functional theory have been performed to determine the interactions of point defects with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterise an atomic kinetic Monte Carlo model. Some results on irradiation damage in dilute Fe-CuNiMnSi alloys obtained with this model are presented.
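
    At each step, an atomic kinetic Monte Carlo model of this kind selects one thermally activated event, typically a vacancy jump whose rate follows an Arrhenius law with an ab initio migration barrier, and advances the clock by an exponentially distributed residence time. The sketch below shows that core step in isolation; the barrier values are illustrative and are not taken from the paper's database.

```python
import numpy as np

rng = np.random.default_rng(8)

def kmc_step(rates, time):
    """Residence-time algorithm: pick an event with probability proportional
    to its rate, then advance the clock by an exponential waiting time."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total))
    time += -np.log(rng.random()) / total
    return event, time

# Toy rates: vacancy jumps toward Cu/Ni/Mn/Si neighbours with different
# migration barriers E at temperature T (Arrhenius prefactor nu0).
nu0, kB, T = 1e13, 8.617e-5, 600.0              # s^-1, eV/K, K
barriers = np.array([0.55, 0.62, 0.60, 0.68])   # eV, illustrative only
rates = nu0 * np.exp(-barriers / (kB * T))
print(kmc_step(rates, 0.0))
```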

  16. Optimization of a neutron production target based on the 7Li (p,n)7Be reaction with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Burlon, Alejandro A.; Kreiner, Andres J.; Minsky, Daniel; Valda, Alejandro A.; Somacal, Hector R.

    2003-01-01

    In order to optimize a neutron production target for accelerator-based boron neutron capture therapy (AB-BNCT), a Monte Carlo Neutron and Photon (MCNP) investigation has been performed. Neutron fields from a thick LiF target (with both a D2O-graphite and an Al/AlF3-graphite moderator/reflector assembly) were evaluated along the centerline in a head phantom. The target neutron beam was simulated from the 7Li(p,n)7Be nuclear reaction for 1.89, 2.0 and 2.3 MeV protons. The results show that it is more advantageous to irradiate the target with near-resonance-energy protons (2.3 MeV) because of the high neutron yield at this energy. On the other hand, the Al/AlF3-graphite assembly exhibits a more efficient performance than D2O. (author)

  17. Estimation of miniature forest parameters, species, tree shape, and distance between canopies by means of Monte-Carlo based radiative transfer model with forestry surface model

    International Nuclear Information System (INIS)

    Ding, Y.; Arai, K.

    2007-01-01

    A method for the estimation of forest parameters (species, tree shape, distance between canopies) by means of a Monte Carlo based radiative transfer model with a forestry surface model is proposed. The model is verified through experiments with a miniature model of a forest: a tree array of relatively small trees. Two types of miniature trees, with ellipse-looking and cone-looking canopies, are examined in the experiments. The proposed model and the experimental results are found to agree, validating the proposed method. It is also found that estimation of tree shape and trunk-to-trunk distance, as well as distinction between deciduous and coniferous trees, can be done with the proposed model. Furthermore, influences due to multiple reflections between trees and the interaction between trees and the underlying grass are clarified with the proposed method.

  18. A 3D kinetic Monte Carlo simulation study of resistive switching processes in Ni/HfO2/Si-n+-based RRAMs

    International Nuclear Information System (INIS)

    Aldana, S; García-Fernández, P; Jiménez-Molinos, F; Gómez-Campos, F; Roldán, J B; Rodríguez-Fernández, Alberto; Romero-Zaliz, R; González, M B; Campabadal, F

    2017-01-01

    A new RRAM simulation tool based on a 3D kinetic Monte Carlo algorithm has been implemented. The redox reactions and the migration of cations are treated taking into consideration the 3D temperature and electric potential distributions within the device dielectric at each simulation time step. Filamentary conduction is described by obtaining the percolation paths formed by metallic atoms. Ni/HfO2/Si-n+ unipolar devices have been fabricated and measured. The different experimental characteristics of the devices under study have been reproduced accurately by the simulations. The main physical variables can be extracted at any simulation time to clarify the physics behind resistive switching; in particular, the final conductive filament shape can be studied in detail. (paper)

  19. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

    G.D. Kartashov developed a technique for determining the functions that scale rapid-test results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests (studies) is as follows: n groups of products, with m elements in each, start being tested in the normal mode and, after a failure of one of the products in a group, the remaining products of that group are tested in the accelerated mode. In addition to tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of product, identical for all lots, is determined using such testing results for products from the same lot. A drawback of this technique is that tests have to be conducted in the alternating mode until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined using right-censored data, thus giving the opportunity to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimate obtained by minimizing the Renyi statistic is implemented with the Monte Carlo method. The results of the modeling show that this estimate is acceptable for rather large n, but for small sample sizes a systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions considered (exponential and Weibull). The paper therefore also presents calculated correction factors for the exponential and Weibull distributions.
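
    The small-sample bias reported above is a generic feature of ratio-type acceleration factor estimates and is easy to reproduce. The sketch below is a generic illustration, not the paper's Renyi-statistic procedure: it estimates the acceleration factor as a ratio of sample mean lifetimes for exponential data and measures the bias as a function of group size n. For exponential lifetimes the expected bias of this naive estimator is AF/(n−1), which indeed decreases as n grows.

```python
import numpy as np

rng = np.random.default_rng(9)

def af_bias(n, true_af=3.0, mean_normal=1000.0, n_rep=20_000):
    """Monte Carlo study of the bias of a naive acceleration-factor estimate
    (ratio of sample mean lifetimes) for exponential lifetimes."""
    t_norm = rng.exponential(mean_normal, size=(n_rep, n))
    t_acc = rng.exponential(mean_normal / true_af, size=(n_rep, n))
    af_hat = t_norm.mean(axis=1) / t_acc.mean(axis=1)
    return af_hat.mean() - true_af

for n in (3, 10, 30, 100):
    print(n, af_bias(n))   # bias ~ true_af / (n - 1), shrinking with n
```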

  20. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    Science.gov (United States)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual

  1. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  2. Monte Carlo simulations for describing the ferroelectric-relaxor crossover in BaTiO3-based solid solutions

    International Nuclear Information System (INIS)

    Padurariu, Leontin; Enachescu, Cristian; Mitoseriu, Liliana

    2011-01-01

    The properties induced by M4+ addition (M = Zr, Sn, Hf) in BaMxTi1-xO3 solid solutions have been described on the basis of a 2D Ising-like network and Monte Carlo calculations, in which randomly distributed BaMO3 unit cells were considered to be non-ferroelectric. The polarization versus temperature dependences with increasing M4+ concentration (x) showed a continuous reduction of the remanent polarization and of the critical temperature corresponding to the ferroelectric-paraelectric transition, and a modification from a first-order to a second-order phase transition with a broad temperature range over which the transition takes place, as commonly reported for relaxors. The model also describes the system's tendency to reduce the polar clusters' average size while increasing their stability in time at temperatures above the Curie range, when a ferroelectric-relaxor crossover is induced by increasing the substitution (x). The equilibrium micropolar states during the polarization reversal process while describing the P(E) loops were comparatively monitored for the ferroelectric (x = 0) and relaxor (x = 0.3) states. Polarization reversal in relaxor compositions proceeds by the growth of several nucleated domains (the 'labyrinthine domain pattern') instead of the large-scale domain formation typical of the ferroelectric state. The spatial and temporal evolution of the polar clusters in BaMxTi1-xO3 solid solutions at various x has also been described by the correlation length and correlation time. As expected for a ferroelectric-relaxor crossover characterized by a progressively increasing degree of disorder, local fluctuations cause a decreasing correlation time as the substitution degree increases, at a given temperature. The correlation time around the Curie temperature increases, reflecting the increasing stability in time of some polar nanoregions in relaxors in comparison with ferroelectrics, which was experimentally proved in
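
    The dilution idea in this model, a ferroelectric Ising lattice in which substituted cells simply carry no dipole, can be captured in a few lines. The sketch below is a generic diluted 2D Ising Metropolis simulation, not the authors' code: a fraction x of sites is frozen at zero and the net polarization per active site is measured; the temperature, lattice size and sweep count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

def diluted_ising(L=32, x=0.1, T=2.0, sweeps=400):
    """2D Ising-like lattice with a random fraction x of non-polar
    (substituted) sites frozen at 0; Metropolis dynamics; returns |P|
    per active site (J = 1, periodic boundaries)."""
    spins = rng.choice([-1, 1], size=(L, L))
    active = rng.random((L, L)) >= x
    spins[~active] = 0
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, 2)
            if not active[i, j]:
                continue
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return abs(spins.sum()) / active.sum()

for xf in (0.0, 0.1, 0.3):
    print(xf, diluted_ising(x=xf))   # polarization drops as dilution grows
```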

  3. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response function, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  4. SU-F-T-81: Treating Nose Skin Using Energy and Intensity Modulated Electron Beams with Monte Carlo Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Jin, L; Fan, J; Eldib, A; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Treating nose skin with an electron beam poses a substantial challenge due to the uneven nose surface and tissue heterogeneity, and consequently carries a large uncertainty in the dose delivered to the target. This work explored a method using Monte Carlo (MC)-based energy and intensity modulated electron radiotherapy (MERT), which would be delivered with a photon MLC in a standard medical linac (Artiste). Methods: The traditional treatment of nose skin involves the use of a bolus, often with a single-energy electron beam. This work avoided the bolus and instead used mixed electron beam energies. An in-house developed MC-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. A clinical nose skin case, previously treated with a single 9 MeV electron beam, was replanned with the MERT method. The resultant dose distributions were compared with the plan previously used clinically. The dose volume histogram of the MERT plan was calculated to examine the coverage of the planning target volume (PTV) and critical structure doses. Results: The target coverage and conformality in the MERT plan are improved compared to the conventional plan. MERT can provide more sufficient target coverage and a lower normal tissue dose underneath the nose skin. Conclusion: Compared to the conventional treatment technique, using MERT for nose skin treatment shows dosimetric advantages in PTV coverage and conformality. In addition, this technique eliminates the need for a cutout and bolus, making the treatment more efficient and accurate.

  5. Investigation of time-of-flight benefits in an LYSO-based PET/CT scanner: A Monte Carlo study using GATE

    International Nuclear Information System (INIS)

    Geramifar, P.; Ay, M.R.; Shamsaie Zafarghandi, M.; Sarkar, S.; Loudos, G.; Rahmim, A.

    2011-01-01

    The advent of fast scintillators offering high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) of the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. For the present investigation, count rate performance and signal-to-noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps increments, using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found in the good agreement between measured and simulated data for axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution, and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing windows, demonstrating considerable enhancement of the peak noise equivalent count rate (NECR). The peak NECR increased by ∼50% when utilizing a coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR, with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR, and a factor of 2.1 improvement in SNR is achievable
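
    The reported TOF benefit can be cross-checked against the standard back-of-the-envelope estimate (a sketch, not the GATE model of the study; the object diameter D is an assumption): a coincidence resolving time CRT localizes the annihilation to Δx = c·CRT/2, and the variance-reduction gain for an object of diameter D is commonly estimated as D/Δx, i.e., an SNR gain of √(D/Δx):

        C = 29.98   # speed of light [cm/ns]
        D = 35.0    # assumed object (phantom) diameter [cm]

        for crt_ps in (100, 300, 500, 700, 900):
            dx = C * (crt_ps * 1e-3) / 2.0        # localization FWHM [cm]
            gain = (D / dx) ** 0.5                # rule-of-thumb SNR gain
            print(f"CRT {crt_ps:3d} ps -> dx = {dx:5.2f} cm, SNR gain ~ {gain:4.2f}")

        # Randoms scale linearly with the coincidence window (R = 2*tau*S1*S2),
        # so narrowing the window from 12 ns to 4 ns cuts the random rate threefold.

    At a CRT of 500 ps FWHM and D = 35 cm this rule of thumb gives an SNR gain of about 2.2, consistent in magnitude with the factor of 2.1 reported above.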

  6. Investigation of time-of-flight benefits in an LYSO-based PET/CT scanner: A Monte Carlo study using GATE

    Energy Technology Data Exchange (ETDEWEB)

    Geramifar, P. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Ay, M.R., E-mail: mohammadreza_ay@tums.ac.ir [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Shamsaie Zafarghandi, M. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Sarkar, S. [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Loudos, G. [Department of Medical Instruments Technology, Technological Educational Institute, Athens (Greece); Rahmim, A. [Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore (United States); Department of Electrical and Computer Engineering, School of Engineering, Johns Hopkins University, Baltimore (United States)

    2011-06-11

    The advent of fast scintillators offering high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) of the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. For the present investigation, count rate performance and signal-to-noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps increments, using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found in the good agreement between measured and simulated data for axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution, and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing windows, demonstrating considerable enhancement of the peak noise equivalent count rate (NECR). The peak NECR increased by ∼50% when utilizing a coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR, with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR, and a factor of 2.1 improvement in SNR is achievable

  7. Monte Carlo-based investigations on the impact of removing the flattening filter on beam quality specifiers for photon beam dosimetry.

    Science.gov (United States)

    Czarnecki, Damian; Poppe, Björn; Zink, Klemens

    2017-06-01

    The impact of removing the flattening filter in clinical electron accelerators on the relationship between dosimetric quantities such as beam quality specifiers and the mean photon and electron energies of the photon radiation field was investigated by Monte Carlo simulations. The purpose of this work was to determine the uncertainties introduced when using the well-known beam quality specifiers or energy-based beam specifiers as predictors of dosimetric photon field properties after removing the flattening filter. Monte Carlo simulations applying eight different linear accelerator head models, with and without flattening filter, were performed in order to generate realistic radiation sources and to calculate field properties such as the restricted mass collision stopping-power ratio water-to-air, (L̄/ρ)water,air, and mean photon and secondary electron energies. To study the impact of removing the flattening filter on the beam quality correction factor kQ, this factor was calculated by Monte Carlo simulations for detailed ionization chamber models. Stopping-power ratios (L̄/ρ)water,air and kQ values for different ionization chambers were calculated as a function of TPR20,10 and %dd(10)x. Moreover, mean photon energies in air and at the point of measurement in water, as well as mean secondary electron energies at the point of measurement, were calculated. The results revealed that removing the flattening filter changed the relationship between %dd(10)x and (L̄/ρ)water,air by no more than 0.3%, whereas the relationship between TPR20,10 and (L̄/ρ)water,air changed by up to 0.8% for high-energy photon beams. TPR20,10 nevertheless remained a good predictor of (L̄/ρ)water,air for both accelerator types; variations within 1.1% and 1.6% were observed for TPR20,10 and %dd(10)x, respectively. The results of this study have shown that removing the flattening filter changes the relationship between the well-known beam quality specifiers and dosimetric quantities at the point of measurement
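
    For reference, the beam quality index TPR20,10 is simply the ratio of doses measured at 20 cm and 10 cm depth in water at a fixed source-chamber distance; a minimal sketch (the chamber readings are hypothetical, and the PDD-based conversion constant should be verified against IAEA TRS-398):

        def tpr_20_10(m20: float, m10: float) -> float:
            """Beam quality index: ionization ratio at 20 cm vs 10 cm depth,
            fixed source-detector distance, 10 cm x 10 cm field."""
            return m20 / m10

        def tpr_from_pdd(pdd_20_10: float) -> float:
            # Empirical conversion as quoted in IAEA TRS-398 (assumed; verify).
            return 1.2661 * pdd_20_10 - 0.0595

        # Hypothetical 6 MV chamber readings at 20 cm and 10 cm depth:
        print(f"TPR20,10 = {tpr_20_10(0.446, 0.667):.3f}")   # ~0.67, typical for 6 MV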

  8. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  9. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    Science.gov (United States)

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  10. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)

    2003-07-01

    In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed the more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions in the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply designers with the most reliable tools and data to calculate the dose rate in fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step, to be considered in 2003, will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
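
    The R2S bookkeeping can be illustrated with a toy single-nuclide example (the flux, cross section, and half-life below are invented placeholders; the real chain couples MCNP fluxes to FISPACT, not to this two-liner): the transport step supplies a reaction rate, the inventory step integrates build-up and decay, and the resulting activity then drives the decay-gamma transport run.

        import numpy as np

        phi   = 1e14               # neutron flux [n/cm^2/s] (assumed)
        sigma = 1e-24              # activation cross section [cm^2] (1 barn, assumed)
        n_par = 1e22               # parent atoms per cm^3 (assumed)
        lam   = np.log(2) / 312.0  # decay constant [1/s] for a ~5 min half-life (assumed)

        def activity(t_irr: float, t_cool: float) -> float:
            """Saturation build-up during irradiation, exponential decay after shutdown."""
            a_sat = phi * sigma * n_par                    # saturation activity [Bq/cm^3]
            a_end = a_sat * (1.0 - np.exp(-lam * t_irr))   # activity at end of irradiation
            return a_end * np.exp(-lam * t_cool)           # decay during cooling

        for t_cool in (1.0, 60.0, 3600.0):
            print(f"cooling {t_cool:7.0f} s -> A = {activity(3600.0, t_cool):.3e} Bq/cm^3")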

  11. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    International Nuclear Information System (INIS)

    Petrizzi, L.; Batistoni, P.; Migliori, S.; Chen, Y.; Fischer, U.; Pereslavtsev, P.; Loughlin, M.; Secco, A.

    2003-01-01

    In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed the more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions in the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply designers with the most reliable tools and data to calculate the dose rate in fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step, to be considered in 2003, will be an exercise in which the comparison will be done with dose-rate data from JET taken during and

  12. Ego Depletion Impairs Implicit Learning

    Science.gov (United States)

    Thompson, Kelsey R.; Sanchez, Daniel J.; Wesley, Abigail H.; Reber, Paul J.

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning; however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of the results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent. PMID:25275517

  13. Hsp90 depletion goes wild

    OpenAIRE

    Siegal, Mark L; Masel, Joanna

    2012-01-01

    Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...

  14. Ego depletion impairs implicit learning.

    Directory of Open Access Journals (Sweden)

    Kelsey R Thompson

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning; however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of the results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  15. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity and demonstrated that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  16. Monte Carlo simulations and radiation dosimetry measurements of 142Pr capillary tube-based radioactive implant (CTRI). A new structure for brachytherapy sources

    International Nuclear Information System (INIS)

    Bakht, M.K.; Haddadi, A.; Sadeghi, M.; Ahmadi, S.J.; Sadjadi, S.S.; Tenreiro, C.

    2013-01-01

    Previously, a promising β⁻-emitting praseodymium-142 glass seed was proposed for brachytherapy of prostate cancer. In accordance with the previous study, a ¹⁴²Pr capillary tube-based radioactive implant (CTRI) was suggested as a source with a new structure to enhance the application of β⁻-emitting radioisotopes such as ¹⁴²Pr in brachytherapy. Praseodymium oxide powder was encapsulated in a glass capillary tube. Then, a thin and flexible fluorinated ethylene propylene Teflon layer sealed the capillary tube. The source was activated in the Tehran Research Reactor by the ¹⁴¹Pr(n,γ)¹⁴²Pr reaction. Measurements of the dosimetric parameters were performed using GafChromic radiochromic film. In addition, the dose rate distribution of the ¹⁴²Pr CTRI was calculated by modeling the ¹⁴²Pr source in a water phantom using the Monte Carlo N-Particle Transport (MCNP5) code. The active source was unreactive and did not leak in water. In comparison with the earlier proposed ¹⁴²Pr seed, the suggested source showed similar desirable dosimetric characteristics. Moreover, the ¹⁴²Pr CTRI production procedure may be technically and economically more feasible. The mass of praseodymium in the CTRI structure could be greater than that of the ¹⁴²Pr glass seed; therefore, the required irradiation time and neutron flux could be reduced. A ¹⁴²Pr CTRI was proposed for brachytherapy of prostate cancer. Dosimetric calculations by experimental measurement and Monte Carlo simulation were performed to fulfill the requirements of the American Association of Physicists in Medicine recommendations before the clinical use of new brachytherapy sources. The characteristics of the suggested source were compared with those of the previously proposed ¹⁴²Pr glass seed. (author)

  17. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: Case study

    Energy Technology Data Exchange (ETDEWEB)

    Bieda, Bogusław

    2014-05-01

    The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the Crystal Ball® (CB) software, which works with a Microsoft® Excel spreadsheet model, was used. The framework of the study was originally developed for the year 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected from MSP for 2005, was analyzed and used for the MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied throughout the steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
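
    The workflow — assign a probability distribution to each inventory flow, draw many trials, and read frequency statistics off the aggregate — is straightforward to reproduce outside a spreadsheet. A minimal sketch (the three flows and their means and spreads are invented placeholders, not MSP plant data):

        import numpy as np

        rng = np.random.default_rng(42)
        N = 10_000   # number of trials, as in the study

        # Hypothetical inventory inputs (mean, standard deviation), arbitrary units:
        inputs = {
            "coke":   (0.50, 0.05),
            "sinter": (1.30, 0.10),
            "bf_gas": (1.70, 0.20),
        }

        samples = {k: rng.normal(mu, sd, N) for k, (mu, sd) in inputs.items()}
        total = sum(samples.values())        # toy aggregate inventory indicator

        print(f"mean = {total.mean():.3f}, std = {total.std(ddof=1):.3f}")
        print(f"90% interval = [{np.percentile(total, 5):.3f}, {np.percentile(total, 95):.3f}]")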

  18. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article. Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.
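
    The essence of the technique is to sample |ψ_T|² for a trial wavefunction ψ_T with Metropolis moves and average the local energy E_L = (Hψ_T)/ψ_T; minimizing over the variational parameter bounds the ground-state energy from above. A self-contained sketch for the 1D harmonic oscillator with ψ_T = exp(-α x²), in units ħ = m = ω = 1 (parameters are illustrative, not taken from the article):

        import numpy as np

        rng = np.random.default_rng(1)

        def vmc_energy(alpha: float, n_steps: int = 200_000, step: float = 1.0) -> float:
            """Metropolis sampling of |psi|^2 for psi = exp(-alpha x^2)."""
            x, energies = 0.0, []
            for _ in range(n_steps):
                x_new = x + step * (rng.random() - 0.5)
                # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
                if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
                    x = x_new
                # local energy E_L = alpha + x^2 (1/2 - 2 alpha^2)
                energies.append(alpha + x * x * (0.5 - 2.0 * alpha * alpha))
            return float(np.mean(energies[n_steps // 10:]))   # discard burn-in

        for alpha in (0.3, 0.5, 0.7):
            print(f"alpha = {alpha:.1f}  E = {vmc_energy(alpha):.4f}")
        # The minimum, E = 0.5 at alpha = 0.5, recovers the exact ground state.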

  19. A study of parallelizing O(N) Green-function-based Monte Carlo method for many fermions coupled with classical degrees of freedom

    International Nuclear Information System (INIS)

    Zhang, Shixun; Yamagiwa, Shinichi; Yunoki, Seiji

    2013-01-01

    Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, where a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N³) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and NVIDIA's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and for increasing number of nodes. The performance evaluation indicates that for a 32³-site Hamiltonian a single GPU shows performance equivalent to more than 30 CPU cores parallelized using MPI
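
    The kernel-polynomial trick at the heart of the method fits in a few lines: Chebyshev moments of the rescaled Hamiltonian are accumulated with random vectors, so only matrix-vector products are needed — O(N) per moment when H is sparse. The sketch below is illustrative, not the paper's implementation (it uses a dense toy matrix and full diagonalization for the spectral bound, which a production code would replace with a sparse H and a Lanczos/Gershgorin estimate):

        import numpy as np

        rng = np.random.default_rng(7)

        def kpm_moments(H: np.ndarray, n_moments: int = 64, n_rand: int = 10) -> np.ndarray:
            """Stochastic estimate of mu_m = Tr T_m(H); H rescaled to spectrum in [-1, 1]."""
            N = H.shape[0]
            mu = np.zeros(n_moments)
            for _ in range(n_rand):
                r = rng.choice([-1.0, 1.0], size=N)   # random +-1 probe vector
                t0, t1 = r, H @ r                     # T_0(H) r and T_1(H) r
                mu[0] += r @ t0
                mu[1] += r @ t1
                for m in range(2, n_moments):
                    t0, t1 = t1, 2.0 * (H @ t1) - t0  # Chebyshev recurrence
                    mu[m] += r @ t1
            return mu / n_rand

        N = 256
        A = rng.normal(size=(N, N))
        H = (A + A.T) / 2.0
        H /= 1.1 * np.max(np.abs(np.linalg.eigvalsh(H)))  # rescale spectrum into [-1, 1]
        print(kpm_moments(H)[:4])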

  20. Transient Treg depletion enhances therapeutic anti‐cancer vaccination

    Science.gov (United States)

    Aston, Wayne J.; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L.; Solin, Jessica N.; Ma, Shaokang; Lesterhuis, W. Joost; Dick, Ian; Holt, Robert A.; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A.

    2016-01-01

    Introduction: Regulatory T cells (Treg) play an important role in suppressing anti-tumor immunity, and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti-cancer immunity, we used a diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Methods: Tumor-bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival were monitored. Results: DTX specifically depleted Treg in a transient, dose-dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs, as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only once CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor-peptide vaccination. Conclusions: BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti-tumor immunity. DTX-mediated Treg depletion is transient, dose-dependent, and leads to strong anti-tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor-specific vaccination at low doses. Together, these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies. PMID:28250921

  1. [Acute tryptophan depletion in eating disorders].

    Science.gov (United States)

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rationale justifying the use of the acute tryptophan depletion technique in eating disorders (ED), and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies and makes it possible to evaluate brain serotonin activity. It is therefore used to investigate hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship between dysfunctions of serotonin activity and impulsive symptoms, the technique may be useful in the biological differentiation of the different ED subtypes, that is, restrictive and bulimic. Fifty-seven female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally to patients and controls after a two-day low-tryptophan diet. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of basic personality characteristics and impulsivity traits was also carried out. The relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. The test was effective, considerably reducing plasma tryptophan from baseline levels (by 76%) within five hours in the overall sample. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value for studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  2. Computer simulations and theoretical aspects of the depletion interaction in protein-oligomer mixtures.

    Science.gov (United States)

    Boncina, M; Rescic, J; Kalyuzhnyi, Yu V; Vlachy, V

    2007-07-21

    The depletion interaction between proteins caused by the addition of either uncharged or partially charged oligomers was studied using the canonical Monte Carlo simulation technique and integral equation theory. A protein molecule was modeled in two different ways: either as (i) a hard sphere of diameter 30.0 Å with net charge 0 or +5, or (ii) as a hard sphere of diameter 45.4 Å with discrete charges (depending on the pH of the solution). The oligomers were pictured as tangentially jointed, uncharged or partially charged hard spheres. The ions of a simple electrolyte present in solution were represented by charged hard spheres distributed in a dielectric continuum. In this study we were particularly interested in changes of the protein-protein pair-distribution function caused by the addition of the oligomer component. In agreement with previous studies, we found that the addition of a nonadsorbing oligomer reduces the phase stability of the solution, which is reflected in the shape of the protein-protein pair-distribution function. The value of this function at protein-protein contact increases with increasing oligomer concentration and is larger for charged oligomers. The range of the depletion interaction and its strength also depend on the length (number of monomer units) of the oligomer chain. The integral equation theory applied in this study, based on the Wertheim Ornstein-Zernike approach, was found to be in fair agreement with Monte Carlo results only for very short oligomers. The computer simulations for the model mimicking the lysozyme molecule (ii) are in qualitative agreement with small-angle neutron scattering experiments on lysozyme-dextran mixtures.
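
    The classic limiting case of this effect — uncharged, ideal depletants — is the Asakura-Oosawa potential: the attraction equals the depletant osmotic pressure times the overlap volume of the two exclusion shells. A short sketch (sphere diameters chosen near model (i) above; the depletant size and density are illustrative, and the charged-oligomer physics of the paper is not captured):

        import numpy as np

        def ao_depletion(r: float, sigma_c: float = 30.0, sigma_p: float = 6.0,
                         rho_p: float = 1e-4) -> float:
            """Asakura-Oosawa depletion potential, in units of kT, between two hard
            spheres of diameter sigma_c [Å] in an ideal depletant bath of diameter
            sigma_p [Å] and number density rho_p [Å^-3] (all values assumed)."""
            if r < sigma_c:
                return np.inf                 # hard-core overlap forbidden
            D = sigma_c + sigma_p             # range of the depletion attraction
            if r >= D:
                return 0.0
            v_overlap = (np.pi / 6.0) * D**3 * (1.0 - 1.5 * r / D + 0.5 * (r / D) ** 3)
            return -rho_p * v_overlap         # contact well deepens with depletant density

        for r in (30.0, 32.0, 34.0, 36.0):
            print(f"r = {r:4.1f} Å  U/kT = {ao_depletion(r):7.3f}")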

  3. Depleted uranium plasma reduction system study

    International Nuclear Information System (INIS)

    Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.

    1994-12-01

    A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted uranium is currently stored in the form of solid UF₆, of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessar