Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements
Energy Technology Data Exchange (ETDEWEB)
Chandler, David [ORNL]; Maldonado, G. Ivan [ORNL]; Primm, Trent [ORNL]
2009-11-01
Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass-spectrographic data measured in the late 1960s and early 1970s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burn-up dependent nuclide inventory, specifically the post-irradiation uranium
Tippayakul, Chanatip
The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. Primarily, this research involved two major activities: model and method development, and analysis and validation of the developed models and methods. The starting point of this research was the utilization of the earlier developed fuel management tool, TRIGSIM, to create the Monte Carlo model of core loading 51 (end of the core loading). When the normalized power results of the Monte Carlo model were compared to those of the current fuel management system (HELIOS/ADMARC-H), they agreed reasonably well (within 2%-3% difference on average). Moreover, the reactivity of some fuel elements was calculated by the Monte Carlo model and compared with measured data; the fuel element reactivity results of the Monte Carlo model were likewise in good agreement with the measured data. However, the subsequent task of analyzing the conversion from core loading 51 to core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities in order to validate the Monte Carlo model against the measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM demonstrated good modeling capabilities, its accuracy could be further improved by adopting more advanced algorithms. Therefore, TRIGSIM was planned to be upgraded. The first task of upgrading TRIGSIM involved the improvement of the temperature modeling capability. The new TRIGSIM was
International Nuclear Information System (INIS)
The purpose of this study is to validate a Monte Carlo based depletion methodology by comparing calculated post-irradiation uranium isotopic compositions in the fuel elements of the High Flux Isotope Reactor (HFIR) core to values measured using uranium mass-spectrographic analysis. Three fuel plates were analyzed: two from the outer fuel element (OFE) and one from the inner fuel element (IFE). Fuel plates O-111-8, O-350-I, and I-417-24 from outer fuel elements 5-O and 21-O and inner fuel element 49-I, respectively, were selected for examination. Fuel elements 5-O, 21-O, and 49-I were loaded into HFIR during cycles 4, 16, and 35, respectively (mid to late 1960s). Approximately one year after irradiation, each of these elements was transferred to the High Radiation Level Examination Laboratory (HRLEL), where samples from these fuel plates were sectioned and examined via uranium mass-spectrographic analysis. The isotopic composition of each of the samples was used to determine the atomic percent of the uranium isotopes. A Monte Carlo based depletion computer program, ALEPH, which couples the MCNP and ORIGEN codes, was utilized to calculate the nuclide inventory at the end-of-cycle (EOC). A current ALEPH/MCNP input for HFIR fuel cycle 400 was modified to replicate cycles 4, 16, and 35. The control element withdrawal curves and flux trap loadings were revised, as were the radial zone boundaries and nuclide concentrations in the MCNP model. The calculated EOC uranium isotopic compositions for the analyzed plates were found to be in good agreement with measurements, which shows that ALEPH/MCNP can accurately calculate burn-up dependent uranium isotopic concentrations for the HFIR core. The spatial power distribution in HFIR changes significantly as irradiation time increases due to control element movement. Accurate calculation of the end-of-life uranium isotopic inventory is a good indicator that the power distribution variation as a function of space and
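The quantity compared above, post-irradiation uranium atom percent, can be illustrated with a toy one-group, constant-flux depletion of the main uranium isotopes. This is only a sketch of the physics ALEPH resolves in full detail: the cross sections, flux, and initial composition below are illustrative placeholders, not HFIR data.

```python
# Toy one-group, constant-flux depletion of a uranium chain, ending in the
# atom-percent form reported by mass-spectrographic analysis.
# All nuclear data here are illustrative placeholders, not HFIR values.
PHI = 2.0e15          # flux, neutrons/cm^2/s (assumed)
BARN = 1.0e-24        # cm^2 per barn
sig_a_235, sig_c_235 = 680.0 * BARN, 98.0 * BARN   # U-235 absorption, capture
sig_a_236, sig_a_238 = 5.0 * BARN, 2.7 * BARN      # U-236, U-238 absorption

def deplete(n235, n236, n238, t, dt=3600.0):
    """Explicit-Euler march of the uranium chain over t seconds."""
    for _ in range(int(t / dt)):
        c235 = sig_c_235 * PHI * n235              # U-235 capture feeds U-236
        n235 += dt * (-sig_a_235 * PHI * n235)
        n236 += dt * (c235 - sig_a_236 * PHI * n236)
        n238 += dt * (-sig_a_238 * PHI * n238)
    return n235, n236, n238

# HEU-like initial atom fractions, depleted over a 23-day cycle
n235, n236, n238 = deplete(0.93, 0.0, 0.07, 23 * 86400.0)
total = n235 + n236 + n238
atom_pct = [100.0 * n / total for n in (n235, n236, n238)]
```

The characteristic signature the measurements capture is visible even here: U-235 burns out strongly, U-236 builds in from capture, and U-238 is nearly unchanged.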
MCOR - Monte Carlo depletion code for reference LWR calculations
International Nuclear Information System (INIS)
Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Beyond these capabilities, the MCOR code's newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations.
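The predictor-corrector depletion algorithm listed under (a) can be sketched for a single absorber whose one-group cross section depends on its own density (a crude stand-in for self-shielding). The cross-section model and all numbers below are invented for illustration; MCOR evaluates the rates with full MCNP5/KORIGEN solves.

```python
import math

# Predictor-corrector burnup step for one absorber with a density-dependent
# (crudely self-shielded) one-group cross section. All values are illustrative.
PHI = 1.0e14                            # assumed constant flux, 1/cm^2/s

def sigma(n):
    """Toy self-shielded microscopic cross section (cm^2) vs. number density."""
    return 5.0e-21 / (1.0 + n / 0.02)

def pc_step(n, dt):
    lam_p = sigma(n) * PHI                         # predictor: BOS reaction rate
    n_pred = n * math.exp(-lam_p * dt)             # predicted EOS density
    lam_c = sigma(n_pred) * PHI                    # corrector: EOS reaction rate
    return n * math.exp(-0.5 * (lam_p + lam_c) * dt)   # average the two rates

def fine_reference(n, dt, substeps=20000):
    """Many tiny steps as a near-exact reference for the same ODE."""
    h = dt / substeps
    for _ in range(substeps):
        n *= math.exp(-sigma(n) * PHI * h)
    return n

n0, dt = 0.02, 30 * 86400.0            # one 30-day burnup step
n_pc = pc_step(n0, dt)
n_ref = fine_reference(n0, dt)
```

Even with a single coarse step, averaging the beginning-of-step and predicted end-of-step rates keeps the result within a few percent of the finely resolved reference, which is exactly why the predictor-corrector form permits long burnup steps.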
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Beyond these capabilities, the MCOR code's newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations. Additionally
Monte Carlo simulation in UWB1 depletion code
International Nuclear Information System (INIS)
The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) allow the compensation of the initial reactivity excess of nuclear fuel and result in an increase of fuel cycle lengths with higher enriched fuels. The paper describes the depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, rare earth oxides are divided into two equally numerous groups: suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides marked as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article.
Monte Carlo solver for UWB1 nuclear fuel depletion code
International Nuclear Information System (INIS)
Highlights: • A new Monte Carlo solver was developed in order to speed up depletion calculations. • For an LWR model, the UWB1 Monte Carlo solver is on average 10 times faster than MCNP6. • The UWB1 code will allow faster calculation analysis of BA parameters in fuel design. - Abstract: Recent nuclear reactor burnable absorber research tries to introduce new materials into the nuclear fuel. As a part of this effort, a fast computational tool is being developed for advanced nuclear fuel. The first version of the newly developed UWB1 fast nuclear fuel depletion code significantly reduced calculation time by omitting the solution step for the Boltzmann transport equation. However, the neutron multiplication factor during depletion was then not estimated with sufficient accuracy; therefore, at least one transport calculation per fuel depletion is necessary. This paper presents a new Monte Carlo solver that is implemented into the UWB1 code. The UWB1 Monte Carlo solver calculates the neutron multiplication factor and neutron flux in the fuel for collapsed cross sections. Accuracy of the solver is supported by using current nuclear data stored in the ENDF/B-VII.1 library. The solver's speed is the product of development focused on minimizing CPU utilization at the expense of RAM demands. The UWB1 Monte Carlo solver is approximately 14 times faster than the MCNP6 reference code when one transport equation solution within fuel depletion is compared. A further speed-up can be achieved by employing an advanced depletion scheme for the coupled transport and burnup equations. The resulting faster code will be used in optimization studies for ideal burnable absorber material selection, where many different materials and concentrations will be evaluated
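The core task of such a solver, estimating the neutron multiplication factor by Monte Carlo, can be shown in miniature with an analog one-group simulation of an infinite homogeneous medium, where the analytic answer is k_inf = nu * sigma_f / sigma_a. The macroscopic cross sections and nu below are illustrative, not ENDF/B-VII.1 data, and the sketch omits everything UWB1 actually has to treat (geometry, energy dependence, collapsed cross sections).

```python
import random

# Analog one-group Monte Carlo estimate of k_inf for an infinite homogeneous
# medium. Nuclear data values are illustrative placeholders.
SIG_S, SIG_F, SIG_C = 0.6, 0.25, 0.15     # macroscopic scatter, fission, capture
SIG_T = SIG_S + SIG_F + SIG_C
NU = 2.43                                  # mean neutrons per fission

def k_inf_estimate(histories, seed=1):
    rng = random.Random(seed)
    produced = 0.0
    for _ in range(histories):
        while True:                        # follow one neutron to absorption
            xi = rng.random() * SIG_T
            if xi < SIG_S:
                continue                   # scattered: sample the next collision
            if xi < SIG_S + SIG_F:
                produced += NU             # fission: bank nu new neutrons
            break                          # absorbed either way
        # k_inf = expected neutrons produced per starting neutron
    return produced / histories

k = k_inf_estimate(200_000)
k_analytic = NU * SIG_F / (SIG_F + SIG_C)  # nu * sigma_f / sigma_a
```

With 200,000 histories the statistical uncertainty on k is a few tenths of a percent, which illustrates the trade the abstract describes: each transport solve is expensive, so doing only one per depletion is a meaningful saving.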
Monte Carlo depletion analysis of a PWR integral fuel burnable absorber by MCNAP
International Nuclear Information System (INIS)
The MCNAP is a personal computer-based continuous energy Monte Carlo (MC) neutronics analysis program written in C++. For the purpose of examining its qualification, a comparison of the depletion analysis of three integral burnable fuel assemblies of the pressurized water reactor (PWR) by the MCNAP and by deterministic fuel assembly (FA) design vendor codes is presented. It is demonstrated that the continuous energy MC calculation by the MCNAP can provide a very accurate neutronics analysis method for burnable absorber FAs. It is also demonstrated that parallel MC computation using multiple PCs enables one to complete the lifetime depletion analysis of the FAs within hours instead of days. (orig.)
Institute of Scientific and Technical Information of China (English)
XIAO Chang-Ming; GUO Ji-Yuan; HU Ping
2006-01-01
According to the acceptance ratio method, the influences on the depletion interactions between a large sphere and a plate from another closely placed large sphere are studied by Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the presence of the closely placed large sphere: the closer the large sphere is placed to them, the larger the influence will be. Furthermore, the influences on the depletion interactions from another large sphere are more sensitive to the angle than to the distance.
Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: lesinki@cab.cnea.gob.ar
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here. Some benchmark results and future possibilities are also analyzed. The methods are: depletion calculations at the cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in a Monte Carlo calculation for fixed burnup distributions obtained from diffusion codes; the same as the first, but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at the cell level to obtain the concentrations of nuclides to be used in a full reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes, in several steps. All these methods were used on different problems for research reactors, and some comparisons with experimental results from regular lattices were performed. In this work, a summary of all these efforts is presented, and a discussion of the advantages and problems found is included. Also included is a brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes. (author)
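The shape of such a coupling can be sketched as a driver that alternates a transport solve (stubbed below with a fixed one-group flux and a crude density feedback) and an ORIGEN-style depletion step, carrying nuclide densities between them. The stub functions and all numbers are invented for illustration; in the MCQ system these roles are played by full MCNP and ORIGEN runs exchanging data.

```python
import math

# Minimal coupled transport/depletion driver. Both "solvers" are stubs with
# placeholder data, standing in for MCNP (transport) and ORIGEN (depletion).
def transport_solve(densities):
    """Stub transport step: return one-group absorption rates per nuclide."""
    phi = 1.0e14 / (1.0 + densities["U235"])       # crude density feedback
    sigma_a = {"U235": 680.0e-24, "U238": 2.7e-24}  # cm^2, illustrative
    return {nuc: sigma_a[nuc] * phi for nuc in densities}

def depletion_step(densities, rates, dt):
    """Stub depletion step: exponential burnout at the frozen rates."""
    return {nuc: n * math.exp(-rates[nuc] * dt) for nuc, n in densities.items()}

comp = {"U235": 0.05, "U238": 0.95}
history = [comp["U235"]]
for _ in range(10):                                 # 10 burnup steps
    rates = depletion_step.__defaults__ or None     # (no hidden state: rates below)
    rates = transport_solve(comp)                   # 1) transport with current comp
    comp = depletion_step(comp, rates, 5 * 86400.0) # 2) deplete over the step
    history.append(comp["U235"])
```

The essential point of all three methods in the abstract is visible in the loop: the transport solution is re-evaluated as the composition changes, so the depletion rates track the evolving spectrum and flux.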
International Nuclear Information System (INIS)
Monte Carlo depletion calculations for nuclear reactors are affected by the presence of stochastic noise in the local flux estimates produced during the calculation. The effects of this random noise and its propagation between timesteps during long depletion simulations are not well understood. To improve this understanding, a series of Monte Carlo depletion simulations have been conducted for a 3-D, eighth-core model of the H.B. Robinson PWR. The studies were performed by using the in-line depletion capability of the MC21 Monte Carlo code to produce multiple independent depletion simulations. Global and local results from each simulation are compared in order to determine the variance among the different depletion realizations. These comparisons indicate that global quantities, such as eigenvalue (keff), do not tend to diverge among the independent depletion calculations. However, local quantities, such as fuel concentration, can deviate wildly between independent depletion realizations, especially at high burnup levels. Analysis and discussion of the results from the study are provided, along with several new observations regarding the propagation of random noise during Monte Carlo depletion calculations. (author)
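The mechanism studied above can be imitated in a toy setting: run many independent depletion "realizations" of one fuel region whose flux carries Monte Carlo noise, and examine the spread of the final local density across realizations. The noise level, depletion rate, and step count below are invented; the MC21 study does this with full-core transport.

```python
import math
import random

# Independent depletion realizations of a single region with a noisy flux,
# mimicking the propagation of stochastic noise between timesteps.
# All numbers are illustrative.
def one_realization(rng, steps=50):
    n = 1.0
    for _ in range(steps):
        phi = rng.gauss(1.0, 0.05)        # 5% stochastic noise on the local flux
        n *= math.exp(-0.02 * phi)        # deplete this step with the noisy flux
    return n

rng = random.Random(42)
finals = [one_realization(rng) for _ in range(500)]
mean = sum(finals) / len(finals)
spread = (sum((f - mean) ** 2 for f in finals) / len(finals)) ** 0.5
rel_spread = spread / mean                # realization-to-realization variance
```

Because the noise enters the exponent additively step by step, the relative spread grows with the number of steps, a simple analogue of the observation that local quantities deviate most at high burnup.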
Depletion of a BWR lattice using the racer continuous energy Monte Carlo code
International Nuclear Information System (INIS)
In the past several years there has been renewed interest in the accuracy of a new generation of lattice physics codes. Most of the time these codes are benchmarked against Monte Carlo codes only at beginning of cycle. In this paper a highly heterogeneous BWR lattice depletion benchmark problem is presented. Results of a 40% void depletion using the RACER continuous energy Monte Carlo code are also presented. Complete problem specifications are given so that comparisons with lattice physics codes or other Monte Carlo codes are possible. The RACER calculations were performed with the ENDF/B-V cross section set. Each flux calculation utilized 2.7 million histories, resulting in 95% confidence intervals of ∼1 milli-k on the eigenvalue and ∼1% uncertainties on pin-wise power fractions. Timing statistics for the calculation using the vectorized RACER code averaged ∼24,000 neutrons/minute on a single processor of a CRAY-C90 computer
The development of depletion program coupled with Monte Carlo computer code
International Nuclear Information System (INIS)
The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called MCDL (Monte Carlo Depletion for Light Water Reactors). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA Implicit Runge-Kutta (IRK) method after receiving the one-group neutron flux, reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core from the calculation results of the MCNP5 code. Calculations for beryllium poisoning and cooling time are also integrated into the code. To verify and validate the MCDL code, VVR-M2 type high enriched uranium (HEU) and low enriched uranium (LEU) fuel assemblies, as well as cores of 89 fresh HEU fuel assemblies and 92 fresh LEU fuel assemblies of the Dalat Nuclear Research Reactor (DNRR), have been investigated and compared with results calculated by the SRAC code and the MCNPREBUS linkage system code. The results show good agreement between the calculated data of the MCDL code and the reference codes. (author)
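The Radau IIA IRK idea can be sketched on a linear Bateman system dN/dt = M N. The sketch below implements the two-stage (order 3) Radau IIA scheme for a hypothetical two-nuclide decay chain; MCDL's actual system, decay constants, and IRK order are of course its own, and the numbers here are invented so the result can be checked against the analytic Bateman solution.

```python
import numpy as np

# Two-stage Radau IIA (order 3) applied to a linear Bateman system dN/dt = M N.
A = np.array([[5 / 12, -1 / 12], [3 / 4, 1 / 4]])   # Butcher matrix
b = np.array([3 / 4, 1 / 4])                        # Butcher weights

def radau_step(M, y, h):
    """One implicit step: solve the coupled stage equations, then combine."""
    n = len(y)
    # Stages satisfy K_i = M (y + h * sum_j A[i,j] K_j), i.e.
    # (I - h * kron(A, M)) K = [M y; M y].
    big = np.eye(2 * n) - h * np.kron(A, M)
    rhs = np.concatenate([M @ y, M @ y])
    K = np.linalg.solve(big, rhs).reshape(2, n)
    return y + h * (b @ K)

# Hypothetical chain: parent (lam1) -> daughter (lam2) -> removed
lam1, lam2 = 0.3, 0.05
M = np.array([[-lam1, 0.0], [lam1, -lam2]])
y = np.array([1.0, 0.0])
h, steps = 0.5, 40
for _ in range(steps):
    y = radau_step(M, y, h)

t = h * steps
exact1 = np.exp(-lam1 * t)                                  # Bateman, parent
exact2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

Radau IIA is L-stable, which is why it suits stiff depletion systems: even with step sizes far larger than the fastest decay constant would allow an explicit method, the solution stays stable and tracks the analytic Bateman result closely.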
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay, aimed at producing neutron beams for experiments. It is a pool-type reactor moderated by heavy water, with a core cooled by light water, and a thermal power of 14 MW. The ORPHEE core is 90 cm high with a cross section of 27x27 cm2. It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plates are composed of aluminium and high enriched uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback in nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration with a large number of depleting regions at a high level of confidence.
Monte Carlo Depletion Analysis of a TRU-Cermet Fuel. Design for a Sodium Cooled Fast Reactor
International Nuclear Information System (INIS)
Monte Carlo depletion has generally not been considered practical for designing the equilibrium cycle of a reactor. One objective of the work here was to demonstrate that recent advances in high performance computing clusters are making Monte Carlo core depletion competitive with traditional deterministic depletion methods for some applications. The application here was a sodium fast reactor core with an innovative TRU cermet fuel type. An equilibrium cycle search was performed for a multi-batch core loading using the Monte Carlo depletion code Monteburns. A final fuel design of 38 w/o TRU with a pin radius of 0.32 cm was found to display operating characteristics similar to its metal-fueled counterparts. The TRU-cermet fueled core has a smaller sodium void worth and a less negative axial expansion coefficient. These effects result in a core with safety characteristics similar to the metal fuel design; however, the TRU consumption rate of the cermet fueled core is found to be higher than that of the metal fueled core. (authors)
Qin, Jianguo; Lai, Caifeng; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculation, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energy below 7 MeV is calculated by MCNP5 code; secondly, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated based on EGSnrc Monte Carlo Code with the g...
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Directory of Open Access Journals (Sweden)
Luo Ronghua
2008-11-01
An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can also be adjusted adaptively over time according to the uncertainty of the robot's pose by using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to demonstrate the efficiency of the new localization algorithm.
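Setting the coevolution machinery aside, the adaptive-sample-size idea can be shown with a toy one-dimensional MCL step: weight particles by a range measurement, resample, and shrink the population as the pose uncertainty (here, simply the particle spread) falls. Everything below, the world, the sensor model, and the population-sizing rule, is invented for illustration and is far simpler than the paper's species-based scheme.

```python
import math
import random

# Toy 1-D Monte Carlo localization with a spread-driven sample-size rule.
# Sensor model, jitter, and sizing constants are illustrative placeholders.
def mcl_step(particles, true_pos, rng, noise=0.5):
    z = true_pos + rng.gauss(0.0, noise)               # noisy position reading
    weights = [math.exp(-((p - z) ** 2) / (2 * noise ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # systematic resampling
    n = len(particles)
    step, u = 1.0 / n, rng.random() / n
    resampled, c, i = [], weights[0], 0
    for k in range(n):
        while i < n - 1 and u + k * step > c:
            i += 1
            c += weights[i]
        resampled.append(particles[i] + rng.gauss(0.0, 0.05))  # small jitter
    # adapt the sample size to the remaining spread (population-growth flavour)
    spread = max(resampled) - min(resampled)
    target = max(50, min(1000, int(200 * spread)))
    return rng.sample(resampled, target) if target < n else resampled

rng = random.Random(3)
particles = [rng.uniform(0.0, 20.0) for _ in range(1000)]
for _ in range(15):
    particles = mcl_step(particles, 12.0, rng)
estimate = sum(particles) / len(particles)
```

As the cloud collapses around the true pose, the population shrinks toward its floor, which is the computational payoff the abstract claims: few samples suffice once the posterior is well represented.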
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Luo Ronghua; Hong Bingrong
2004-01-01
An adaptive Monte Carlo localization algorithm based on coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that the multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. And the sample size can be adjusted adaptively over time according to the unce...
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculations, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energies below 7 MeV is calculated by the MCNP5 code; in the second, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated with the EGSnrc Monte Carlo code, using the gamma-ray spectrum from the first step as input. The comparison of calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is validated.
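The chaining of the two steps can be shown in miniature: step one yields a gamma-ray spectrum, and step two samples source photons from that spectrum to drive a detector-response calculation. The four-bin spectrum and the "response" below (a flat draw below the Compton edge) are toy stand-ins for the MCNP5 output and the EGSnrc electron transport; only the Compton-edge kinematics is real physics.

```python
import random

# Two-step pipeline in miniature: a step-one spectrum feeds a toy step-two
# detector-response sampler. Spectrum values are invented placeholders.
spectrum_bins = [1.0, 2.0, 3.0, 4.0]          # MeV bin centres (step-one output)
spectrum_wts = [0.4, 0.3, 0.2, 0.1]           # normalised bin intensities

def sample_recoil(rng):
    # pick a source gamma from the step-one spectrum...
    e_gamma = rng.choices(spectrum_bins, weights=spectrum_wts)[0]
    # ...then draw a toy electron-recoil energy below the Compton edge,
    # E_edge = 2 E^2 / (2 E + m_e c^2) with m_e c^2 = 0.511 MeV
    edge = e_gamma / (1.0 + 0.511 / (2.0 * e_gamma))
    return rng.random() * edge                 # flat toy response, not EGSnrc

rng = random.Random(7)
recoils = [sample_recoil(rng) for _ in range(100_000)]
mean_recoil = sum(recoils) / len(recoils)
```

The design point the paper exploits is visible here: once the expensive step-one spectrum is in hand, the step-two response can be resampled cheaply and independently, so each stage can use the code best suited to it.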
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational
Accelerated GPU based SPECT Monte Carlo simulations
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
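The data-parallel restructuring behind such GPU speed-ups can be hinted at in miniature: instead of tracking photons one at a time, sample an entire batch of free paths in a single vectorized call. The attenuation coefficient and slab thickness below are illustrative, and NumPy vectorization here merely stands in for the GPU kernels a code like GGEMS actually uses.

```python
import numpy as np

# Batched photon free-path sampling: one vectorized call handles every photon,
# the same shape of parallelism a GPU kernel exploits. Values are illustrative.
rng = np.random.default_rng(0)
mu = 0.15                     # assumed linear attenuation coefficient, 1/cm
batch = 1_000_000
paths = rng.exponential(1.0 / mu, size=batch)   # free path of each photon
mean_free_path = paths.mean()                   # should approach 1/mu
survive = (paths > 5.0).mean()                  # transmission through 5 cm slab
```

The Monte Carlo estimates recover the analytic results (mean free path 1/mu, transmission exp(-mu * x)) while touching each photon only through array-wide operations, which is what makes the batch formulation map so well onto GPUs.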
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
International Nuclear Information System (INIS)
We propose a novel approach for simulating, with atomistic kinetic Monte Carlo (KMC), the segregation or depletion of solute atoms at interfaces via transport by vacancies. Differently from classical lattice KMC, no assumption is made regarding the crystallographic structure. The model can thus potentially be applied to any type of interface, e.g. grain boundaries. Fully off-lattice KMC models have already been proposed in the literature, but are rather demanding in CPU time, mainly because of the necessity to perform static relaxation several times at every step of the simulation and to calculate migration energies between different metastable states. In our LA-KMC model, we aim at performing static relaxation only once per step at most, and we define possible transitions to other metastable states following a generic predefined procedure. The corresponding migration energies can then be calculated using artificial neural networks, trained to predict them as a function of a full description of the local atomic environment, in terms of both the exact locations of the atoms in space and their chemical nature. Our model is thus a compromise between fully off-lattice and fully on-lattice models: (a) the description of the system is not bound to strict assumptions, but is readapted automatically by performing the minimum required amount of static relaxation; (b) the procedure to define transition events is not guaranteed to find all important transitions, and thereby potentially disregards some mechanisms of system evolution; this shortcoming is in fact common to non-fully-off-lattice models, but is in our case limited thanks to the application of relaxation at every step; (c) computing time is largely reduced thanks to the use of neural networks to calculate the migration energies. In this presentation, we show the premises of this novel approach in the case of grain boundaries for bcc Fe-Cr alloys. (authors)
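However the migration energies are obtained (here, fixed numbers standing in for the neural-network prediction), they feed a standard residence-time KMC step: barriers become Arrhenius rates, an event is chosen with probability proportional to its rate, and the clock advances by an exponential waiting time. The barriers, temperature, and attempt frequency below are hypothetical.

```python
import math
import random

# One residence-time KMC step: Arrhenius rates from migration barriers,
# roulette-wheel event selection, exponential time advance.
# Barrier values stand in for the neural-network output; all numbers assumed.
KB_T = 0.025          # eV, roughly room temperature
NU0 = 1.0e13          # attempt frequency, 1/s

def kmc_step(barriers, rng):
    rates = [NU0 * math.exp(-eb / KB_T) for eb in barriers]
    total = sum(rates)
    xi = rng.random() * total               # roulette-wheel event selection
    acc, chosen = 0.0, len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if xi < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential residence time
    return chosen, dt

rng = random.Random(0)
barriers = [0.60, 0.65, 0.80]               # hypothetical vacancy-jump barriers, eV
picks = [0, 0, 0]
for _ in range(20_000):
    i, _ = kmc_step(barriers, rng)
    picks[i] += 1
```

The exponential sensitivity of the rates to the barrier is what makes the migration-energy prediction the critical (and costly) ingredient: a 0.05 eV difference at this temperature already changes an event's selection probability by almost an order of magnitude.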
Jian-Guo, Qin; Cai-Feng, Lai; Rong, Liu; Tong-Hua, Zhu; Xin-Wei, Zhang; Bang-Jiao, Ye
2016-03-01
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculation, a two-step method is adopted to calculate the energy deposition of prompt γ-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the γ-ray spectrum for energy below 7 MeV is calculated by MCNP5 code; secondly, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated based on EGSnrc Monte Carlo Code with the γ-ray spectrum from the first step as input. The comparison of calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt γ-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is validated. Supported by the National Natural Science Foundation of China (91226104) and National Special Magnetic Confinement Fusion Energy Research, China (2015GB108001)
Investigations on Monte Carlo based coupled core calculations
International Nuclear Information System (INIS)
The present trend in advanced and next-generation nuclear reactor core designs is towards increased material heterogeneity and geometry complexity. The continuous energy Monte Carlo method has the capability of modeling such core environments with high accuracy. This paper presents results from feasibility studies being performed at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion/Monte Carlo schemes and thermal-hydraulic feedback modeling in Monte Carlo core calculations. The computation process is greatly accelerated by calculating the three-dimensional (3D) distributions of the fission source and thermal-hydraulic parameters with the coupled NEM/COBRA-TF code and then using the coupled MCNP5/COBRA-TF code to fine-tune the results for increased accuracy. The PSU NEM code employs cross sections generated by MCNP5 for pin-cell based nodal compositions. The implementation of the different code modifications facilitating coupled calculations is presented first. Then the coupled hybrid Monte Carlo based code system is applied to a 3D 2×2 pin array extracted from a Boiling Water Reactor (BWR) assembly with reflective radial boundary conditions. The obtained results are discussed, and it is shown that performing Monte Carlo based coupled core steady-state calculations is feasible. (authors)
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for the species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
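The Monte Carlo propagation of reaction-rate imprecision can be sketched as follows. The log-normal sampling convention and the one-line response function are assumptions for illustration only, not the paper's 1-D stratospheric model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ozone_perturbation(rates):
    # placeholder response: depletion grows with the ratio of a loss
    # rate to a production rate (NOT the actual stratospheric model)
    return -10.0 * rates[0] / rates[1]

nominal = np.array([1.0, 2.0])   # nominal rate constants (arbitrary units)
sigma = np.array([0.2, 0.1])     # 1-sigma uncertainty of ln(rate)

n_cases = 1000                   # cf. the ~1000 cases in the abstract
samples = nominal * np.exp(rng.normal(0.0, sigma, size=(n_cases, 2)))
results = np.array([ozone_perturbation(r) for r in samples])
print(results.mean(), results.std())  # spread of the predicted perturbation
```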
International Nuclear Information System (INIS)
The double heterogeneity characterising pebble-bed high temperature reactors (HTRs) makes Monte Carlo based calculation tools the most suitable for detailed core analyses. These codes can be successfully used to predict the isotopic evolution during irradiation of the fuel of this kind of core. At the moment, many computational systems based on MCNP are available for performing depletion calculations. All of these systems use MCNP to supply problem-dependent fluxes and/or microscopic cross sections to the depletion module; the latter then calculates the isotopic evolution of the fuel by solving Bateman's equations. In this paper, a comparative analysis of three different MCNP-based depletion codes is performed: Monteburns 2.0, MCNPX 2.6.0, and BGCore. The Monteburns code can be considered the reference code for HTR calculations, since it has already been verified during the HTR-N and HTR-N1 EU projects. All calculations have been performed on a reference model representing an infinite lattice of thorium-plutonium fuelled pebbles. The evolution of k-inf as a function of burnup has been compared, as well as the inventory of the important actinides. The k-inf comparison among the codes shows good agreement during the entire burnup history, with the maximum difference lower than 1%. The actinide inventory predictions also agree well; however, a significant discrepancy in the Am and Cm concentrations calculated by MCNPX, as compared to those of Monteburns and BGCore, has been observed. This is mainly due to the different Am-241 (n,γ) branching ratio utilized by the codes. The important advantage of BGCore is its significantly lower execution time for the considered depletion calculations: while providing reasonably accurate results, BGCore runs the depletion problem about two times faster than Monteburns and two to five times faster than MCNPX.
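The Bateman equations that such depletion modules solve can be illustrated for the simplest two-member chain A → B → (removed), with effective removal constants combining decay and flux-weighted cross sections. The constants and initial density below are arbitrary, not HTR data.

```python
import numpy as np

def bateman_two(n_a0, lam_a, lam_b, t):
    """Analytic parent/daughter number densities at time t (daughter starts at 0)."""
    n_a = n_a0 * np.exp(-lam_a * t)
    n_b = n_a0 * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
    return n_a, n_b

# illustrative effective removal constants (1/s) and irradiation time (s)
n_a, n_b = bateman_two(1.0e24, lam_a=1e-9, lam_b=1e-8, t=1e8)
print(n_a, n_b)  # parent ~90% remaining; daughter built up but partially removed
```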
Depleting methyl bromide residues in soil by reaction with bases
Despite generally being considered the most effective soil fumigant, methyl bromide (MeBr) use is being phased out because its emissions from soil can lead to stratospheric ozone depletion. However, a large amount is still currently used due to Critical Use Exemptions. As strategies for reducing the...
International Nuclear Information System (INIS)
This paper summarizes studies performed on the Deep-Burner Modular Helium Reactor (DB-MHR) design concept. Feasibility and sensitivity studies, as well as fuel-cycle studies with a probabilistic methodology, are presented. Current investigations on design strategies in one- and two-pass scenarios, and the computational tools, are also presented. Computations on the prismatic design concept were performed on a full-core 3D model basis. The probabilistic MCNP-MONTEBURNS-ORIGEN chain, with either JEF2.2 or BVI libraries, was used, and one or two independently depleting media per assembly were accounted for. Due to the calculation time necessary to perform MCNP5 calculations with sufficient accuracy, the different parameters of the depletion calculations have to be optimized according to the desired accuracy of the results. Three strategies were compared: the two-pass scenario with driver and transmuter fuel loading in three rings, the one-pass scenario with driver fuel only in a three-ring geometry, and finally the one-pass scenario in four rings. The 'two pass' scenario is the best deep burner, with about a 70% mass reduction of the actinides in the PWR discharged fuel. However, the small difference obtained for incineration (∼5%) raises the question of the interest of this scenario given the difficulty of the process for TF fuel. Finally, the advantage of the 'two pass' scenario is mainly the reduction of actinide activity. (author)
DEFF Research Database (Denmark)
Stenbæk, D S; Einarsdottir, H S; Goregliad-Fjaellingsdal, T;
2016-01-01
Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures ...... effects of CP-Trp compared to CP+Trp were observed. The transient increase in plasma Trp after CP+Trp may impair comparison to the CP-Trp and we therefore recommend in future studies to use a smaller dose of Trp supplement to the CP mixture....
Satellite-based estimates of groundwater depletion in India
Rodell, M.; Velicogna, I; Famiglietti, JS
2009-01-01
Groundwater is a primary source of fresh water in many parts of the world. Some regions are becoming overly dependent on it, consuming groundwater faster than it is naturally replenished and causing water tables to decline unremittingly. Indirect evidence suggests that this is the case in northwest India, but there has been no regional assessment of the rate of groundwater depletion. Here we use terrestrial water storage-change observations from the NASA Gravity Recovery and Climate Experimen...
GPU based Monte Carlo for PET image reconstruction: detector modeling
International Nuclear Information System (INIS)
Monte Carlo (MC) calculations map naturally onto Graphical Processing Units (GPUs): given the similarities between visible light transport and neutral particle trajectories, GPUs are almost like dedicated hardware designed for this specific task. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)
Monte Carlo based weighting functions in neutron capture measurements
International Nuclear Information System (INIS)
To determine neutron capture cross sections using C6D6 detectors, the Pulse Height Weighting Technique (PHWT) is mostly applied. The weighting function depends on the response function of the detection system in use; therefore, the quality of the data depends on the detector response used for the calculation of the weighting function. An experimental determination of the response of C6D6 detectors is not always straightforward. We determined the detector response and, hence, the weighting function from Monte Carlo simulations, using the MCNP 4C2 code. To obtain reliable results, a big effort was made in preparing a geometry input file describing the experimental conditions. To validate the results of the Monte Carlo simulations, we performed several experiments at GELINA. First, we measured the C6D6 detector response for standard γ-ray sources and for selected resonances in 206Pb(n,γ). These responses were compared with the ones based on Monte Carlo simulations. The good agreement between experimental and simulated data confirms the reliability of the Monte Carlo simulations. As a second validation exercise, we also determined the normalization factor in Ag and Au samples of different composition and thickness, and the neutron width of the 1.15 keV resonance in 56Fe using samples of different compositions. The result of this validation exercise was that the photon transport and the coupling of the photon and neutron transport must be accounted for in the determination of the weighting function. Accurate weighting functions are required for capture reactions in nuclei where the gamma cascade differs strongly from resonance to resonance, and are extremely important for neutron data related to reactor technologies where Pb isotopes play an important role. The Monte Carlo based weighting functions have been used to deduce the capture yield of 206Pb between 3 and 620 keV and of 232Th between 5 and 150 keV. This method will also be used for the analysis of other neutron capture
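The core of the PHWT can be sketched as a small linear-algebra problem: find weights over the deposited-energy bins so that the weighted detector response to a γ-ray of energy Eγ is proportional to Eγ. The flat, Compton-like "response matrix" below is a crude stand-in for a simulated C6D6 response, not MCNP output.

```python
import numpy as np

e_gamma = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # MeV, incident gamma energies
e_dep = np.linspace(0.1, 5.0, 50)              # MeV, deposited-energy bins

# toy response: uniform deposition in all bins at or below the incident energy
response = np.array([(e_dep <= eg) / np.sum(e_dep <= eg) for eg in e_gamma])

# solve response @ w = e_gamma in the least-squares sense:
# the weighted response to each gamma then equals its energy
w, *_ = np.linalg.lstsq(response, e_gamma, rcond=None)
print(np.allclose(response @ w, e_gamma, atol=1e-6))  # True
```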
International Nuclear Information System (INIS)
A practical fuel management system for the Pennsylvania State University Breazeale Research Reactor (PSBR), based on an advanced Monte Carlo methodology, was developed in this research from the existing fuel management tool. Several modeling improvements were implemented over the old system. The improved fuel management system can now utilize burnup-dependent cross section libraries generated specifically for PSBR fuel, and it is also able to update the cross sections of these libraries automatically via the Monte Carlo calculation. Considerations were given to balancing the computation time against the accuracy of the cross section update: only certain types of a limited number of isotopes, those considered 'important', are calculated and updated by the scheme. Moreover, the depletion algorithm of the existing fuel management tool was upgraded from a predictor-only to a predictor-corrector depletion scheme, to account more accurately for burnup spectrum changes during the burnup step. An intermediate verification of the fuel management system was performed to assess the correctness of the newly implemented schemes against HELIOS. It was found that the agreement between the two codes is good when the same energy released per fission (Q values) is used. Furthermore, to be able to model the reactor at various temperatures, the fuel management tool automatically utilizes the continuous-energy cross sections generated at different temperatures. Other useful capabilities were also added to the fuel management tool to make it easy to use and practical. As part of the development, a hybrid nodal diffusion/Monte Carlo calculation was devised to speed up the Monte Carlo calculation by providing a better-converged initial source distribution from the nodal diffusion calculation. Finally, the fuel management system was validated against measured data using several actual PSBR core loadings. The agreement of the predicted core
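The predictor-corrector depletion scheme mentioned above can be sketched for a single nuclide with a flux-dependent removal rate. The stand-in flux function and all numbers are hypothetical, replacing the actual Monte Carlo transport solve; the point is the averaging of beginning- and end-of-step reaction rates.

```python
import numpy as np

SIGMA = 1.0e-24  # cm^2, illustrative one-group cross section

def flux(n):
    # stand-in for a transport solve at composition n (made-up feedback)
    return 1.0e14 / (1.0 + 1.0e-24 * n)

def deplete(n0, dt):
    lam0 = SIGMA * flux(n0)
    n_pred = n0 * np.exp(-lam0 * dt)               # predictor: BOS rates
    lam1 = SIGMA * flux(n_pred)                    # rates re-evaluated at EOS
    return n0 * np.exp(-0.5 * (lam0 + lam1) * dt)  # corrector: averaged rate

n_end = deplete(1.0e21, dt=1.0e6)
print(n_end < 1.0e21)  # True: the density decreases over the step
```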
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
International Nuclear Information System (INIS)
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 (166Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative 166Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of 166Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full 166Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (Aest) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six 166Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103
Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation
International Nuclear Information System (INIS)
Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine due to their magnitude of only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to the non-reference conditions of modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of medical physics. As a tool suited to calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which in principle enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance-reduction techniques can be applied to reduce the needed calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment
Too exhausted to remember: ego depletion undermines subsequent event-based prospective memory.
Li, Jian-Bin; Nie, Yan-Gang; Zeng, Min-Xia; Huntoon, Meghan; Smith, Jessi L
2013-01-01
Past research has consistently found that people are likely to do worse on high-level cognitive tasks after exerting self-control on previous actions. However, little is known about the extent to which ego depletion affects subsequent prospective memory. Drawing upon the self-control strength model and the relationship between self-control resources and executive control, this study proposes that initial acts of self-control may undermine subsequent event-based prospective memory (EBPM). Ego depletion was manipulated through watching a video requiring visual attention (Experiment 1) or completing an incongruent Stroop task (Experiment 2). Participants were then tested on EBPM embedded in an ongoing task. As predicted, the results showed that, after ruling out possible intervening variables (e.g. mood, focal and nonfocal cues, and characteristics of the ongoing task and ego depletion task), participants in the high-depletion condition performed significantly worse on EBPM than those in the low-depletion condition. The results suggested that the effect of ego depletion on EBPM was mainly due to an impaired prospective component rather than to a retrospective component. PMID:23432682
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Energy Technology Data Exchange (ETDEWEB)
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands); Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 (166Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative 166Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of 166Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full 166Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (Aest) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six 166Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time of repeated dose calculations for a large number of beamlets has prevented this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet its source particle comes from, and deposited dose is stored separately per beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
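The two-step particle budgeting can be sketched as follows: a cheap first pass with few histories per beamlet, then a second pass whose histories are allocated in proportion to the optimized fluence. The fluence map below is random stand-in data; only the beamlet and history counts mirror the abstract's orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beamlets = 5317
fluence = rng.random(n_beamlets)          # stand-in for the first-round fluence map

first_round = np.full(n_beamlets, 10**5)  # ~1e5 histories per beamlet, pass 1
budget_second = 10**8                     # ~1e8 histories for the whole beam, pass 2

# allocate second-pass histories proportionally to the optimized intensity
second_round = np.floor(budget_second * fluence / fluence.sum()).astype(int)
print(second_round.sum() <= budget_second)  # True: allocation respects the budget
```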
A CNS calculation line based on a Monte Carlo method
International Nuclear Information System (INIS)
Full text: The design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. Decisions taken in this sense affect not only the neutron flux in the source neighborhood, which can be evaluated by a standard empirical method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the neutron source, very time-consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures. Standard and typical quantities such as average neutron flux, neutron current, angular flux, and luminosity are very difficult to evaluate at positions located several meters away from the neutron source. The Monte Carlo method is a unique and powerful tool for transporting neutrons, and its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The proper use of MCNP as the main tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors. The design goal is to evaluate the performance of the neutron sources, their beam tubes and neutron guides, at specific experimental locations in the reactor hall as well as in the neutron or experimental hall. In this work, the calculation methodology used to design Cold, Thermal and Hot Neutron Sources and their associated Neutron Beam Transport Systems, based on the use of the MCNP code, is presented. This work also presents some changes made to the cross section libraries in order to cope with cryogenic moderators such as liquid hydrogen and liquid deuterium. (author)
The New MCNP6 Depletion Capability
Energy Technology Data Exchange (ETDEWEB)
Fensin, Michael Lorne [Los Alamos National Laboratory; James, Michael R. [Los Alamos National Laboratory; Hendricks, John S. [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory
2012-06-19
The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.
GPU based Monte Carlo for PET image reconstruction: parameter optimization
International Nuclear Information System (INIS)
This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so, to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effectively partitioning the computational effort across the iterations in time-limited reconstructions. (author)
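The underlying sample-size argument, that Monte Carlo noise falls as 1/√N, can be sketched with a pilot-run estimate: from the per-sample variance of a short run, the minimal N for a target relative error follows directly. The pilot scores below are synthetic, not PET data.

```python
import numpy as np

rng = np.random.default_rng(3)

# pilot run: per-sample scores of some reconstructed quantity (synthetic)
pilot = rng.poisson(50.0, size=1000).astype(float)
rel_sigma_1 = pilot.std() / pilot.mean()   # relative error of a SINGLE sample

# relative error of the mean of N samples is rel_sigma_1 / sqrt(N),
# so the minimal N for a target relative error is:
target = 0.01                              # 1% target
n_required = int(np.ceil((rel_sigma_1 / target) ** 2))
print(n_required)
```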
A spreading resistance based method of mapping the resistivity and potential of a depleted diode
International Nuclear Information System (INIS)
The characterization of the depletion state of reverse-biased p-n junctions is an important task within the scope of high energy physics detector development. The configuration of the sensitive volume inside these structures determines the particle detection process. Therefore, a spreading resistance profiling based method has been developed to map the local resistivity and potential along the prepared edge of a depleted diode. This 'edge-SRP' method is capable of detecting the border of the space charge region and its transition to the electrically neutral bulk. In order to characterize the depleted space charge region, the surface potential along the edge can be measured by slightly modifying the setup. These surface potential results complement the spreading resistance ones. In this paper the functionality of the developed method is verified by performing measurements on a prepared diode, which has been biased with different voltages
EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE
There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...
Application of equivalence methods on Monte Carlo method based homogenization multi-group constants
International Nuclear Information System (INIS)
The multi-group constants generated via the continuous energy Monte Carlo method do not satisfy the equivalence between the reference calculation and the diffusion calculation applied in reactor core analysis. To satisfy equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) were applied to the Monte Carlo based group constants, and a simplified reactor core and the C5G7 benchmark were examined with the Monte Carlo constants. The results show that the calculational accuracy of the group constants is improved, and that GET and SPH are good candidates for the equivalence treatment of Monte Carlo homogenization. (authors)
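The SPH iteration can be sketched in a few lines: each region's homogenized cross sections are scaled by a factor updated from the ratio of the low-order flux to the reference flux until the low-order solver reproduces the reference region fluxes. The "diffusion solver" below is a deliberately trivial stand-in that merely reacts to the correction factors, not a real solver.

```python
import numpy as np

phi_ref = np.array([1.00, 0.80, 0.60])   # reference (Monte Carlo) region fluxes

def diffusion_solve(mu):
    # stand-in low-order solver: flux responds inversely to the SPH factor
    base = np.array([0.90, 0.85, 0.65])
    return base / mu

mu = np.ones(3)                          # SPH factors, one per region
for _ in range(50):
    phi = diffusion_solve(mu)
    mu *= phi / phi_ref                  # push the low-order flux toward reference

print(np.allclose(diffusion_solve(mu), phi_ref, rtol=1e-6))  # True
```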
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a PWR-like (REP) assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
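The correlated-sampling idea, re-weighting histories by the ratio of the perturbed to the original sampling densities instead of re-running the transport, can be sketched in one group with single free flights. The cross sections and sample count are illustrative, not TRIPOLI-4 internals.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma0, sigma1 = 1.0, 1.1   # original and perturbed total cross sections (1/cm)

# sample free-flight distances once, under the ORIGINAL cross section
x = rng.exponential(1.0 / sigma0, size=100_000)

# correlated-sampling weights: ratio of the perturbed pdf to the original pdf
w = (sigma1 * np.exp(-sigma1 * x)) / (sigma0 * np.exp(-sigma0 * x))

# the re-weighted sample estimates the PERTURBED mean free path 1/sigma1,
# without drawing any new histories
mfp_est = np.average(x, weights=w)
print(mfp_est)  # close to 1/1.1 ≈ 0.909
```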
Refined Stratified Sampling for efficient Monte Carlo based uncertainty quantification
International Nuclear Information System (INIS)
A general adaptive approach rooted in stratified sampling (SS) is proposed for sample-based uncertainty quantification (UQ). To motivate its use in this context, the space-filling, orthogonality, and projective properties of SS are compared with those of simple random sampling and Latin hypercube sampling (LHS). SS is demonstrated to provide attractive properties for certain classes of problems. The proposed approach, Refined Stratified Sampling (RSS), capitalizes on these properties through an adaptive process that adds samples sequentially by dividing the existing subspaces of a stratified design. RSS is proven to reduce variance compared to traditional stratified sample extension methods, while providing comparable or enhanced variance reduction when compared to sample size extension methods for LHS, which do not afford the same degree of flexibility to facilitate a truly adaptive UQ process. An initial investigation of optimal stratification is presented and motivates the potential for major advances in variance reduction through optimally designed RSS. Potential paths for extension of the method to high dimension are discussed. Two examples are provided. The first involves UQ for a low-dimensional function where convergence is evaluated analytically. The second presents a study to assess the response variability of a floating structure to an underwater shock. - Highlights: • An adaptive process, rooted in stratified sampling, is proposed for Monte Carlo-based uncertainty quantification. • The space-filling, orthogonality, and projective properties of stratified sampling are investigated. • Stratified sampling is shown to possess attractive properties for certain classes of problems. • Refined Stratified Sampling, a new sampling method, is proposed to enable the adaptive UQ process. • The optimality of RSS stratum division is explored
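The RSS refinement step can be sketched on the unit square: keep rectangular strata with one sample each, split a stratum in two, retain the old sample in its half, and draw one new sample in the empty half. The split heuristics below (largest stratum, longest edge) are simplifications, not the paper's optimality criteria.

```python
import numpy as np

rng = np.random.default_rng(5)

def refine(strata, samples):
    """Split the largest stratum along its longest edge and add one sample."""
    areas = [(hi - lo).prod() for lo, hi in strata]
    i = int(np.argmax(areas))
    lo, hi = strata.pop(i)
    s = samples.pop(i)
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    hi_a = hi.copy(); hi_a[axis] = mid        # lower half: [lo, hi_a]
    lo_b = lo.copy(); lo_b[axis] = mid        # upper half: [lo_b, hi]
    a, b = (lo, hi_a), (lo_b, hi)
    old, new = (a, b) if s[axis] < mid else (b, a)  # old sample stays valid in one half
    strata += [old, new]
    samples += [s, rng.uniform(new[0], new[1])]     # sample the empty half

strata = [(np.zeros(2), np.ones(2))]
samples = [rng.uniform(0.0, 1.0, 2)]
for _ in range(7):
    refine(strata, samples)
print(len(strata), len(samples))  # 8 8: eight strata, one sample each
```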
Monte Carlo-based simulation of dynamic jaws tomotherapy
International Nuclear Information System (INIS)
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE and previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC-based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of the dynamic jaws. Weight modifiers computed with FastStatic and with pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the ''cheese'' phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed (''running start stop,'' RSS) and symmetric jaws-variable couch speed (''symmetric running start stop,'' SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points where the dose is greater than 30% of the prescription dose (gamma analysis
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Full Text Available Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks by power iteration and has a high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking greatly reduces computing resources while maintaining high accuracy, and significantly outperforms related work
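For reference, the power iteration that the paper's Monte Carlo scheme approximates can be sketched as follows (a generic HITS implementation, not the MCHITS algorithm itself):

```python
def hits(edges, n, iters=50):
    """Power-iteration HITS: authority(v) sums hub scores of in-links,
    hub(u) sums authority scores of out-links; both L2-normalized each pass."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        auth = [0.0] * n
        for u, v in edges:
            auth[v] += hub[u]
        norm = sum(x * x for x in auth) ** 0.5
        auth = [x / norm for x in auth]
        hub = [0.0] * n
        for u, v in edges:
            hub[u] += auth[v]
        norm = sum(x * x for x in hub) ** 0.5
        hub = [x / norm for x in hub]
    return hub, auth

# tiny graph: node 0 links to 1 and 2; node 3 links to 1
edges = [(0, 1), (0, 2), (3, 1)]
hub, auth = hits(edges, 4)
# node 1 gets the highest authority (two in-links from hubs),
# node 0 the highest hub score (links to both authorities)
```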
An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media
Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu
2016-03-01
Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, no accurate and simple analytical equation yet exists for estimating diffuse reflectance from turbid media. In this work, a diffuse reflectance lookup table covering a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface-fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
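The lookup-table-plus-surface-fit workflow can be sketched as follows; the "table" here is a hypothetical analytic stand-in rather than an actual Monte Carlo result, and the fitted form (a quadratic in the reduced albedo) is illustrative, not the paper's empirical formula:

```python
import math

def polyfit2(xs, ys):
    """Least-squares fit y ~ c0 + c1*x + c2*x^2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    S = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                     # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            fac = S[r][col] / S[col][col]
            b[r] -= fac * b[col]
            for c in range(col, 3):
                S[r][c] -= fac * S[col][c]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                      # back substitution
        coef[r] = (b[r] - sum(S[r][c] * coef[c] for c in range(r + 1, 3))) / S[r][r]
    return coef

# hypothetical "lookup table": reflectance vs reduced albedo a
albedos = [0.3 + 0.7 * i / 99 for i in range(100)]
refl = [a * math.exp(-2.0 * (1.0 - a)) for a in albedos]  # analytic stand-in

c0, c1, c2 = polyfit2(albedos, refl)
fit_err = max(abs(c0 + c1 * a + c2 * a * a - r) for a, r in zip(albedos, refl))
```

Once fitted, the closed-form polynomial replaces table interpolation, which is the practical appeal of an empirical formula over a raw lookup table.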
CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC
International Nuclear Information System (INIS)
Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC is designed to perform transport calculations for various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. Bi-directional automatic conversion between general CAD models and calculation models with physical settings is well supported. The results and progress of a simulation can be visualized with dynamic 3D datasets and the geometry model. Continuous-energy cross section, burnup, activation, irradiation damage and material data etc. are used to support the multi-process simulation. An advanced cloud computing framework makes computation- and storage-intensive simulations more accessible as a network service, supporting design optimization and assessment. The modular design and generic interfaces promote flexible use and coupling of external solvers. • The advanced methods newly developed and incorporated in SuperMC are introduced, including the hybrid MC-deterministic transport method, particle physical interaction treatment, multi-physics coupling calculation, automatic geometry modeling and processing, intelligent data analysis and visualization, elastic cloud computing technology and parallel calculation. • The functions of SuperMC 2.1, which integrates automatic modeling, neutron and photon transport calculation, and visualization of results and processes, are introduced. It has been validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine
Monte-Carlo based uncertainty analysis: Sampling efficiency and sampling convergence
International Nuclear Information System (INIS)
Monte Carlo analysis has become nearly ubiquitous since its introduction, now over 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures or solutions. As the deterministic core simulation can be lengthy, the computational cost of Monte Carlo can be a limiting factor. To reduce that computational expense as much as possible, sampling efficiency and convergence for Monte Carlo are investigated in this paper. The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs. In the second section it is demonstrated that standard sampling statistics are inapplicable to Latin hypercube strategies. A sample-splitting approach is put forward, which in combination with replicated Latin hypercube sampling allows assessing the accuracy of Monte Carlo outcomes. The assessment in turn permits halting the Monte Carlo simulation when the desired level of accuracy is reached. Both measures are fairly simple upgrades of the current state of the art in Monte Carlo based uncertainty analysis, yet they substantially advance its applicability.
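The non-collapsing property of the Latin hypercube designs mentioned above can be sketched as follows (a generic construction, not the maximin-optimized designs studied in the paper):

```python
import random

def latin_hypercube(n, dims, rng):
    """One LHS design on [0, 1)^dims: each of the n points occupies a
    distinct stratum in every dimension (the non-collapsing property)."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)  # independent stratum permutation per dimension
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points, each a dims-tuple

rng = random.Random(42)
pts = latin_hypercube(8, 2, rng)
# projecting onto any single axis hits every stratum [i/8, (i+1)/8) exactly once
strata_x = sorted(int(x * 8) for x, _ in pts)
```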
Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code
He, Tongming Tony
In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and to obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets with a Monte Carlo based method, and then to optimize the beamlet intensities on the calculated dose data using an optimization algorithm that is capable of escaping local minima and prevents possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH-based objective function and a parallel-platform-based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and the FSPS beam model is validated by dose measurements in phantoms. Agreement better than 1.5% or 0.2 cm has been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system were evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based
Reactor physics analysis method based on Monte Carlo homogenization
International Nuclear Information System (INIS)
Background: Many new concepts for nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demands of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two respects: one is the ability to process generic geometry; the other is the multi-spectrum applicability of the multi-group cross section libraries. The Monte Carlo (MC) method excels in handling both arbitrary geometry and arbitrary spectra, but faces the problems of long computation times and slow convergence. Purpose: This work aims at a novel scheme that combines the advantages of the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed that combines the geometry modeling capability and continuous-energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First, MC simulations are performed at the assembly level, and the assembly-homogenized multi-group cross sections are tallied at the same time. Then, the core diffusion calculations are performed with these multi-group cross sections. Results: The new scheme achieves high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, as verified by numerical tests. (authors)
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, it takes a great deal of computing time to obtain reliable results from Monte Carlo simulations, especially for deep-penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, exponential transform, and forced collision. At the same time, advanced computing hardware such as GPU (Graphics Processing Units)-based parallel machines is used to obtain better performance from the Monte Carlo simulation. A GPU is much easier to access and manage than a CPU cluster system, and it has also become less expensive thanks to advances in computer technology. Therefore, many engineering areas have adopted GPU-based massively parallel computation techniques, including the GPU-based photon transport Monte Carlo method presented here. It provides almost a 30-times speedup without any optimization, and almost 200 times is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.
Coupled neutronic thermo-hydraulic analysis of full PWR core with Monte-Carlo based BGCore system
International Nuclear Information System (INIS)
Highlights: → A new thermal-hydraulic (TH) feedback module was integrated into the MCNP-based depletion system BGCore. → A coupled neutronic-TH analysis of a full PWR core was performed with the upgraded BGCore system. → The BGCore results were verified against those of the 3D nodal diffusion code DYN3D. → Very good agreement in major core operational parameters between the BGCore and DYN3D results was observed. - Abstract: The BGCore reactor analysis system was recently developed at Ben-Gurion University for calculating in-core fuel composition and spent fuel emissions following discharge. It couples the Monte Carlo transport code MCNP with an independently developed burnup and decay module, SARAF. Most of the existing MCNP-based depletion codes (e.g. MOCUP, Monteburns, MCODE) directly tally the one-group fluxes and reaction rates in order to prepare the one-group cross sections necessary for the fuel depletion analysis. BGCore, on the other hand, uses a multi-group (MG) approach for the generation of one-group cross sections. This coupling approach significantly reduces the code execution time without compromising the accuracy of the results. The substantial reduction in the BGCore execution time allows consideration of problems with a much higher degree of complexity, such as the introduction of thermal-hydraulic (TH) feedback into the calculation scheme. Recently, a simplified TH feedback module, THERMO, was developed and integrated into the BGCore system. To demonstrate the capabilities of the upgraded BGCore system, a coupled neutronic-TH analysis of a full PWR core was performed. The BGCore results were compared with those of the state-of-the-art 3D deterministic nodal diffusion code DYN3D. Very good agreement between the BGCore and DYN3D results was observed in major core operational parameters, including the k-eff eigenvalue, axial and radial power profiles, and temperature distributions. This agreement confirms the consistency of the implementation of the TH feedback module
Acceptance and implementation of a system of planning computerized based on Monte Carlo
International Nuclear Information System (INIS)
The Monaco computerized planning system has been accepted for clinical use. It is based on a virtual model of the output of the linear electron accelerator head and performs the dose calculation with an X-ray voxel Monte Carlo (XVMC) algorithm. (Author)
A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography
Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai
2007-01-01
We present an approach based on the improved Levenberg-Marquardt (LM) algorithm of a backpropagation (BP) neural network to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...
Energy Technology Data Exchange (ETDEWEB)
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
The Monaco computerized planning system has been accepted for clinical use. It is based on a virtual model of the output of the linear electron accelerator head and performs the dose calculation with an X-ray voxel Monte Carlo (XVMC) algorithm. (Author)
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialties connected to radiation physics, radiation protection and dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N., artificial neural networks; C.B.R., case-based reasoning; or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in radiation protection or medical dosimetry. (authors)
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy, whereas the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe the convex-based void description and the quality-based space subdivision. • The results showed the improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems from CAD models. Automatic void filling is one of the main functions of CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need the entire problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the entire problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both the automatic modeling time and the MC calculation time
Yu Hyeong; Kim Byungwook; Kim Hyunsoo; Min Hophil; Yu Jiyoung; Kim Kyunggon; Kim Youngsoo
2010-01-01
Abstract Background The removal of high-abundance proteins from plasma is an efficient approach to investigating flow-through proteins for biomarker discovery studies. Most depletion methods are based on multiple immunoaffinity methods available commercially, including LC columns and spin columns. Despite its usefulness, high-abundance depletion has an intrinsic problem, the sponge effect, which should be assessed during depletion experiments. Concurrently, the yield of depletion of high-abund...
Monte-Carlo based prediction of radiochromic film response for hadrontherapy dosimetry
International Nuclear Information System (INIS)
A model has been developed to calculate the response of MD-55-V2 radiochromic film to ion irradiation. The model is based on the photon film response and on film saturation by high local energy deposition, computed by Monte Carlo simulation. We studied the response of the film to photon irradiation and propose a calculation method for hadron beams.
A lattice-based Monte Carlo evaluation of Canada Deuterium Uranium-6 safety parameters
Energy Technology Data Exchange (ETDEWEB)
Kim, Yong Hee; Hartanto, Donny; Kim, Woo Song [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of)
2016-06-15
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANada Deuterium Uranium (CANDU-6) reactor have been evaluated using the Monte Carlo method. For accurate analysis of the parameters, the Doppler broadening rejection correction scheme was implemented in the MCNPX code to account for the thermal motion of the heavy uranium-238 nucleus in the neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled and the fuel is depleted using MCNPX. The FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated using several cross-section libraries such as ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of an equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, we considered a huge number of neutron histories in this work and the standard deviation of the k-infinity values is only 0.5-1 pcm.
MOx benchmark calculations by deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Highlights: ► MOx-based depletion calculation. ► Methodology to create a continuous-energy pseudo cross section for a lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviations were found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS, following the study of depletion calculations concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work, the deterministic code NEWT/ORIGEN-S of the SCALE6 code package and the Monte Carlo based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology for applying MONTEBURNS2.0 to this benchmark is also presented. The results from both codes were then compared.
Valence-dependent influence of serotonin depletion on model-based choice strategy.
Worbe, Y; Palminteri, S; Savulich, G; Daw, N D; Fernandez-Egea, E; Robbins, T W; Voon, V
2016-05-01
Human decision-making arises from both reflective and reflexive mechanisms, which underpin goal-directed and habitual behavioural control. Computationally, these two systems of behavioural control have been described by different learning algorithms, model-based and model-free learning, respectively. Here, we investigated the effect of diminished serotonin (5-hydroxytryptamine) neurotransmission using dietary tryptophan depletion (TD) in healthy volunteers on the performance of a two-stage decision-making task, which allows discrimination between model-free and model-based behavioural strategies. A novel version of the task was used, which not only examined choice balance for monetary reward but also for punishment (monetary loss). TD impaired goal-directed (model-based) behaviour in the reward condition, but promoted it under punishment. This effect on appetitive and aversive goal-directed behaviour is likely mediated by alteration of the average reward representation produced by TD, which is consistent with previous studies. Overall, the major implication of this study is that serotonin differentially affects goal-directed learning as a function of affective valence. These findings are relevant for a further understanding of psychiatric disorders associated with breakdown of goal-directed behavioural control such as obsessive-compulsive disorders or addictions. PMID:25869808
Development of the point-depletion code DEPTH
International Nuclear Information System (INIS)
Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses data libraries that are convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, and is generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S, and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining instantaneous quantities such as radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared with ORIGEN-2 prove the validity of DEPTH for point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code in coupling with the RMC Monte Carlo code
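The matrix-exponential route to point depletion can be sketched on a two-nuclide decay chain, with a plain Taylor-series exponential standing in for DEPTH's CRAM/QRAM/LPAM solvers (which exist precisely because naive series fail on stiff, large systems):

```python
import math

def expm(M, nterms=60):
    """Dense matrix exponential via Taylor series: sum_k M^k / k!.
    Adequate for small, well-scaled matrices such as a short decay chain."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, nterms):
        term = [[sum(term[i][m] * M[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# two-nuclide decay chain A -> B: dN/dt = M N, solved as N(t) = exp(M t) N(0)
lam1, lam2, t = 0.3, 0.1, 5.0
Mt = [[-lam1 * t, 0.0], [lam1 * t, -lam2 * t]]
E = expm(Mt)
nA0 = 1.0
nA = E[0][0] * nA0
nB = E[1][0] * nA0

# analytic Bateman solution for comparison
nA_ref = math.exp(-lam1 * t)
nB_ref = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```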
Development of Monte Carlo-based pebble bed reactor fuel management code
International Nuclear Information System (INIS)
Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactor was developed. • The double-heterogeneity was modeled using statistical method in MVP-BURN code. • The code can perform analysis of equilibrium and non-equilibrium phase. • Code-to-code comparisons for Once-Through-Then-Out case were investigated. • Ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of the simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based fuel management PBR codes, namely the VSOP and PEBBED codes. Using JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower keff value compared to VSOP and PEBBED, respectively. While using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher keff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed
Jeraj, Robert; Keall, Paul
2000-12-01
The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors into the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
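The noise convergence error has a simple toy analogue: optimize beamlet weights against a noisy dose matrix, then re-evaluate those weights noise-free. Plain linear least squares stands in here for the paper's simulated-annealing optimizer, and the problem dimensions and 3% noise level are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plan: 50 tumour voxels, 10 beamlets, uniform prescription of 1.0.
n_vox, n_blt = 50, 10
D_true = rng.uniform(0.5, 1.5, size=(n_vox, n_blt)) / n_blt   # noise-free beamlet doses
prescription = np.ones(n_vox)

# Optimum weights from the noise-free dose calculation (least squares).
w_clean, *_ = np.linalg.lstsq(D_true, prescription, rcond=None)

# Optimum weights from a noisy, Monte Carlo-like dose calculation (3% 1-sigma).
D_noisy = D_true * (1.0 + 0.03 * rng.standard_normal((n_vox, n_blt)))
w_noisy, *_ = np.linalg.lstsq(D_noisy, prescription, rcond=None)

# Noise convergence error: weights tuned on noisy data, re-evaluated noise-free.
err_clean = np.linalg.norm(D_true @ w_clean - prescription)
err_noisy = np.linalg.norm(D_true @ w_noisy - prescription)
print(err_clean, err_noisy)
```

By least-squares optimality, weights tuned on the noisy matrix can never score better than `w_clean` when recalculated noise-free — the systematic effect the study quantifies.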
A Markov Chain Monte Carlo Based Method for System Identification
Energy Technology Data Exchange (ETDEWEB)
Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2002-10-22
This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared; and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
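The Metropolis machinery underlying such an identification can be sketched on a one-parameter toy: estimating a spring stiffness from noisy displacement measurements under known forces. All values below (stiffness, noise level, proposal width) are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy identification: estimate stiffness k from noisy displacements x = F / k.
k_true, sigma = 2.0e3, 1.0e-4              # N/m, measurement noise (m)
forces = np.array([1.0, 2.0, 5.0, 10.0])   # applied forces (N)
data = forces / k_true + sigma * rng.standard_normal(forces.size)

def log_posterior(k):
    if k <= 0.0:
        return -np.inf                      # flat prior on k > 0
    resid = data - forces / k
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis random walk over k.
samples, k = [], 1.0e3                      # deliberately poor starting value
lp = log_posterior(k)
for _ in range(20000):
    k_prop = k + 50.0 * rng.standard_normal()
    lp_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop             # accept the proposal
    samples.append(k)

posterior = np.array(samples[5000:])        # discard burn-in
print(posterior.mean(), posterior.std())
```

The retained samples approximate the posterior over the stiffness, giving both an estimate and a quantitative uncertainty — the feature the paper emphasizes.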
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
Weinmann, Martin; Söhn, Matthias; Muzik, Jan; Sikora, Marcin; Alber, Markus
2009-01-01
Abstract Background The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, ...
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
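The CT-number calibration such voxel phantoms rest on can be sketched as a threshold lookup over the six materials named above. The HU thresholds and mass densities here are hypothetical round numbers for illustration, not the calibration curve of any particular scanner:

```python
# Illustrative CT-number-to-material segmentation for building a voxel phantom.
MATERIALS = [  # (upper HU bound, material name, mass density in g/cm^3)
    (-950, "air",       0.0012),
    (-200, "lung",      0.26),
    (-20,  "adipose",   0.95),
    (120,  "muscle",    1.05),
    (600,  "soft bone", 1.35),
    (3000, "hard bone", 1.92),
]

def voxel_material(hu: float):
    """Map a CT number (HU) to a (material, density) pair."""
    for upper, name, density in MATERIALS:
        if hu <= upper:
            return name, density
    return "hard bone", 1.92  # clamp anything denser than the last bin

print(voxel_material(-700))  # → ('lung', 0.26)
```

A Monte Carlo dose engine then assigns each voxel the elemental composition and density of its material, which is the step whose sensitivity the study probes.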
DNA vector-based RNAi approach for stable depletion of poly(ADP-ribose) polymerase-1
International Nuclear Information System (INIS)
RNA-mediated interference (RNAi) is a powerful technique that is now being used in mammalian cells to specifically silence a gene. Some recent studies have used this technique to achieve variable extent of depletion of a nuclear enzyme poly(ADP-ribose) polymerase-1 (PARP-1). These studies reported either transient silencing of PARP-1 using double-stranded RNA or stable silencing of PARP-1 with a DNA vector which was introduced by a viral delivery system. In contrast, here we report that a simple RNAi approach which utilizes a pBS-U6-based DNA vector containing strategically selected PARP-1 targeting sequence, introduced in the cells by conventional CaPO4 protocol, can be used to achieve stable and specific silencing of PARP-1 in different types of cells. We also provide a detailed strategy for selection and cloning of PARP-1-targeting sequences for the DNA vector, and demonstrate that this technique does not affect expression of its closest functional homolog PARP-2
Development of a 3D reactor burnup code based on the Monte Carlo method and the exponential Euler method
International Nuclear Information System (INIS)
Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulty meeting accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupling code combines the advantage of the Monte Carlo method in complex-geometry neutron transport calculations with that of FISPACT in fast and precise inventory calculations; the resonance self-shielding effect in the inventory calculation can also be considered. The IAEA benchmark test problem was adopted for code validation. Good agreement was shown in the comparison with other participants' results. (authors)
Ray-Based Calculations with DEPLETE of Laser Backscatter in ICF Targets
Energy Technology Data Exchange (ETDEWEB)
Strozzi, D J; Williams, E; Hinkel, D; Froula, D; London, R; Callahan, D
2008-05-19
A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code Deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pF3D. Comparisons with Brillouin-scattering experiments at the Omega Laser Facility show that laser speckles greatly enhance the reflectivity over the Deplete results. An approximate upper bound on this enhancement is given by doubling the Deplete coupling coefficient. Analysis with Deplete of an ignition design for the National Ignition Facility (NIF), with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bracket speckle effects suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.
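In 1D, with damping, emission and Thomson terms dropped, the steady-state pump-depletion part of such a model reduces to two coupled intensity equations closed by a boundary condition on the backscatter seed. The coupling constant, path length and seed level below are illustrative, and this sketch is not the Deplete code itself:

```python
# Minimal steady-state backscatter pump-depletion sketch (strong-damping,
# two-wave 1D coupling): dI_p/dz = dI_s/dz = -g * I_p * I_s, with the
# backward wave I_s seeded at z = L. All parameters are illustrative.
g, L, seed = 3.0, 1.0, 1.0e-6   # coupling, path length, seed intensity at z = L

def integrate(reflectivity, n=2000):
    """March pump I_p and backward-wave intensity I_s from z = 0,
    given a guessed reflectivity I_s(0); return I_s at z = L."""
    dz = L / n
    ip, is_ = 1.0, reflectivity
    for _ in range(n):
        ex = g * ip * is_ * dz    # energy exchanged in this cell
        ip, is_ = ip - ex, is_ - ex
    return is_

# Bisect on the guessed reflectivity so that I_s(L) matches the seed.
lo, hi = seed, 1.0 - 1.0e-9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) > seed:
        hi = mid
    else:
        lo = mid
reflectivity = 0.5 * (lo + hi)
print(reflectivity)
```

In the undepleted limit this reproduces the familiar linear-gain result, reflectivity ≈ seed × exp(gL), which is the regime where Deplete and traditional gain calculations should agree.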
International Nuclear Information System (INIS)
We briefly present our atomistic kinetic Monte Carlo approach to model the diffusion of point-defects in Fe-based alloys, and therefore to simulate diffusion induced mass transport and subsequent nano-structural and microchemical changes. This methodology has been hitherto successfully applied to the simulation of thermal annealing experiments. We here present our achievements in the generalization of this method to the simulation of neutron irradiation damage. (authors)
Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo
Vu, Trung-Dung; Aycard, Olivier
2009-01-01
We present a method of simultaneously detecting and tracking moving objects from a moving vehicle equipped with a single-layer laser scanner. A model-based approach is introduced to interpret the laser measurement sequence by hypotheses of moving object trajectories over a sliding window of time. Knowledge of various aspects, including the object model, measurement model and motion model, is integrated in one theoretically sound Bayesian framework. The data-driven Markov chain Monte Carlo (DDMCMC) tech...
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
The influence of air cavities within the PTV on Monte Carlo-based IMRT optimization
Energy Technology Data Exchange (ETDEWEB)
Smedt, Bart de [Department of Medical Physics, Ghent University, Gent (Belgium); Vanderstraeten, Barbara [Department of Medical Physics, Ghent University, Gent (Belgium); Reynaert, Nick [Department of Medical Physics, Ghent University, Gent (Belgium); Gersem, Werner de [Department of Radiotherapy, Ghent University Hospital, Gent (Belgium); Neve, Wilfried de [Department of Radiotherapy, Ghent University Hospital, Gent (Belgium); Thierens, Hubert [Department of Medical Physics, Ghent University, Gent (Belgium)
2007-06-15
Integrating Monte Carlo calculated dose distributions into an iterative aperture-based IMRT optimization process can improve the final treatment plan. However, the influence of large air cavities in the planning target volume (PTV) on the outcome of the optimization process should not be underestimated. To study this influence, the treatment plan of an ethmoid sinus cancer patient, which has large air cavities included in the PTV, is iteratively optimized in two different situations, namely when the large air cavities are included in the PTV and when these air cavities are excluded from the PTV. Two optimization methods were applied to integrate the Monte Carlo calculated dose distributions into the optimization process, namely the 'Correction-method' and the 'Per Segment-method'. The 'Correction-method' takes the Monte Carlo calculated global dose distribution into account in the optimization process by means of a correction matrix, which is in fact a dose distribution equal to the difference between the Monte Carlo calculated global dose distribution and the global dose distribution calculated by a conventional dose calculation algorithm. The 'Per Segment-method' directly uses the Monte Carlo calculated dose distributions of the individual segments in the optimization process. Both methods tend to converge whether or not large air cavities are excluded from the PTV during the optimization process. However, the 'Per Segment-method' performs better than the 'Correction-method' in both situations, and the 'Per Segment-method' with the large air cavities excluded from the PTV leads to a better treatment plan than when these air cavities are included. Therefore we advise excluding large air cavities from the PTV and applying the 'Per Segment-method' to integrate the Monte Carlo dose calculations into an iterative aperture-based optimization process.
Fluctuations in the EAS radio signal derived with improved Monte Carlo simulations based on CORSIKA
Huege, T; Badea, F; Bähren, L; Bekk, K; Bercuci, A; Bertaina, M; Biermann, P L; Blumer, J; Bozdog, H; Brancus, I M; Buitink, S; Bruggemann, M; Buchholz, P; Butcher, H; Chiavassa, A; Daumiller, K; De Bruyn, A G; De Vos, C M; Di Pierro, F; Doll, P; Engel, R; Falcke, H; Gemmeke, H; Ghia, P L; Glasstetter, R; Grupen, C; Haungs, A; Heck, D; Hörandel, J R; Horneffer, A; Kampert, K H; Kant, G W; Klein, U; Kolotaev, Yu; Koopman, Y; Krömer, O; Kuijpers, J; Lafebre, S; Maier, G; Mathes, H J; Mayer, H J; Milke, J; Mitrica, B; Morello, C; Navarra, G; Nehls, S; Nigl, A; Obenland, R; Oehlschläger, J; Ostapchenko, S; Over, S; Pepping, H J; Petcu, M; Petrovic, J; Pierog, T; Plewnia, S; Rebel, H; Risse, A; Roth, M; Schieler, H; Schoonderbeek, G; Sima, O; Stumpert, M; Toma, G; Trinchero, G C; Ulrich, H; Valchierotti, S; Van Buren, J; Van Capellen, W; Walkowiak, W; Weindl, A; Wijnholds, S J; Wochele, J; Zabierowski, J; Zensus, J A; Zimmermann, D; Bowman, J D; Huege, Tim
2005-01-01
Cosmic ray air showers are known to emit pulsed radio emission which can be understood as coherent geosynchrotron radiation arising from the deflection of electron-positron pairs in the earth's magnetic field. Here, we present simulations carried out with an improved version of our Monte Carlo code for the calculation of geosynchrotron radiation. Replacing the formerly analytically parametrised longitudinal air shower development with CORSIKA-generated longitudinal profiles, we study the radio flux variations arising from inherent fluctuations between individual air showers. Additionally, we quantify the dependence of the radio emission on the nature of the primary particle by comparing the emission generated by proton- and iron-induced showers. This is only the first step in the incorporation of a more realistic air shower model into our Monte Carlo code. The inclusion of highly realistic CORSIKA-based particle energy, momentum and spatial distributions together with an analytical treatment of ionisation los...
ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code
Directory of Open Access Journals (Sweden)
Jaafar EL Bakkali
2016-07-01
OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronic problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface; one is provided by our Java-based application, named ERSN-OpenMC. The main feature of this application is to provide users with an easy-to-use and flexible graphical interface to build better and faster simulations with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the building process of the OpenMC code and related libraries; users are also given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.
Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method
International Nuclear Information System (INIS)
A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo codes are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the
Espel, Federico Puente
The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
Kalyvas, N.; Liaparinos, P.
2014-03-01
Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors. Amongst the most important is the interaction of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-01
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
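The TG-43 benchmark such source models are validated against follows, in its 1D point-source form, a short formula. The dose-rate constant and radial dose function tabulated below are illustrative placeholders, not the published SelectSeed consensus data:

```python
import numpy as np

# TG-43 1D point-source formalism: D(r) = S_k * Lambda * (r0/r)^2 * g(r).
S_k = 1.0        # air-kerma strength, U (illustrative)
Lambda = 0.965   # dose-rate constant, cGy h^-1 U^-1 (hypothetical value)
r0 = 1.0         # reference distance, cm

# Radial dose function g(r) on a coarse grid (hypothetical numbers).
r_grid = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
g_grid = np.array([1.04, 1.00, 0.87, 0.73, 0.60, 0.49])

def dose_rate(r):
    """Dose rate at distance r (cm), linearly interpolating g(r)."""
    g = np.interp(r, r_grid, g_grid)
    return S_k * Lambda * (r0 / r) ** 2 * g

print(dose_rate(1.0))  # equals Lambda at the reference point by construction
```

Comparing a Monte Carlo tally of this quantity against the registry tables is essentially what the 1.3% radial-function agreement quoted above measures (the full formalism adds the anisotropy function F(r, θ)).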
Lin, N.-H.; Saxena, V. K.
1992-01-01
The physical characteristics of the Antarctic stratospheric aerosol are investigated via a comprehensive analysis of the SAGE II data during the most severe ozone depletion episode of October 1987. The aerosol size distribution is found to be bimodal in several instances using the randomized minimization search technique, which suggests that the distribution of a single mode may be used to fit the data in the retrieved size range only at the expense of resolution for the larger particles. On average, in the region below 18 km, a wavelike perturbation with the upstream tilting for the parameters of mass loading, total number, and surface area concentration is found to be located just above the region of the most severe ozone depletion.
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
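The adjoint idea can be illustrated on the smallest possible depletion problem: a two-nuclide chain whose single decay constant is the uncertain input. For dN/dt = M(p)N and a response J = cᵀN(T), the adjoint λ solves dλ/dt = -Mᵀλ backward from λ(T) = c, and dJ/dp = ∫₀ᵀ λ(t)ᵀ (∂M/∂p) N(t) dt. The sketch below (assuming SciPy for the matrix exponential; all numbers illustrative) cross-checks the adjoint integral against finite differences:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-nuclide chain A -> B; the decay constant p is the uncertain input.
p, T = 1.0e-3, 1000.0
N0 = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])                 # response J = amount of B at time T

def M_of(q):
    return np.array([[-q, 0.0], [q, 0.0]])

dM_dp = np.array([[-1.0, 0.0], [1.0, 0.0]])
M = M_of(p)

# Trapezoid quadrature of the adjoint sensitivity integral.
ts = np.linspace(0.0, T, 201)
vals = np.array([
    (expm(M.T * (T - t)) @ c) @ dM_dp @ (expm(M * t) @ N0) for t in ts
])
dJ_dp_adj = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))

# Central finite-difference check of the same sensitivity.
def J(q):
    return c @ expm(M_of(q) * T) @ N0

h = 1.0e-8
dJ_dp_fd = (J(p + h) - J(p - h)) / (2.0 * h)
print(dJ_dp_adj, dJ_dp_fd)
```

One adjoint solve yields the sensitivity to every uncertain input at once, which is why the cost does not grow rapidly with the number of inputs, as the abstract notes.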
State-of-the-art in Comprehensive Cascade Control Approach through Monte-Carlo Based Representation
Directory of Open Access Journals (Sweden)
A.H. Mazinan
2015-10-01
The research develops a comprehensive cascade control approach for spacecraft, evaluated through a Monte Carlo-based representation with respect to the state of the art. Conventional methods are not sufficient to deal with such a process under control when a range of system parameter variations is used to represent realistic situations. The new insights in this area are valuable for improving the performance of a class of spacecraft, and the acquired results are applicable in both real and academic environments. In brief, a double closed loop combining a quaternion-based control approach with an Euler-based control approach handles the three-axis rotational angles and their rates synchronously, in association with pulse modulation analysis and control allocation, where the dynamics and kinematics of the system under control are analyzed. A series of experiments is carried out to assess the performance of the approach, with the aforementioned Monte Carlo-based representation used to verify the investigated outcomes.
Experimental method for scanning the surface depletion region in nitride based heterostructures
International Nuclear Information System (INIS)
The group-III-nitride semiconductors feature strong spontaneous polarization in the [0001] direction and charges on the respective polar surfaces. Within the resulting surface depletion region the surface field causes band bending and affects the optical transitions in quantum wells. We studied the changes in the emission characteristics of a single GaInN quantum well as its distance to the surface, and hence the influence of the surface field, varies. We observe a strong increase of the quantum well emission energy and a decrease of the line width when the surface field partially compensates the piezoelectric field of the quantum well. A scan of the total surface depletion region with a single quantum well as probe was performed. The obtained emission data allow for the direct determination of the width of the depletion region. The experimental method is promising for studies of the surface field and the surface potential of III-nitride surfaces and interfaces. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Energy Technology Data Exchange (ETDEWEB)
Zhu Feng [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Yan Jiawei, E-mail: jwyan@xmu.edu.cn [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Lu Miao [Pen-Tung Sah Micro-Nano Technology Research Center, Xiamen University, Xiamen, Fujian 361005 (China); Zhou Yongliang; Yang Yang; Mao Bingwei [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China)
2011-10-01
Highlights: > A novel strategy based on a combination of interferent depletion and redox cycling is proposed for the plane-recessed microdisk array electrodes. > The strategy breaks up the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. > The electrodes enhance the current signal by redox cycling. > The electrodes can work regardless of the reversibility of interfering species. - Abstract: The fabrication, characterization and application of plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. Different from the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, works for detecting the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by the redox cycling between the plane electrode and the microdisk array. Based on the above working principle, the plane-recessed microdisk array electrodes break up the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode. The advantages of the
Application of backtracking algorithm to depletion calculations
International Nuclear Information System (INIS)
Based on the theory of the linear chain method for analytical depletion calculations, the burn-up matrix is decoupled by a divide-and-conquer strategy and linear chains with the Markov property are formed. The density, activity and decay heat of every nuclide in a chain can be calculated by analytical solutions. Every possible reaction path of the nuclide must be considered during the linear chain establishment process. To ensure calculation precision and efficiency, an algorithm was sought that can cover all the reaction paths of a nuclide and search the paths automatically according to the problem description and precision restrictions. Through analysis and comparison of several kinds of searching algorithms, the backtracking algorithm was selected to search and calculate the linear chains using the depth-first search (DFS) method. The resulting depletion program can solve the depletion problem adaptively and with high fidelity. The solution space and time complexity of the program were analyzed. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burn-up problem of the first core of the China Experimental Fast Reactor (CEFR). The initial verification and validation of the program were performed by this calculation. (author)
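The chain-search idea described above can be sketched as a backtracking depth-first search over a reaction network. The toy network and branching ratios below are invented for illustration only (this is not real nuclear data, and the actual program's pruning rules are surely richer):

```python
REACTIONS = {  # toy transmutation network, invented for illustration
    "U238": [("U239", 1.0)],
    "U239": [("Np239", 1.0)],
    "Np239": [("Pu239", 1.0)],
    "Pu239": [("Pu240", 0.3), ("U235", 0.7)],  # two competing paths
}

def enumerate_chains(start, cutoff=1e-3):
    """Return all linear chains from `start` whose cumulative branching
    fraction exceeds `cutoff`, found by depth-first backtracking."""
    chains = []

    def dfs(path, frac):
        successors = REACTIONS.get(path[-1], [])
        if not successors:
            chains.append((list(path), frac))  # terminal nuclide: record chain
            return
        for product, ratio in successors:
            new_frac = frac * ratio
            if new_frac < cutoff or product in path:  # prune / avoid cycles
                chains.append((list(path), frac))
                continue
            path.append(product)   # descend one step
            dfs(path, new_frac)
            path.pop()             # backtrack

    dfs([start], 1.0)
    return chains

for chain, frac in enumerate_chains("U238"):
    print(" -> ".join(chain), f"(fraction {frac:.2f})")
```

Each completed chain would then be solved analytically (Bateman-style) for nuclide densities; the backtracking guarantees every path above the precision cutoff is visited exactly once.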
International Nuclear Information System (INIS)
The channel capacity of ocean water is limited by propagation distance and optical properties. Previous studies of this problem are based on water-tank experiments with different amounts of Maalox antacid. However, the propagation distance is limited by the experimental set-up, and the optical properties differ from those of ocean water, so the experimental results are not accurate enough for the physical design of underwater wireless communication links. This letter develops a Monte Carlo model to study the channel capacity of underwater optical communications. Moreover, this model can flexibly configure various parameters of the transmitter, receiver and channel, and is suitable for the physical design of underwater optical communication links. (paper)
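A minimal version of such a channel model can be sketched by photon tracing; everything below is an illustrative assumption (invented coefficients, a crude isotropic scattering kernel, and a simple on-axis receiver), far simpler than a full link model:

```python
import math
import random

def received_fraction(a, b, L, n_photons=20000, rng=random.Random(1)):
    """Monte Carlo estimate of the fraction of photons reaching range L [m]
    through water with absorption coefficient a and scattering coefficient b
    [1/m]; total attenuation c = a + b."""
    c = a + b
    hits = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                # axial position, direction cosine
        while True:
            z += mu * -math.log(rng.random()) / c  # free path ~ Exp(c)
            if z >= L:
                hits += 1               # reached the receiver plane
                break
            if z < 0.0 or rng.random() < a / c:
                break                   # escaped backwards or absorbed
            mu = 2.0 * rng.random() - 1.0  # crude isotropic re-scatter
    return hits / n_photons

# Clearer water should transmit more over the same 10 m range
print(received_fraction(0.05, 0.02, 10.0), received_fraction(0.2, 0.3, 10.0))
```

The received fraction feeds directly into a capacity estimate for a given transmit power and receiver sensitivity, which is the kind of parameter sweep the letter's model enables.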
A new Monte-Carlo based simulation for the CryoEDM experiment
Raso-Barnett, Matthew
2015-01-01
This thesis presents a new Monte-Carlo based simulation of the physics of ultra-cold neutrons (UCN) in complex geometries and its application to the CryoEDM experiment. It includes a detailed description of the design and performance of this simulation, along with its use in a project to study the magnetic depolarisation time of UCN within the apparatus due to magnetic impurities in the measurement cell, which is a crucial parameter in the sensitivity of a neutron electric-dipole-moment (nEDM) ...
CARMEN: a Monte Carlo-based planning system using linear programming from direct apertures
International Nuclear Information System (INIS)
The use of Monte Carlo (MC) has shown an improvement in the accuracy of dose calculation compared with the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full simulation of both the beam transport in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Chuanhai Chen
2013-03-01
Full Text Available The aim of this study is to obtain the reliability of a series system and to analyze the reliability of a machining center, so a modified reliability modelling method based on Monte Carlo simulation for series systems is proposed. The reliability function built by the classical statistical method, under the assumption that machine tools are repaired to an as-good-as-new state, may be biased in real cases. Here, the reliability functions of the subsystems are established respectively, and the reliability model is then built according to the reliability block diagram. The fitted reliability function of the machine tools is established using the failure data of a sample generated by Monte Carlo simulation, whose inverse reliability function is solved by a linearization technique based on radial basis functions. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method provides a more accurate reliability model than the conventional method.
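The series-system part of the approach can be sketched directly: the system lifetime is the minimum of its subsystem lifetimes. The subsystem list and Weibull parameters below are invented placeholders, not the paper's fitted values:

```python
import random
import statistics

SUBSYSTEMS = [  # (shape k, scale lambda in hours) -- illustrative only
    (1.2, 4000.0),   # e.g. spindle
    (0.9, 6000.0),   # e.g. feed system
    (1.5, 8000.0),   # e.g. tool magazine
]

def simulate_system_lifetimes(n=10000, rng=random.Random(42)):
    """Each simulated system lifetime is the minimum of its subsystem
    Weibull lifetimes (series system: first subsystem failure kills it)."""
    return [min(rng.weibullvariate(lam, k) for k, lam in SUBSYSTEMS)
            for _ in range(n)]

def reliability_at(t, lifetimes):
    """Empirical R(t): fraction of simulated systems surviving past t."""
    return sum(life > t for life in lifetimes) / len(lifetimes)

lifetimes = simulate_system_lifetimes()
print("MTBF estimate:", round(statistics.mean(lifetimes)), "h")
print("R(500 h) =", reliability_at(500.0, lifetimes))
```

The simulated failure sample is what the paper then refits with a radial-basis-function linearization; the sketch stops at the empirical reliability curve.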
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-01-01
A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...
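The phase-space-let pre-processing can be sketched as a binning pass; the record layout, bin widths and particle tuples below are invented for illustration and are not gDPM's actual format:

```python
from collections import defaultdict

def psl_key(particle, e_bin_width=0.25, cell_size=1.0):
    """Bin key: (type, energy band, spatial cell) so a GPU batch can
    transport same-type, similar-energy, co-located particles together."""
    ptype, energy, x, y = particle
    return (ptype,
            int(energy / e_bin_width),                 # energy band index
            int(x // cell_size), int(y // cell_size))  # spatial cell

def bin_phase_space(particles):
    bins = defaultdict(list)
    for p in particles:
        bins[psl_key(p)].append(p)
    return bins

particles = [
    ("photon", 1.1, 0.2, 0.3), ("photon", 1.2, 0.4, 0.1),
    ("electron", 1.1, 0.2, 0.3), ("photon", 3.0, 5.2, 0.0),
]
bins = bin_phase_space(particles)
for key, batch in sorted(bins.items()):
    print(key, len(batch))
```

The payoff on GPU hardware is that each bin launches as a warp-coherent batch, avoiding the divergence that mixed particle types and energies would cause.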
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
Energy Technology Data Exchange (ETDEWEB)
Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2009-03-21
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
International Nuclear Information System (INIS)
Purpose: Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently produced only in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology, we have proposed and are in the initial development stages of a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo based feasibility study of the compact MRT system design. Methods: Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and a MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Results: Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose versus depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Conclusions: Monte Carlo simulations demonstrate
Monte Carlo capabilities of the SCALE code system
International Nuclear Information System (INIS)
Highlights: • Foundational Monte Carlo capabilities of SCALE are described. • Improvements in continuous-energy treatments are detailed. • New methods for problem-dependent temperature corrections are described. • New methods for sensitivity analysis and depletion are described. • Nuclear data, user interfaces, and quality assurance activities are summarized. - Abstract: SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2
International Nuclear Information System (INIS)
A four-dimensional (x, y, z, t) composite superquadric-based object model of the human heart for Monte Carlo simulation of radiological imaging systems has been developed. The phantom models the real temporal geometric conditions of a beating heart for frame rates up to 32 per cardiac cycle. Phantom objects are described by boolean combinations of superquadric ellipsoid sections. Moving spherical coordinate systems are chosen to model wall movement whereby points of the ventricle and atria walls are assumed to move towards a moving center-of-gravity point. Due to the non-static coordinate systems, the atrial/ventricular valve plane of the mathematical heart phantom moves up and down along the left ventricular long axis resulting in reciprocal emptying and filling of atria and ventricles. Compared to the base movement, the epicardial apex as well as the superior atria area are almost fixed in space. Since geometric parameters of the objects are directly applied on intersection calculations of the photon ray with object boundaries during Monte Carlo simulation, no phantom discretization artifacts are involved
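The boolean-superquadric primitive behind such a phantom can be sketched as a point-inclusion test; the exponents and semi-axis values below are invented for illustration, not the phantom's actual parameters:

```python
def inside_superquadric(x, y, z, a, b, c, eps1=1.0, eps2=1.0):
    """True if (x, y, z) lies inside the superquadric with semi-axes a, b, c.
    eps1/eps2 control 'squareness'; eps1 = eps2 = 1 gives an ordinary
    ellipsoid."""
    r = (abs(x / a) ** (2.0 / eps2)
         + abs(y / b) ** (2.0 / eps2)) ** (eps2 / eps1)
    return r + abs(z / c) ** (2.0 / eps1) <= 1.0

def inside_wall(x, y, z):
    """Boolean combination: a hollow 'wall' = outer shell minus inner cavity,
    the kind of operation used to build chamber walls from primitives."""
    return (inside_superquadric(x, y, z, 3.0, 3.0, 4.0)
            and not inside_superquadric(x, y, z, 2.4, 2.4, 3.4))

print(inside_wall(2.7, 0.0, 0.0))   # a point within the wall thickness
print(inside_wall(0.0, 0.0, 0.0))   # a point in the inner cavity
```

In the actual Monte Carlo, the analytic boundary (not a voxelized version of it) is intersected with each photon ray, which is why no discretization artifacts arise.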
Inverse treatment planning for radiation therapy based on fast Monte Carlo dose calculation
International Nuclear Information System (INIS)
An inverse treatment planning system based on fast Monte Carlo (MC) dose calculation is presented. It allows optimisation of intensity-modulated dose distributions in 15 to 60 minutes on present-day personal computers. If a multi-processor machine is available, parallel simulation of particle histories is also possible, leading to further reductions in calculation time. The optimisation process is divided into two stages. The first stage produces fluence profiles based on pencil beam (PB) dose calculation. The second stage starts with MC verification and post-optimisation of the PB dose and fluence distributions. Because of the potential to accurately model beam modifiers, MC-based inverse planning systems are able to optimise compensator thicknesses and leaf trajectories instead of intensity profiles only. The corresponding techniques, whose implementation is the subject of future work, are also presented here. (orig.)
GPU-based high performance Monte Carlo simulation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br
2009-07-01
Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
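A single-core sketch of the kind of "simple but time-consuming" benchmark such a GPU/CPU comparison times is given below; the cross-sections are invented and the physics is deliberately minimal (1-D slab, isotropic scattering):

```python
import math
import random

def transmission(sigma_t, sigma_s, thickness, n=50000, rng=random.Random(7)):
    """Monte Carlo estimate of the fraction of neutrons leaking through a
    1-D homogeneous slab. sigma_t: total macroscopic cross-section [1/cm];
    sigma_s: scattering cross-section; thickness in cm."""
    leaked = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                 # position [cm], direction cosine
        while True:
            x += mu * -math.log(rng.random()) / sigma_t  # flight ~ Exp(sigma_t)
            if x < 0.0:
                break                    # leaked backwards
            if x >= thickness:
                leaked += 1              # transmitted through the slab
                break
            if rng.random() > sigma_s / sigma_t:
                break                    # absorbed at the collision site
            mu = 2.0 * rng.random() - 1.0  # isotropic scatter
    return leaked / n

# Thicker slabs transmit less
print(transmission(1.0, 0.5, 2.0))
print(transmission(1.0, 0.5, 4.0))
```

Each history is independent, which is exactly what makes the problem embarrassingly parallel and a natural fit for the GPU speed-up the study reports.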
IMPROVED ALGORITHM FOR ROAD REGION SEGMENTATION BASED ON SEQUENTIAL MONTE-CARLO ESTIMATION
Directory of Open Access Journals (Sweden)
Zdenek Prochazka
2014-12-01
Full Text Available In recent years, many researchers and car makers have put intensive effort into the development of autonomous driving systems. Since visual information is the main modality used by human drivers, a camera mounted on a moving platform is a very important kind of sensor, and various computer vision algorithms that handle the vehicle's surrounding situation are under intensive research. Our final goal is to develop a vision-based lane detection system able to handle various types of road shapes, working on both structured and unstructured roads, ideally in the presence of shadows. This paper presents a modified road region segmentation algorithm based on sequential Monte-Carlo estimation. A detailed description of the algorithm is given, and evaluation results show that the proposed algorithm outperforms the segmentation algorithm developed as part of our previous work, as well as a conventional algorithm based on colour histograms.
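The predict-weight-resample cycle at the heart of sequential Monte-Carlo estimation can be sketched generically; the scalar state (a road-boundary column position) and the Gaussian measurement model below are invented simplifications, far cruder than the paper's image-based likelihood:

```python
import math
import random

def smc_step(particles, observation, noise=2.0, rng=random.Random(3)):
    """One predict-weight-resample cycle of a sequential Monte-Carlo filter."""
    # 1. Predict: diffuse each particle under a random-walk motion model
    moved = [p + rng.gauss(0.0, noise) for p in particles]
    # 2. Weight: Gaussian likelihood of the observation given each particle
    weights = [math.exp(-(p - observation) ** 2 / (2.0 * noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    # 3. Resample particles in proportion to their normalised weights
    return rng.choices(moved, weights=[w / total for w in weights],
                       k=len(moved))

# Track a road-boundary column position drifting to the right
particles = [float(c) for c in range(80, 120)]   # initial uniform spread
for obs in (100.0, 101.0, 103.0, 104.0):
    particles = smc_step(particles, obs)
print("estimated boundary column:", round(sum(particles) / len(particles)))
```

In the segmentation setting, step 2 would score each particle against image evidence (colour, texture, edges) rather than a single scalar observation.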
Pattern-oriented Agent-based Monte Carlo simulation of Cellular Redox Environment
DEFF Research Database (Denmark)
Tang, Jiaowei; Holcombe, Mike; Boonen, Harrie C.M.
… In our project, an agent-based Monte Carlo model [6] is offered to study the dynamic relationship between extracellular and intracellular redox and the complex networks of redox reactions. In the model, pivotal redox-related reactions will be included, and the reactants … Because the complex networks and dynamics of redox are still not completely understood, results of existing experiments will be used to validate the model according to the ideas of pattern-oriented agent-based modelling [8]. The simulation of this model is computationally intensive, thus an application 'FLAME …'
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) has shown an improvement in the accuracy of dose calculation compared with the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full simulation of both the beam transport in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed, and a distribution centred on zero with a standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
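The physical intuition behind the hydrogen-plus-oxygen surrogate can be sketched with electron densities: Compton interactions scale with electrons per gram, i.e. with Z/A, which is about 0.5 for every element except hydrogen (about 1.0). The Z/A values below are standard atomic data, while the soft-tissue composition is an illustrative assumption, not the paper's calibration:

```python
# Standard Z/A values (electrons per nucleon) for the elements involved
Z_OVER_A = {"H": 0.992, "C": 0.500, "N": 0.500, "O": 0.500,
            "P": 0.484, "Ca": 0.499}

def electrons_per_gram(mass_fractions):
    """Relative electron density: sum over elements of w_i * (Z/A)_i."""
    return sum(w * Z_OVER_A[el] for el, w in mass_fractions.items())

soft_tissue = {"H": 0.105, "C": 0.256, "N": 0.027, "O": 0.602,
               "P": 0.002, "Ca": 0.008}  # illustrative composition
# Hydrogen-plus-oxygen surrogate: keep the H fraction, replace the rest by O
surrogate = {"H": 0.105, "O": 0.895}

e_true = electrons_per_gram(soft_tissue)
e_surr = electrons_per_gram(surrogate)
print(f"relative difference: {abs(e_true - e_surr) / e_true:.4%}")
```

Because C, N, O, P and Ca all share nearly the same Z/A, the substitution changes the electron density only marginally, which is why the megavoltage attenuation coefficients of the two schemes agree so closely.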
International Nuclear Information System (INIS)
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway imposes three main model requirements: (i) to calculate the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) to describe the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) to combine all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulation are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the 'location factor method' and the combination of the relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short model-to-model feature intercomparison
A CNS calculation line based on a Monte-Carlo method
International Nuclear Information System (INIS)
The neutronic design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. The decisions taken in this sense affect not only the neutron flux in the source neighbourhood, which can be evaluated by a standard deterministic method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the CNS, very time-consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to obtain accurate figures for standard magnitudes such as average neutron flux, neutron current, angular flux, and luminosity. The Monte Carlo method is a unique and powerful tool for calculating the transport of neutrons and photons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The use of MCNP as the main neutronic design tool leads to a fast and reliable method for performing calculations in a relatively short time with low statistical errors, if the proper scheme is applied. The design goal is to evaluate the performance of the CNS, its beam tubes and neutron guides, at specific experimental locations in the reactor hall and in the neutron or experimental hall. In this work, the calculation methodology used to design a CNS and its associated Neutron Beam Transport Systems (NBTS), based on the use of the MCNP code, is presented. (author)
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation of the iteration coefficient only changes the frequency of fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully reproduces the tendency of experimental results, where the average light fission yield is in the range of 90
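Sampling from the two intersecting Gaussians can be sketched directly; the parameter values below (means, widths, scission point) are invented placeholders, not the paper's fitted numbers:

```python
import random

def sample_yields(mu_l, mu_r, sigma_l, sigma_r, r_c, n=20000,
                  rng=random.Random(11)):
    """Draw fragment mass numbers from the two-Gaussian toy distribution:
    pick the left or right hump, then keep only masses on that hump's
    side of the scission point r_c."""
    masses = []
    while len(masses) < n:
        if rng.random() < 0.5:
            a = rng.gauss(mu_l, sigma_l)   # light hump
            if a <= r_c:
                masses.append(a)
        else:
            a = rng.gauss(mu_r, sigma_r)   # heavy hump
            if a > r_c:
                masses.append(a)
    return masses

# Asymmetric split of a nucleus near A = 236: light hump ~95, heavy ~140
masses = sample_yields(95.0, 140.0, 6.0, 6.0, 118.0)
light = [a for a in masses if a <= 118.0]
print("light-fragment mean A:", round(sum(light) / len(light)))
```

Varying the σ or μ parameters in this sampler shifts the humps and their spread in exactly the way the abstract describes for the yield-distribution frequencies.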
Adaptation of GEANT4 to Monte Carlo dose calculations based on CT data
International Nuclear Information System (INIS)
The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. However, as a general purpose Monte Carlo code, not all the functions were specifically designed and fully optimized for applications in radiation therapy. One of the primary issues is the computational efficiency, which is especially critical when patient CT data have to be imported into the simulation model. In this paper we summarize the relevant aspects of the GEANT4 tracking and geometry algorithms and introduce our work on using the code to conduct dose calculations based on CT data. The emphasis is focused on modifications of the GEANT4 source code to meet the requirements for fast dose calculations. The major features include a quick voxel search algorithm, fast volume optimization, and the dynamic assignment of material density. These features are ready to be used for tracking the primary types of particles employed in radiation therapy such as photons, electrons, and heavy charged particles. Re-calculation of a proton therapy treatment plan generated by a commercial treatment planning program for a paranasal sinus case is presented as an example
Directory of Open Access Journals (Sweden)
Yu Hyeong
2010-12-01
Background: The removal of high-abundance proteins from plasma is an efficient approach to investigating flow-through proteins in biomarker discovery studies. Most depletion methods are based on multiple immunoaffinity methods available commercially, including LC columns and spin columns. Despite its usefulness, high-abundance depletion has an intrinsic problem, the sponge effect, which should be assessed during depletion experiments. Concurrently, the yield of depletion of high-abundance proteins must be monitored during use of the depletion column. To date, there is no practical technique for measuring the recovery of flow-through proteins after depletion and assessing the capacity for capture of high-abundance proteins. Results: In this study, we developed a method of measuring the recovery yield of a multiple affinity removal system column easily and rapidly, using enhanced green fluorescent protein as an indicator of flow-through proteins. We also monitored capture efficiency through depletion of a high-abundance protein, albumin labeled with fluorescein isothiocyanate. Conclusion: This simple method can be applied easily to common high-abundance protein depletion methods, effectively reducing experimental variation in biomarker discovery studies.
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2015-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods
International Nuclear Information System (INIS)
In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D2O moderator in a thermal spectrum), the CANDU reactor has moreover an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems.
Ma, Xiaoyao; Hall, Randall W; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem. PMID:26747795
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao; Hall, Randall W.; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H{sub 2}O, N{sub 2}, and F{sub 2} molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
International Nuclear Information System (INIS)
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem
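The "minus sign problem" that recurs in these SiLK abstracts can be illustrated with a toy estimator (this is not the SiLK algorithm itself): when Monte Carlo weights can be negative, observables are divided by the average sign, and as that average approaches zero the statistical error explodes.

```python
import random

def average_sign(n, p_minus, seed=2):
    """Average sign of n toy Monte Carlo configurations, each negative with
    probability p_minus. The expectation is 1 - 2*p_minus; signed QMC
    estimators divide by this quantity, so a small value ruins statistics."""
    rng = random.Random(seed)
    return sum(-1 if rng.random() < p_minus else 1 for _ in range(n)) / n

bad = average_sign(100_000, p_minus=0.45)   # near 0.10: severe sign problem
good = average_sign(100_000, p_minus=0.05)  # near 0.90: benign
```

Reducing the fraction of negative-weight configurations, which is what the SiLK learning stage does by optimizing the determinant basis, moves a simulation from the first regime to the second.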
Simulation of Cone Beam CT System Based on Monte Carlo Method
Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing
2014-01-01
Adaptive Radiation Therapy (ART) was developed based on Image-Guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established with a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the paths of beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed within 1% down to a depth of 10 cm, and more than 85% of the calculated lateral-dose-profile points agreed within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.
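The "within 1%" PDD agreement quoted above is a point-wise dose-difference test. A minimal version, with hypothetical depth-dose samples and normalization to the measured maximum (the abstract does not state its convention), looks like this:

```python
def pdd_percent_diff(calc, meas):
    """Point-wise percent difference between calculated and measured PDD
    curves, normalized to the measured maximum (an assumed convention)."""
    dmax = max(meas)
    return [100.0 * (c - m) / dmax for c, m in zip(calc, meas)]

# Hypothetical depth-dose samples (% of maximum) at 1 cm depth steps.
measured   = [100.0, 92.0, 84.5, 77.6, 71.2, 65.4]
calculated = [100.3, 91.6, 84.9, 77.2, 71.6, 65.0]
diffs = pdd_percent_diff(calculated, measured)
within_1pct = all(abs(d) <= 1.0 for d in diffs)   # the paper's criterion
```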
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
International Nuclear Information System (INIS)
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, HOWNEAR and RANMARGET, in the EGSnrc code were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, which records the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change to EGSnrc is made. However, as EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
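The 20-80 profiling step can be reproduced in miniature with Python's cProfile (the study itself used gprof on the EGSnrc code); hot_function below is a hypothetical stand-in for inner-loop routines like HOWNEAR.

```python
import cProfile
import io
import pstats

def hot_function(n):
    # Stand-in for an inner-loop routine that dominates total run time.
    return sum(i * i for i in range(n))

def main():
    return sum(hot_function(10_000) for _ in range(20))

prof = cProfile.Profile()
prof.enable()
main()
prof.disable()

# Rank functions by cumulative time, as gprof does, to find the "20%".
stream = io.StringIO()
pstats.Stats(prof, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()                 # hot_function tops the listing
```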
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
International Nuclear Information System (INIS)
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.
Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-06-21
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
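The common idea behind all three source implementations, grouping particles by type and energy so that a GPU batch transports like particles together, can be sketched as follows. The record layout and 1 MeV bin width are assumptions for illustration, not gDPM's actual phase-space format.

```python
from collections import defaultdict

def bin_phase_space(particles, e_bin_width=1.0):
    """Group phase-space particles by (type, energy bin) so each GPU batch
    transports particles of the same type and similar energy together."""
    bins = defaultdict(list)
    for p in particles:
        ptype, energy = p[0], p[1]
        bins[(ptype, int(energy // e_bin_width))].append(p)
    return bins

# Hypothetical records: (type, energy in MeV, x, y).
phsp = [
    ("photon",   2.3,  0.1, -0.2),
    ("photon",   2.7,  1.0,  0.4),
    ("electron", 0.6,  0.0,  0.0),
    ("photon",   5.1, -0.3,  0.2),
]
bins = bin_phase_space(phsp)
# The 2.3 and 2.7 MeV photons land in the same batch: ("photon", 2).
```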
GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications
Lemaréchal, Yannick; Bert, Julien; Falconnet, Claire; Després, Philippe; Valeri, Antoine; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris
2015-07-01
In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient-specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10⁶ simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications.
GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications
International Nuclear Information System (INIS)
In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient-specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10⁶ simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications. (paper)
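The track-length estimator mentioned for dose scoring credits every particle step with E·(μ_en/ρ)·ℓ of deposited energy per unit mass, rather than waiting for discrete interactions, which is why it converges quickly at brachytherapy energies. The numbers below are placeholders (0.0355 MeV is a typical 125I photon energy), not GGEMS data:

```python
def tle_dose(track_lengths_cm, mu_en_rho_cm2_g, energy_mev, voxel_mass_g):
    """Track-length estimator for a single voxel: each step of length l
    scores energy_mev * mu_en_rho * l; dividing by the voxel mass gives
    dose in MeV/g in this toy unit system."""
    scored = sum(energy_mev * mu_en_rho_cm2_g * l for l in track_lengths_cm)
    return scored / voxel_mass_g

# Three photon steps crossing one voxel of 1 mg mass.
dose = tle_dose([0.10, 0.25, 0.05], mu_en_rho_cm2_g=0.03,
                energy_mev=0.0355, voxel_mass_g=0.001)
print(round(dose, 3))  # 0.426 MeV/g
```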
International Nuclear Information System (INIS)
Pollutant nitrogen deposition effects on soil and foliar element concentrations were investigated in acidic and limestone grasslands, located in one of the most nitrogen- and acid-rain-polluted regions of the UK, using plots treated for 8-10 years with 35-140 kg N ha⁻¹ y⁻¹ as NH4NO3. Historical data suggest both grasslands have acidified over the past 50 years. Nitrogen deposition treatments caused the grassland soils to lose 23-35% of their total available bases (Ca, Mg, K, and Na), and they became acidified by 0.2-0.4 pH units. Aluminium, iron and manganese were mobilised and taken up by limestone grassland forbs and were translocated down the acid grassland soil. Mineral nitrogen availability increased in both grasslands, and many species showed foliar N enrichment. This study provides the first definitive evidence that nitrogen deposition depletes base cations from grassland soils. The resulting acidification, metal mobilisation and eutrophication are implicated in driving floristic changes. - Nitrogen deposition causes base cation depletion, acidification and eutrophication of semi-natural grassland soils
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.
Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system. Beyond
Nanoscale Field Effect Optical Modulators Based on Depletion of Epsilon-Near-Zero Films
Lu, Zhaolin; Shi, Kaifeng
2015-01-01
The field effect in metal-oxide-semiconductor (MOS) capacitors plays a key role in field-effect transistors (FETs), which are the fundamental building blocks of modern digital integrated circuits. Recent works show that the field effect can also be used to make optical/plasmonic modulators. In this paper, we report field effect electro-absorption modulators (FEOMs) each made of an ultrathin epsilon-near-zero (ENZ) film, as the active material, sandwiched in a silicon or plasmonic waveguide. Without a bias, the ENZ film maximizes the attenuation of the waveguides and the modulators work at the OFF state; contrariwise, depletion of the carriers in the ENZ film greatly reduces the attenuation and the modulators work at the ON state. The double capacitor gating scheme is used to enhance the modulation by the field effect. According to our simulation, extinction ratio up to 3.44 dB can be achieved in a 500-nm long Si waveguide with insertion loss only 0.71 dB (85.0%); extinction ratio up to 7.86 dB can be achieved...
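The quoted extinction ratio and insertion loss follow from ON/OFF power transmissions through the standard decibel definitions ER = 10·log10(T_on/T_off) and IL = −10·log10(T_on). In this sketch the transmissions are back-computed from the abstract's own figures (0.71 dB IL ≈ 85% throughput), not simulated:

```python
import math

def db(ratio):
    return 10.0 * math.log10(ratio)

def modulator_figures(t_on, t_off):
    """Extinction ratio and insertion loss (both in dB) from the ON- and
    OFF-state power transmissions of a waveguide modulator."""
    return db(t_on / t_off), -db(t_on)

# ON transmission of 85% matches the quoted 0.71 dB insertion loss;
# the OFF transmission is chosen to reproduce the 3.44 dB extinction ratio.
t_on = 0.850
t_off = t_on / 10 ** (3.44 / 10)
er, il = modulator_figures(t_on, t_off)
print(round(er, 2), round(il, 2))  # 3.44 0.71
```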
Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa
2011-08-01
In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on human skin of the hand during upper-limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
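The regression step is ordinary least squares of the absorbance spectrum on chromophore extinction coefficients. The sketch below uses made-up coefficients and a synthetic six-wavelength spectrum, solving the normal equations in pure Python; the real method additionally maps the coefficients through Monte Carlo-derived conversion vectors.

```python
def lstsq(X, y):
    """Solve min ||Xb - y|| via the normal equations (Gaussian elimination
    without pivoting; adequate for these small, well-conditioned systems)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for p in range(n):                       # forward elimination
        for q in range(p + 1, n):
            f = A[q][p] / A[p][p]
            A[q] = [a - f * c for a, c in zip(A[q], A[p])]
            b[q] -= f * b[p]
    out = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        out[i] = (b[i] - sum(A[i][j] * out[j] for j in range(i + 1, n))) / A[i][i]
    return out

# Made-up extinction coefficients for (melanin, HbO2, Hb) at six wavelengths.
ext = [[1.0, 0.2, 0.3], [0.9, 0.5, 0.4], [0.8, 1.0, 0.6],
       [0.7, 0.9, 1.1], [0.6, 0.4, 1.3], [0.5, 0.2, 0.9]]
true_conc = [0.5, 1.2, 0.8]                       # melanin, HbO2, Hb
absorbance = [sum(e * c for e, c in zip(row, true_conc)) for row in ext]
mel, hbo, hb = lstsq(ext, absorbance)
so2 = hbo / (hbo + hb)                            # oxygen saturation = 0.6
```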
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Chan, Yuen-Dat; Henning, Reyco; Gehman, Victor M; Johnson, Rob A; Jordan, David V; Kazkaz, Kareem; Knapp, Markus; Kroninger, Kevin; Lenz, Daniel; Liu, Jing; Liu, Xiang; Marino, Michael G; Mokhtarani, Akbar; Pandola, Luciano; Schubert, Alexis G; Tomei, Claudia
2008-01-01
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda 76Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that better correlation between simulation and test is achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
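The RSM-plus-MCS idea is that once the expensive model is replaced by a cheap polynomial surrogate, forward uncertainty propagation by random sampling becomes trivial. The two-parameter surface below is hypothetical, not the paper's fourth-order fit:

```python
import random

def surrogate(k, m):
    # Hypothetical cheap response surface standing in for the full model.
    return 2.0 + 1.5 * k - 0.8 * m + 0.3 * k * m

def propagate(n, k_mean, k_std, m_mean, m_std, seed=3):
    """Monte Carlo propagation of parameter uncertainty through the RSM:
    sample the parameters, evaluate the surrogate, collect statistics."""
    rng = random.Random(seed)
    out = [surrogate(rng.gauss(k_mean, k_std), rng.gauss(m_mean, m_std))
           for _ in range(n)]
    mean = sum(out) / n
    std = (sum((o - mean) ** 2 for o in out) / (n - 1)) ** 0.5
    return mean, std

mean, std = propagate(20_000, k_mean=1.0, k_std=0.05, m_mean=2.0, m_std=0.1)
# Analytically E[out] = 2 + 1.5*1 - 0.8*2 + 0.3*1*2 = 2.5; MC lands nearby.
```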
Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method
International Nuclear Information System (INIS)
We present a pedagogical approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of the electric properties of matter with 'virtual experiments' built from these models, in which the physical concepts can be presented at different levels of formalization
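A minimal "virtual experiment" in this spirit: each lattice site hosts an electron-hole pair with Boltzmann-like probability exp(−Eg/2kT), so the carrier count, and hence the conductance, grows with temperature. The 0.3 eV gap is a toy value chosen so the effect is visible with a small sample; it is not taken from the article.

```python
import math
import random

def carrier_count(n_sites, temperature_k, e_gap_ev=0.3, seed=4):
    """Monte Carlo rule: a site contributes a free electron-hole pair
    whenever a uniform draw falls below exp(-Eg / 2kT)."""
    k_b = 8.617e-5                        # Boltzmann constant in eV/K
    p = math.exp(-e_gap_ev / (2.0 * k_b * temperature_k))
    rng = random.Random(seed)
    return sum(1 for _ in range(n_sites) if rng.random() < p)

n_cold = carrier_count(200_000, 300.0)
n_hot = carrier_count(200_000, 400.0)
# More carriers at higher T: semiconductor resistance drops with
# temperature, the opposite of metallic behaviour.
```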
CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC
International Nuclear Information System (INIS)
SuperMC is a Computer-Aided-Design (CAD) based Monte Carlo (MC) program for integrated simulation of nuclear systems developed by the FDS Team (China), making use of a hybrid MC-deterministic method and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. The treatment of multi-physics processes and the use of advanced computer technologies, such as automatic geometry modeling, intelligent data analysis and visualization, high-performance parallel computing and cloud computing, contribute to the efficiency of the code. SuperMC2.1, the latest version of the code for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model
Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method
International Nuclear Information System (INIS)
When fission occurs in the nuclear fuel of a reactor core, it releases numerous neutrons and γ rays, whose energy deposition in fuel components produces effects such as thermal stress and radiation damage that influence the safe operation of the reactor. Using the three-dimensional Monte Carlo transport calculation program MCNP and continuous cross-section libraries based on the ENDF/B series, the heat rates of the heat sources in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode were calculated, yielding precise values for the control rod, thimble plug and new burnable poison rod containing Gd, so as to provide a basis for reactor design and safety verification. (authors)
Seabed radioactivity based on in situ measurements and Monte Carlo simulations
International Nuclear Information System (INIS)
Activity concentration measurements were carried out on the seabed using the underwater detection system KATERINA. The efficiency calibration was performed in the energy range 350–2600 keV, using in situ and laboratory measurements. The efficiency results were reproduced and extended over a broader energy range, from 150 to 2600 keV, by Monte Carlo simulations using the MCNP5 code. The concentrations of 40K, 214Bi and 208Tl were determined with the present approach. The results were validated by laboratory measurements. - Highlights: • The KATERINA system was applied to marine sediments. • MC simulations using MCNP5 reproduced experimental energy spectra and efficiency. • The in situ method provided quantitative measurements. • The measurements were validated with lab-based methods
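Once the detector efficiency at a peak energy is known (here from the MCNP5-reproduced calibration curve), converting a net peak area to an activity concentration is the one-line formula A = N/(t·ε·Iγ·m). The peak data below are hypothetical; 0.107 is the γ emission probability of the 1460.8 keV 40K line.

```python
def activity_bq_per_kg(net_counts, live_time_s, efficiency, gamma_yield, mass_kg):
    """Activity concentration from a full-energy peak: counts divided by
    live time, full-energy-peak efficiency, emission probability and mass."""
    return net_counts / (live_time_s * efficiency * gamma_yield * mass_kg)

# Hypothetical 40K peak: 5000 net counts in a 3600 s underwater run.
a40k = activity_bq_per_kg(5000, 3600.0, efficiency=0.012,
                          gamma_yield=0.107, mass_kg=1.0)
print(round(a40k, 1))  # 1081.7 Bq/kg
```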
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Lui, Dorothy; Haider, Masoom; Wong, Alexander
2015-01-01
Background: Prostate cancer is one of the most common forms of cancer found in males, making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates, and with the use of endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities, and the surface coil intensity correction built into MRI scanners is used to reduce these inhomogeneities. However, the correction, typically performed at the MRI scanner level, leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is useful particularly for improving the image quality of coil i...
A Monte Carlo-based treatment-planning tool for ion beam therapy
Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairan, A
2013-01-01
Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...
Simulation of nuclear material identification system based on Monte Carlo sampling method
International Nuclear Information System (INIS)
Background: Because of the hazards posed by radioactivity, nuclear material identification can be a difficult problem. Purpose: To reflect the particle transport processes in nuclear fission and demonstrate the effectiveness of the signatures of the Nuclear Materials Identification System (NMIS), based on physical principles and experimental statistical data. Methods: We established a Monte Carlo simulation model of the nuclear material identification system and then acquired three channels of time-domain pulse signals. Results: Auto-Correlation Functions (AC), Cross-Correlation Functions (CC), Auto Power Spectral Densities (APSD) and Cross Power Spectral Densities (CPSD) between channels yield several signatures that reveal characteristics of the nuclear material. Conclusions: The simulation results indicate that this approach can help to further study the features of the system. (authors)
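As a minimal illustration of the correlation signatures listed above, the sketch below cross-correlates two hypothetical detector channels that observe a common source with a relative delay; the signal model, noise level and delay are invented for illustration, not taken from NMIS.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two channels see the same (hypothetical) source signal, channel B
# delayed by 5 samples relative to channel A, plus independent noise.
n, delay = 4096, 5
source = rng.normal(0.0, 1.0, n + delay)
ch_a = source[delay:] + 0.3 * rng.normal(0.0, 1.0, n)   # ch_a[t] = s[t+5] + noise
ch_b = source[:n] + 0.3 * rng.normal(0.0, 1.0, n)       # ch_b[t] = s[t]   + noise

# Cross-correlation of the mean-removed channels; the peak location
# recovers the relative delay between the channels.
cc = np.correlate(ch_a - ch_a.mean(), ch_b - ch_b.mean(), mode="full")
lag = np.argmax(cc) - (n - 1)   # ch_b[t] = ch_a[t - 5], so the peak sits at lag -5
```

The auto and cross power spectral densities mentioned in the abstract are the Fourier transforms of such correlation functions.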
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence follow different execution paths, we use a simulation scheme in which photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
Werner, M J; Sornette, D
2009-01-01
In meteorology, engineering and computer sciences, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating ar...
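A bootstrap particle filter is the simplest member of the sequential Monte Carlo family mentioned above. The sketch below applies it to a hypothetical linear-Gaussian state-space model, not the paper's renewal process; the dynamics, noise levels and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: x_t = 0.9*x_{t-1} + N(0,1),  y_t = x_t + N(0,0.5)
T, N = 50, 1000                    # time steps, number of particles
true_x = np.zeros(T)
obs = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 1.0)
    obs[t] = true_x[t] + rng.normal(0.0, 0.5)

particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0.0, 1.0, N)      # propagate (prior)
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)       # likelihood weights
    w /= w.sum()
    estimates[t] = np.dot(w, particles)                        # filtered mean
    particles = particles[rng.choice(N, size=N, p=w)]          # multinomial resampling
```

Because the filter fuses the noisy observation with the model prior, its state estimate is typically less noisy than the raw observations, which is the core point of data assimilation.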
Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS
International Nuclear Information System (INIS)
Since the launch of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated experimental results of retrieving the phase-shift gradient information using the five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
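The five-step phase-stepping retrieval mentioned above amounts to extracting the first Fourier component of the stepping curve. A minimal sketch, assuming an ideal sinusoidal stepping curve with invented amplitude and phase:

```python
import numpy as np

def retrieve_phase(intensities):
    """Recover phi from I_k = a + b*cos(2*pi*k/N + phi), measured at N
    equally spaced phase steps, via the first Fourier component."""
    N = len(intensities)
    k = np.arange(N)
    s = np.sum(intensities * np.sin(2 * np.pi * k / N))  # = -(b*N/2)*sin(phi)
    c = np.sum(intensities * np.cos(2 * np.pi * k / N))  # =  (b*N/2)*cos(phi)
    return np.arctan2(-s, c)

# Synthetic five-step scan with a known phase shift
phi_true = 0.7
k = np.arange(5)
I = 10.0 + 3.0 * np.cos(2 * np.pi * k / 5 + phi_true)
phi_est = retrieve_phase(I)
```

In the imaging system, this retrieval is performed per detector pixel, and the recovered phase maps to the refraction-induced phase-shift gradient of the sample.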
A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry
International Nuclear Information System (INIS)
The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aimed at solving the modeling challenges of multi-physics coupled simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Processes, was recently developed and integrated into MCAM5.2. This method can convert in both directions between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are then generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)
International Nuclear Information System (INIS)
Internal radiation dose calculations based on Chinese models are important in nuclear medicine. Most existing models are based on the physical and anatomical data of Caucasians, whose anatomical structures and physiological parameters differ considerably from those of the Chinese population, which can significantly affect internal dose estimates. It is therefore necessary to establish a model based on Chinese ethnic characteristics and apply it to radiation dosimetry calculations. In this study, a voxel model was established based on the high-resolution Visible Chinese Human (VCH). The transport of photons and electrons was simulated using the MCNPX Monte Carlo code. Absorbed fractions (AF) and specific absorbed fractions (SAF) were calculated, and S-factors and mean absorbed doses for organs with 99mTc located in the liver were also obtained. In comparison with the VIP-Man and MIRD models, discrepancies were found to be correlated with racial and anatomical differences in organ mass and inter-organ distance. The Chinese-specific data obtained here can replace internal dosimetry data based on other models that have previously been applied to the Chinese adult population. The results provide a reference for nuclear medicine, such as dose verification after surgery and potential radiation evaluation for radionuclides in preclinical research. (authors)
Institute of Scientific and Technical Information of China (English)
Xu Xiao-Bo; Zhang He-Ming; Hu Hui-Yong; Ma Jian-Li; Xu Li-Jun
2011-01-01
The base-collector depletion capacitance for vertical SiGe npn heterojunction bipolar transistors (HBTs) on silicon on insulator (SOI) is split into vertical and lateral parts. This paper proposes a novel analytical depletion capacitance model of this structure for the first time. A large discrepancy is predicted when the present model is compared with the conventional depletion model, and it is shown that the capacitance decreases as the reverse collector-base bias increases and exhibits a kink when the reverse collector-base bias reaches the effective vertical punch-through voltage, a voltage which varies with the collector doping concentration; this is consistent with measurement results. The model can be employed for a fast evaluation of the depletion capacitance of an SOI SiGe HBT and has useful applications in the design and simulation of high-performance SiGe circuits and devices.
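For orientation, the textbook one-sided junction expression that conventional depletion models build on can be sketched as follows. The parameter values are invented, and the paper's model additionally splits the capacitance into vertical and lateral parts and treats vertical punch-through, which this sketch omits.

```python
import numpy as np

def depletion_capacitance(v_cb, c_j0=1e-15, v_bi=0.7, m=0.5):
    """Textbook junction depletion capacitance (F) under reverse bias:
    C(V) = Cj0 / (1 + V/Vbi)^m, with V >= 0 the reverse collector-base
    bias, Vbi the built-in potential and m the grading coefficient
    (0.5 for an abrupt junction). Illustrative only."""
    return c_j0 / (1.0 + v_cb / v_bi) ** m

# Capacitance falls monotonically with increasing reverse bias
biases = np.linspace(0.0, 3.0, 7)
caps = depletion_capacitance(biases)
```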
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
International Nuclear Information System (INIS)
within 9%. For a 410-665 keV energy window, the measured sensitivity for a centred point source was 1.53% and mouse and rat scatter fractions were respectively 12.0% and 18.3%. The scattered photons produced outside the rat and mouse phantoms contributed to 24% and 36% of total simulated scattered coincidences. Simulated and measured single and prompt count rates agreed well for activities up to the electronic saturation at 110 MBq for the mouse and rat phantoms. Volumetric spatial resolution was 17.6 μL at the centre of the FOV with differences less than 6% between experimental and simulated spatial resolution values. The comprehensive evaluation of the Monte Carlo modelling of the Mosaic(TM) system demonstrates that the GATE package is adequately versatile and appropriate to accurately describe the response of an Anger logic based animal PET system
Unfiltered Monte Carlo-based tungsten anode spectral model from 20 to 640 kV
Hernandez, A. M.; Boone, John M.
2014-03-01
A Monte Carlo-based tungsten anode spectral model, conceptually similar to the previously-developed TASMIP model, was developed. This new model provides essentially unfiltered x-ray spectra with better energy resolution and significantly extends the range of tube potentials for available spectra. MCNPX was used to simulate x-ray spectra as a function of tube potential for a conventional x-ray tube configuration with several anode compositions. Thirty five x-ray spectra were simulated and used as the basis of interpolating a complete set of tungsten x-ray spectra (at 1 kV intervals) from 20 to 640 kV. Additionally, Rh and Mo anode x-ray spectra were simulated from 20 to 60 kV. Cubic splines were used to construct piecewise polynomials that interpolate the photon fluence per energy bin as a function of tube potential for each anode material. The tungsten anode spectral model using interpolating cubic splines (TASMICS) generates minimally-filtered (0.8 mm Be) x-ray spectra from 20 to 640 kV with 1 keV energy bins. The rhodium and molybdenum anode spectral models (RASMICS and MASMICS, respectively) generate minimally-filtered x-ray spectra from 20 to 60 kV with 1 keV energy bins. TASMICS spectra showed no statistically significant differences when compared with the empirical TASMIP model, the semi-empirical Birch and Marshall model, and a Monte Carlo spectrum reported in AAPM TG 195. The RASMICS and MASMICS spectra showed no statistically significant differences when compared with their counterpart RASMIP and MASMIP models. Spectra from the TASMICS, MASMICS, and RASMICS models are available in spreadsheet format for interested users.
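The interpolation step above can be sketched in a few lines. The tube-potential range matches the abstract, but the fluence values and the single energy bin are invented placeholders; the real model interpolates the photon fluence in every energy bin across 35 simulated spectra.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Photon fluence in ONE hypothetical energy bin, "simulated" at a few
# sparse tube potentials, then interpolated to 1 kV intervals.
kv_simulated = np.array([20.0, 40.0, 80.0, 160.0, 320.0, 640.0])
fluence_bin = np.array([0.1, 0.8, 2.5, 4.0, 5.2, 6.0])   # made-up values

spline = CubicSpline(kv_simulated, fluence_bin)   # piecewise cubic polynomial
kv_grid = np.arange(20, 641)                      # 1 kV steps, 20-640 kV
fluence_grid = spline(kv_grid)
```

Repeating this per energy bin yields a complete spectrum at any tube potential in the covered range.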
International Nuclear Information System (INIS)
Highlights: • Monte-Carlo burnup simulations are often used as reference calculations. • Monte-Carlo burnup simulations suffer, however, from prohibitive calculation times. • This paper proposes a method to accelerate Monte-Carlo burnup codes. • The method factorizes the transport steps using the correlated sampling method. - Abstract: Monte-Carlo burnup calculations are nowadays the reference method for obtaining fuel inventories in reactor configurations. Their main drawback is the very long computing time associated with the calculation. A method is presented here which attempts to speed up the calculation by replacing full simulations with perturbation calculations based on correlated sampling. The method is tested in a PWR assembly configuration and numerical results are given for the figure of merit. These results show that a speed-up of up to a factor of 5 can be achieved.
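The variance-reduction idea behind correlated sampling can be shown on a toy integral: score the nominal and perturbed systems on the same random histories, so that their statistical noise cancels in the difference. The integrand and the 5% perturbation below are illustrative, not a transport simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.random(n)                      # one shared set of "histories"

f_nominal = np.exp(-x)                 # nominal score
f_perturbed = np.exp(-1.05 * x)        # score with a +5% perturbed parameter

# Correlated estimate of the perturbation effect: both systems are
# scored on the SAME samples, so their noise largely cancels.
delta_corr = np.mean(f_perturbed - f_nominal)

# Uncorrelated alternative: independent samples for each system give a
# far noisier difference estimate for the same cost.
y = rng.random(n)
delta_uncorr = np.mean(np.exp(-1.05 * y)) - np.mean(np.exp(-x))
```

The exact difference here is (1 - e^-1.05)/1.05 - (1 - e^-1) ≈ -0.0130; the correlated estimator reaches it with orders of magnitude less variance than the uncorrelated one.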
International Nuclear Information System (INIS)
Full text: The Director General of the International Atomic Energy Agency (IAEA), Mohamed ElBaradei, issued today the following statement: The IAEA has been involved in United Nations efforts relating to the impact of the use of depleted uranium (DU) ammunition in Kosovo. It has supported the United Nations Environment Programme (UNEP) in the assessment which it is making, at the request of the Secretary-General, of that impact. In this connection, in November 2000, Agency experts participated in a UNEP-led fact-finding mission in Kosovo. DU is only slightly radioactive, being about 40% as radioactive as natural uranium. Chemically and physically, DU behaves in the same way as natural uranium. The chemical toxicity is normally the dominant factor for human health. However, it is necessary to carefully assess the impact of DU in the special circumstances in which it was used, e.g. to determine whether it was inhaled or ingested or whether fragments came into close contact with individuals. It is therefore essential, before an authoritative conclusion can be reached, that a detailed survey of the territory in which DU was used and of the people who came in contact with the depleted uranium in any form be carried out. In the meantime it would be prudent, as recommended by the leader of the November UNEP mission, to adopt precautionary measures. Depending on the results of the survey further measures may be necessary. The Agency, within its statutory responsibilities and on the basis of internationally accepted radiation safety standards, will continue to co-operate with other organizations, in particular WHO and UNEP, with a view to carrying out a comprehensive assessment. Co-operation by and additional information from NATO will be prerequisites. The experience gained from such an assessment could be useful for similar studies that may be carried out elsewhere in the Balkans or in the Gulf. (author)
International Nuclear Information System (INIS)
Highlights: • We present a new Monte Carlo method to perform sensitivity/perturbation calculations. • Sensitivity of keff, reaction rates and point kinetics parameters to nuclear data. • Fully continuous, implicitly constrained Monte Carlo sensitivities to scattering distributions. • Implementation of the method in the continuous-energy Monte Carlo code SERPENT. • Verification against ERANOS and TSUNAMI generalized perturbation theory results. - Abstract: In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
International Nuclear Information System (INIS)
We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. In contrast with other quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation nor the nodal release problem which occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results concerning simple systems in order to illustrate the numerical feasibility of the proposed algorithm.
Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport
Jia, Xun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B
2009-01-01
Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the Dose Planning Method (DPM) Monte Carlo dose calculation package (Sempau et al, Phys. Med. Biol., 45(2000)2263-2291) on GPU architecture under the CUDA platform. The implementation has been tested against the original sequential DPM code on CPU in two cases. Our results demonstrate the adequate accuracy of the GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of 4.5 and 5.5 have been observed for the electron and photon test cases, respectively, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU.
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
Directory of Open Access Journals (Sweden)
Weinmann Martin
2009-12-01
Background: The purpose of the present study is to compare finite-size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods: An fsPB and an MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. They were then applied to static lung IMSRT planning based on three different geometrical patient models (one-phase static CT, density-overwrite one-phase static CT, average CT of the same patient). Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints, both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results: In the phantom measurements, the MC dose engine showed discrepancies. Conclusions: It is feasible to employ the MC dose engine for optimization of lung IMSRT, and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable, and therefore MC-based optimization on static patient models is considered safe.
A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo
Chen, Yajun; Liu, Ding; Liang, Junli
2013-10-01
During printing quality inspection, the inspection of color error is an important task. However, the RGB color space is device-dependent, so RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. First, modeling color targets and testing color targets are established, used respectively in the modeling and performance testing processes. Second, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Third, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to that of the least-squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
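The Bayesian polynomial-fitting step can be sketched with a simple random-walk Metropolis sampler. Note the paper uses Gibbs sampling, and the 1-D quadratic model, noise level and step size below are illustrative assumptions rather than the paper's color-space setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D stand-in for the RGB->CIELAB fit: quadratic
# polynomial with Gaussian noise, coefficients sampled by MCMC.
true_coef = np.array([1.0, 2.0, -0.5])
x = rng.random(200)
X = np.vander(x, 3, increasing=True)        # columns [1, x, x^2]
y = X @ true_coef + rng.normal(0.0, 0.05, len(x))

def log_post(c):                            # flat prior + Gaussian likelihood
    r = y - X @ c
    return -0.5 * np.sum(r * r) / 0.05**2

coef = np.zeros(3)
lp = log_post(coef)
samples = []
for i in range(20_000):
    prop = coef + rng.normal(0.0, 0.01, 3)  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp: # Metropolis accept/reject
        coef, lp = prop, lp_prop
    if i >= 10_000:                         # discard burn-in
        samples.append(coef)
post_mean = np.mean(samples, axis=0)
```

The posterior mean of the coefficients plays the role of the fitted transform; in the paper the same idea is applied to the multivariate RGB-to-CIELAB polynomial.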
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Yuan Xu
2014-03-01
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise, caused by the low photon numbers, from the simulated scatter images. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region of interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
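The angular interpolation step can be sketched in one dimension: because the scatter signal varies smoothly with projection angle, MC values at roughly 31 sparse angles suffice to recover all 364. The sinusoidal "scatter" profile below is invented for illustration.

```python
import numpy as np

n_angles = 364
all_angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
sparse_idx = np.arange(0, n_angles, 12)          # 31 sparse MC angles

# Hypothetical smooth scatter signal vs. projection angle
true_scatter = 100 + 20 * np.cos(all_angles) + 5 * np.sin(2 * all_angles)
mc_scatter = true_scatter[sparse_idx]            # values "computed" by MC

# Periodic linear interpolation back to all 364 angles: append the
# first sparse sample at 2*pi so the wrap-around segment is covered.
interp = np.interp(all_angles,
                   np.append(all_angles[sparse_idx], 2 * np.pi),
                   np.append(mc_scatter, mc_scatter[0]))
max_rel_err = np.max(np.abs(interp - true_scatter) / true_scatter)
```

For a signal this smooth, the interpolation error is well below 1%, consistent with the sparse-angle strategy described above.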
International Nuclear Information System (INIS)
The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator or fusion facility requires accurate modeling of complicated geometry. Manual modeling, using the text interface of an MC code to define the geometrical objects, is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer-aided design (CAD) systems. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs, such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modifications, but implies latent data inconsistency due to differences between the geometry modeling systems. In the second approach, an MC code uses the CAD data for direct particle tracking, or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling, with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulation. Recently, we have developed a CAD-based geometry processing module for MC particle simulation using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to a CSG data structure. In this paper, the performances of the text-based model, CAD-based tracking and internal CSG conversion are compared using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; John, Dwayne O [ORNL; Koju, Vijay [ORNL
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating the timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model, which works well for isotropic but fails for anisotropic scatter (the case for many biomedical sample scattering problems), is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means of capturing the anisotropic scattering characteristics of samples at the shallow depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman, et al.,1 to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus suggesting the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
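A stripped-down version of such a photon random walk is sketched below: isotropic scattering only, with no polarization or Berry-phase tracking, and illustrative optical coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal photon random walk in a semi-infinite turbid medium at z > 0.
# mu_s (scattering) and mu_a (absorption) are illustrative, in mm^-1.
mu_s, mu_a = 10.0, 0.1
mu_t = mu_s + mu_a
n_photons = 1000
max_depths = np.zeros(n_photons)

for i in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])      # launched into the medium
    deepest = 0.0
    while True:
        step = -np.log(rng.random()) / mu_t    # free path ~ Exp(mu_t)
        pos = pos + step * direction
        deepest = max(deepest, pos[2])
        if pos[2] < 0.0:                       # escaped through the surface
            break
        if rng.random() < mu_a / mu_t:         # absorbed
            break
        cos_t = 2.0 * rng.random() - 1.0       # isotropic scattering
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t * cos_t)
        direction = np.array([sin_t * np.cos(phi),
                              sin_t * np.sin(phi), cos_t])
    max_depths[i] = deepest

mean_depth = max_depths.mean()   # mean maximum penetration depth (mm)
```

The full simulation replaces the isotropic phase function with an anisotropic one and additionally rotates the polarization reference frame at every scattering event.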
International Nuclear Information System (INIS)
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
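The contrast between nodal KDE estimates and cell-averaged histogram tallies can be sketched on a generic 1-D sample set. The Gaussian data and bandwidth are illustrative; this is not the integral-track estimator itself.

```python
import numpy as np

def kde_estimate(samples, x, h):
    """Gaussian kernel density estimate at points x (nodal values),
    the KDE analogue of binning tally scores into mesh cells."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u * u).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
samples = rng.normal(0.0, 1.0, 50_000)          # stand-in for tally scores
x = np.linspace(-3.0, 3.0, 61)                  # "mesh nodes"
kde = kde_estimate(samples, x, h=0.2)

# Cell-averaged comparison, as a conventional histogram tally produces
hist, edges = np.histogram(samples, bins=30, range=(-3, 3), density=True)
```

Comparing the two requires exactly the data-transfer choice discussed above: either average the nodal KDE values over each cell, or attribute cell averages to node positions.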
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Directory of Open Access Journals (Sweden)
Hamed Kargaran
2016-04-01
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) is proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of a PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs, such as those of MATLAB, FORTRAN and the Miller-Park algorithm, through the specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
Development of an unstructured mesh based geometry model in the Serpent 2 Monte Carlo code
International Nuclear Information System (INIS)
This paper presents a new unstructured mesh based geometry type, developed in the Serpent 2 Monte Carlo code as a by-product of another study related to multi-physics applications and coupling to CFD codes. The new geometry type is intended for the modeling of complicated and irregular objects, which are not easily constructed using the conventional CSG based approach. The capability is put to the test by modeling the 'Stanford Critical Bunny' – a variation of a well-known 3D test case for methods used in the world of computer graphics. The results show that the geometry routine in Serpent 2 can handle the unstructured mesh, and that the use of delta-tracking results in a considerable reduction in the overall calculation time as the geometry is refined. The methodology is still very much under development, with the final goal of implementing a geometry routine capable of reading standardized geometry formats used by 3D design and imaging tools in industry and medical physics. (author)
TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization
Energy Technology Data Exchange (ETDEWEB)
Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2014-06-15
Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited dose is stored separately for each beamlet based on this index. Due to limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet. Plan optimization then follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, with the number of particles sampled for a beamlet proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10{sup 5} particles per beamlet in the first round and 10{sup 8} particles per beam in the second round are enough to obtain a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
Monte-Carlo Simulation for PDC-Based Optical CDMA System
FAHIM AZIZ UMRANI; AHSAN AHMED URSANI; ABDUL WAHEED UMRANI
2010-01-01
This paper presents a Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems and analyses their performance in terms of the BER (Bit Error Rate). The spreading sequences chosen for CDMA are Perfect Difference Codes. Furthermore, this paper derives the expressions for noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling, as required for Monte-Carlo simulation. The simulated res...
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a frontier is only considered when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
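The Woodcock (delta-tracking) scheme mentioned in this abstract can be sketched in one dimension. This is a minimal illustration, not CUBMC's code: the voxel layout, cross-section values, and function names below are made up for the example.

```python
import math
import random

def woodcock_distance(x0, voxels, dx, sigma, rng):
    """Sample the distance to the next real collision with Woodcock
    (delta) tracking in a 1D voxel geometry: the photon flies with the
    majorant cross section, and a tentative collision is accepted with
    probability sigma_local / sigma_max, so voxel boundary crossings
    never have to be computed explicitly."""
    sigma_max = max(sigma.values())
    x = x0
    while True:
        x += -math.log(rng.random()) / sigma_max      # majorant flight
        i = int(x / dx)
        if i >= len(voxels):
            return None                               # escaped the phantom
        if rng.random() < sigma[voxels[i]] / sigma_max:
            return x                                  # real collision
        # otherwise: virtual collision, keep flying

rng = random.Random(1234)
sigma = {"water": 0.5, "bone": 1.0}                   # 1/cm, made-up values
voxels = ["water"] * 50 + ["bone"] * 50
d = woodcock_distance(0.0, voxels, dx=0.1, sigma=sigma, rng=rng)
```

In a homogeneous medium every tentative collision is accepted, so the sampled free paths reduce to the usual exponential distribution with mean 1/sigma.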
Eutrophication of mangroves linked to depletion of foliar and soil base cations
Fauzi, A.; Skidmore, A.K.; Heitkonig, I.M.A.; Gils, van H.; Schlerf, M.
2014-01-01
There is growing concern that increasing eutrophication causes degradation of coastal ecosystems. Studies in terrestrial ecosystems have shown that increasing the concentration of nitrogen in soils contributes to the acidification process, which leads to leaching of base cations. To test the effects
International Nuclear Information System (INIS)
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs
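The efficiency-map reweighting step described in this abstract — dividing the IMRT field's EPID image by the open-field image and using the ratio to redistribute phase-space particle weights — can be sketched as follows. The function name, pixel-index mapping, and guard against division by zero are assumptions, not the authors' implementation.

```python
import numpy as np

def reweight_phase_space(positions, weights, imrt_image, open_image, pixel):
    """Reweight open-field phase-space particles with an EPID-derived
    efficiency map (sketch of the measurement-based MC idea).

    positions : (N, 2) particle crossing points in cm
    weights   : (N,) statistical weights from the open-field phase space
    """
    eff = imrt_image / np.maximum(open_image, 1e-12)   # efficiency map
    # map each particle position to the nearest enclosing image pixel
    ij = np.clip((positions / pixel).astype(int), 0, np.array(eff.shape) - 1)
    return weights * eff[ij[:, 0], ij[:, 1]]
```

Particles crossing a heavily modulated region of the field keep only the fraction of their weight that the MLC sequence would have transmitted there.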
A Monte Carlo-based knee phantom for in vivo measurements of 241Am in bone
International Nuclear Information System (INIS)
Determination of internal contamination of 241Am can be done by direct counting of gamma emission using a Whole Body Counter. Due to the strong attenuation of the low-energy photons, it is advised to perform the measurement on bones surrounded by a thin layer of tissue. In vivo measurements are performed at CIEMAT using a system of four Low-Energy germanium (LE Ge) detectors calibrated with realistic anthropomorphic phantoms. As an alternative, Monte Carlo techniques are applied on voxel phantoms based on tomographic images to avoid the need of different physical phantoms for different radionuclides and organs. This technique is employed to study the convenience of americium measurements in the knee for the evaluation of the deposition in the whole skeleton. The spatial distribution of the photon fluence through a cylinder along the axis of the leg has been calculated to determine the best counting geometry. The detection efficiency is then calculated and the results are compared with those obtained using the physical phantom to validate the proposed method
Monte Carlo based time-domain Hspice noise simulation for CSA-CRRC circuit
International Nuclear Information System (INIS)
We present a time-domain Monte Carlo based Hspice noise simulation for a charge-sensitive preamplifier-CRRC (CSA-CRRC) circuit with random amplitude piecewise noise waveform. The amplitude distribution of thermal noise is modeled with Gaussian random number. For 1/f noise, its amplitude distribution is modeled with several low-pass filters with thermal noise generators. These time-domain noise sources are connected in parallel with the drain and source nodes of the CMOS input transistor of CSA. The Hspice simulation of the CSA-CRRC circuit with these noise sources yielded ENC values at the output node of the shaper for thermal and 1/f noise of 47e- and 732e-, respectively. ENC values calculated from the frequency-domain transfer function and its integration are 44e- and 882e-, respectively. The values for Hspice simulation are similar to those for frequency-domain calculation. A test chip was designed and fabricated for this study. The measured ENC value was 904 e-. This study shows that the time-domain noise modeling is valid and the transient Hspice noise simulation can be an effective tool for low-noise circuit design
Monte Carlo simulation of novel breast imaging modalities based on coherent x-ray scattering
International Nuclear Information System (INIS)
We present upgraded versions of MC-GPU and penEasyImaging, two open-source Monte Carlo codes for the simulation of radiographic projections and CT, that have been extended and validated to account for the effect of molecular interference in coherent x-ray scatter. The codes were first validated by comparison between simulated and measured energy dispersive x-ray diffraction (EDXRD) spectra. A second validation was performed by evaluating the rejection factor of a focused anti-scatter grid. To exemplify the capabilities of the new codes, the modified MC-GPU code was used to examine the possibility of characterizing breast tissue composition and microcalcifications in a volume of interest inside a whole breast phantom using EDXRD and to simulate a coherent scatter computed tomography (CSCT) system based on first generation CT acquisition geometry. It was confirmed that EDXRD and CSCT have the potential to characterize tissue composition inside a whole breast. The GPU-accelerated code was able to simulate, in just a few hours, a complete CSCT acquisition composed of 9758 independent pencil-beam projections. In summary, it has been shown that the presented software can be used for fast and accurate simulation of novel breast imaging modalities relying on scattering measurements and therefore can assist in the characterization and optimization of promising modalities currently under development. (paper)
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determine the cost of an Assisted Reproductive Technology (ART) treatment in Italy. The motivation for this research is that a deterministic regime is inadequate for an accurate estimate of the cost of this particular treatment, because the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it is expected to vary at a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer, with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow to make the data statistically robust and therefore usable as a reference for benchmarks with other countries. The approach is methodologically rigorous: it uses both Activity Based Costing, to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, to construct the tolerance intervals on the final result. PMID:24752546
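The combination of Activity Based Costing with Monte Carlo simulation described above can be sketched with a toy model: each activity has an hourly rate and a triangular duration distribution, and repeated trials yield the mean cycle cost together with an empirical 95% interval. All activity names, rates, and distribution parameters below are invented for illustration and bear no relation to the paper's data.

```python
import random
import statistics

# hypothetical activities: hourly cost (EUR) and (min, mode, max) duration (h)
ACTIVITIES = {
    "consultation":    (120.0, (0.5, 0.75, 1.5)),
    "lab_procedure":   (200.0, (1.0, 2.0, 4.0)),
    "embryo_transfer": (300.0, (0.5, 1.0, 2.0)),
}

def simulate_cycle_cost(rng):
    # one Monte Carlo trial: a triangular draw per activity, costs summed
    return sum(rate * rng.triangular(lo, hi, mode)
               for rate, (lo, mode, hi) in ACTIVITIES.values())

def cost_interval(n=20000, seed=42):
    # mean cost plus an empirical 95% interval from the sorted sample
    rng = random.Random(seed)
    costs = sorted(simulate_cycle_cost(rng) for _ in range(n))
    mean = statistics.fmean(costs)
    lo, hi = costs[int(0.025 * n)], costs[int(0.975 * n)]
    return mean, lo, hi
```

Because the triangular mean is (min + mode + max)/3, the expected cost of this toy model is about 927 EUR, which the simulated mean should approach as n grows.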
A global reaction route mapping-based kinetic Monte Carlo algorithm
Mitchell, Izaac; Irle, Stephan; Page, Alister J.
2016-07-01
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
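The KMC selection step this abstract builds on — choosing a pathway with probability proportional to its rate and drawing the waiting time from first-order kinetics — can be sketched generically. This is the standard rejection-free KMC step under assumed rate constants, not the GRRM-KMC code itself.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick a pathway with probability
    proportional to its rate constant, and draw the waiting time from
    first-order kinetics (exponential with the total rate)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt
```

Over many steps, a pathway with rate 3k is chosen three times as often as one with rate k, and the mean waiting time converges to 1 over the total rate.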
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
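The calibration-curve construction described above can be sketched as a curve fit over simulated efficiencies. The efficiency values below are made-up placeholders, not MCNP results, and the 70 kg reference and quadratic fit are assumptions for illustration.

```python
import numpy as np

# hypothetical simulated counting efficiencies for phantoms of increasing
# body weight (kg); the numbers are invented for this sketch
weights = np.array([65.0, 80.0, 100.0, 120.0, 140.0, 160.0])
efficiency = np.array([1.02, 0.95, 0.86, 0.79, 0.73, 0.68])

# fit a smooth calibration curve through the simulated points
coeffs = np.polyfit(weights, efficiency, deg=2)

def correction_factor(weight, reference=70.0):
    # TBN_corrected = TBN_measured * correction_factor(subject_weight):
    # heavier subjects count less efficiently, so their factor exceeds 1
    return np.polyval(coeffs, reference) / np.polyval(coeffs, weight)
```

A subject matching the reference phantom gets a factor of exactly 1, while the heaviest subjects in this toy data set are corrected upward by roughly half.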
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
International Nuclear Information System (INIS)
The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, density overwrite one phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. In the phantom measurements, the MC dose engine showed discrepancies < 2%, while the fsPB dose engine showed discrepancies of up to 8% in the presence of lateral electron disequilibrium in the target. In the patient plan optimization, this translates into violations of organ at risk constraints and unpredictable target doses for the fsPB optimized plans. For the 4D MC recalculated dose distribution, MC optimized plans always underestimate the target doses, but the organ at risk doses were comparable. The results depend on the static patient model, and the smallest discrepancy was found for the MC optimized plan on the density overwrite one phase static CT model. It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC based optimization on static patient models is considered safe
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data was stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
Unfolding an under-determined neutron spectrum using genetic algorithm based Monte Carlo
International Nuclear Information System (INIS)
Spallation, in addition to the other photon-neutron reactions in target materials and different components in accelerators, may result in the production of large numbers of energetic protons, which further lead to the production of neutrons and contribute the main component of the total dose. For dosimetric purposes in accelerator facilities, the detector measurements do not directly provide the actual neutron flux values but only a cumulative picture. To obtain the neutron spectrum from the measured data, the response functions of the measuring instrument together with the measurements are used in unfolding techniques, which are frequently employed to recover the hidden spectral information. Here we discuss a genetic algorithm based unfolding technique which is under development. The genetic algorithm is a stochastic method based on natural selection, mimicking the Darwinian theory of survival of the fittest. The method has been tested by unfolding the neutron spectra obtained from a reaction carried out at an accelerator facility, with energetic carbon ions on a thick silver target, along with the corresponding neutron response of a BC501A liquid scintillation detector. The problem dealt with here is under-determined, where the number of measurements is less than the required energy bin information. The results were compared with those obtained using the established unfolding code FERDOR, which unfolds data for completely determined problems. It is seen that the genetic algorithm based solution matches the FERDOR results reasonably well when the smoothing carried out by Monte Carlo is taken into consideration. The method appears to be a promising candidate for unfolding neutron spectra in both under-determined and over-determined cases, where more measurements are available. It also has the advantages of flexibility, computational simplicity, and working well without the need for an initial guess spectrum. (author)
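The genetic-algorithm unfolding idea above can be sketched as minimizing the misfit between the folded spectrum and the measurements. This toy GA (selection, one-point crossover, non-negative mutation, elitism) is an assumed design for illustration, not the authors' code, and the response matrix below is invented.

```python
import numpy as np

def ga_unfold(R, m, n_gen=300, pop_size=60, seed=0):
    """Toy GA spectrum unfolding: evolve non-negative flux vectors phi to
    minimize the misfit ||R @ phi - m|| for an under-determined response
    matrix R (more energy bins than measurements)."""
    rng = np.random.default_rng(seed)
    n_bins = R.shape[1]
    pop = rng.random((pop_size, n_bins))
    for _ in range(n_gen):
        fitness = np.linalg.norm(pop @ R.T - m, axis=1)
        order = np.argsort(fitness)
        elite = pop[order[: pop_size // 2]]        # selection: keep best half
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        cut = rng.integers(1, n_bins, pop_size)    # one-point crossover
        mask = np.arange(n_bins) < cut[:, None]
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random(pop.shape) < 0.1       # sparse Gaussian mutation
        pop = np.clip(pop + mutate * rng.normal(0, 0.05, pop.shape), 0, None)
        pop[0] = elite[0]                          # elitism: never lose best
    fitness = np.linalg.norm(pop @ R.T - m, axis=1)
    return pop[np.argmin(fitness)]
```

Because the problem is under-determined, the GA returns one physically admissible (non-negative) spectrum consistent with the measurements rather than a unique solution.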
Energy Technology Data Exchange (ETDEWEB)
Sihler, Holger [Institute of Environmental Physics, University of Heidelberg (Germany); Max-Planck-Institute for Chemistry, Mainz (Germany); Friess, Udo; Platt, Ulrich [Institute of Environmental Physics, University of Heidelberg (Germany); Wagner, Thomas [Max-Planck-Institute for Chemistry, Mainz (Germany)
2010-07-01
Bromine monoxide (BrO) radicals are known to play an important role in the chemistry of the springtime polar troposphere. Their release by halogen activation processes leads to the almost complete destruction of near-surface ozone during ozone depletion events (ODEs). In order to improve our understanding of the halogen activation processes in three dimensions, we combine active and passive ground-based and satellite-borne measurements of BrO radicals. While satellites cannot resolve the vertical distribution and have rather coarse horizontal resolution, they may provide information on the large-scale horizontal distribution. Information on the spatial variability within a satellite pixel may be derived from our combined ground-based instrumentation. Simultaneous passive multi-axis differential optical absorption spectroscopy (MAX-DOAS) and active long-path DOAS (LP-DOAS) measurements were conducted during the jointly organised OASIS campaign in Barrow, Alaska during Spring 2009 within the scope of the International Polar Year (IPY). Ground-based measurements are compared to BrO column densities measured by GOME-2 in order to find a conclusive picture of the spatial pattern of bromine activation.
International Nuclear Information System (INIS)
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy
International Nuclear Information System (INIS)
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exists in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
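The covariance-matrix estimation and 95% constant-probability contour described in this abstract can be sketched with synthetic draws. The uncertainty magnitudes and the two-quantity setup below are assumptions for illustration; the chi-squared threshold for two degrees of freedom at 95% is the standard 5.991.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# hypothetical 1-sigma systematic and random uncertainties for two
# quantities of interest in the comparison error E = D - S
sys_sd = np.array([0.02, 0.05])
rnd_sd = np.array([0.01, 0.03])

# Monte Carlo draws of the comparison error under both uncertainty sources
error = rng.normal(0.0, sys_sd, (n, 2)) + rng.normal(0.0, rnd_sd, (n, 2))

# sample estimate of the covariance matrix of the comparison error
cov = np.cov(error, rowvar=False)

# the 95% constant-probability contour of a bivariate normal is the
# ellipse where the squared Mahalanobis distance equals chi2_2(0.95)
CHI2_95_2DOF = 5.991
d2 = np.einsum("ni,ij,nj->n", error, np.linalg.inv(cov), error)
inside = float(np.mean(d2 <= CHI2_95_2DOF))
```

For independent sources the diagonal entries of the estimated covariance converge to the sums of the systematic and random variances, and about 95% of the draws fall inside the contour.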
A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation
Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe
2015-07-01
The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
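The energy-accumulation pitfall reported above, where voxels receiving many tiny deposits end up with a lower computed dose than they should, can be reproduced in a few lines. This is a generic single-precision illustration, not code from bGPUMCD; Kahan compensated summation is shown as one standard remedy.

```python
import numpy as np

def naive_accumulate(deposits):
    # Add every energy deposit straight into a single-precision accumulator
    total = np.float32(0.0)
    for d in deposits:
        total = total + np.float32(d)
    return float(total)

def kahan_accumulate(deposits):
    # Kahan compensated summation: a correction term catches the low-order
    # bits that the single-precision accumulator would otherwise discard
    total = np.float32(0.0)
    comp = np.float32(0.0)
    for d in deposits:
        y = np.float32(d) - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return float(total)

# One large deposit followed by many tiny ones (exact total: 100.01).
# Each tiny deposit is below half an ulp of 100.0 in float32, so the naive
# accumulator silently drops every one of them.
deposits = [100.0] + [1e-7] * 100_000
naive = naive_accumulate(deposits)    # stays pinned at 100.0
kahan = kahan_accumulate(deposits)    # recovers the lost 0.01
```

The same mechanism explains the under-dosed voxels near a source: once the voxel's accumulated energy is large, further small deposits fall below the float32 rounding threshold and vanish.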
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)
International Nuclear Information System (INIS)
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq⁻¹ s⁻¹) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments. (author)
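The S-factors tabulated in such studies follow the MIRD formalism: the mean energy emitted per decay, weighted by the fraction absorbed in the target region, divided by the target mass. A minimal sketch with illustrative values (not taken from this mouse model):

```python
def s_factor(delta_mev_per_decay, absorbed_fraction, target_mass_kg):
    """MIRD-style S-factor in Gy Bq^-1 s^-1, i.e. absorbed dose per decay."""
    MEV_TO_J = 1.602176634e-13  # joules per MeV
    return delta_mev_per_decay * MEV_TO_J * absorbed_fraction / target_mass_kg

# Illustrative numbers only: a 0.5 MeV mean beta emission that is 90%
# self-absorbed in a hypothetical 1 g mouse organ
s = s_factor(0.5, 0.9, 1.0e-3)
```

The Monte Carlo simulation's role is to supply the absorbed fractions for each source-target pair and particle energy; the conversion to an S-factor is then just this scaling.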
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system
International Nuclear Information System (INIS)
Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45 000
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-01
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% in the fluence map and 1% in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time
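The core idea, spending fewer particles on beamlets that the optimizer assigns small intensities, can be sketched as a simple proportional allocation rule. The floor value and intensities below are hypothetical; the paper's actual scheme adjusts particle counts iteratively inside the optimization loop.

```python
import numpy as np

def allocate_particles(intensities, total_particles, floor=1000):
    # Spend simulation particles in proportion to each beamlet's current
    # optimized intensity, with a floor so that weak beamlets still get a
    # minimal statistical sample (floor value is an assumption)
    w = np.asarray(intensities, dtype=float)
    w = w / w.sum()
    return np.maximum((w * total_particles).astype(int), floor)

# Beamlet intensities from a previous optimization iteration (illustrative)
intensities = [0.0, 0.1, 0.4, 1.5, 8.0]
counts = allocate_particles(intensities, total_particles=1_000_000)
```

A zero-intensity beamlet still receives the floor count, so its dose influence stays statistically usable in case the next optimization iteration raises its weight.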
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system
Energy Technology Data Exchange (ETDEWEB)
Ma, Jiasen, E-mail: ma.jiasen@mayo.edu; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G. [Department of Radiation Oncology, Division of Medical Physics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States)
2014-12-15
Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45
Monte-Carlo Simulation for PDC-Based Optical CDMA System
Directory of Open Access Journals (Sweden)
FAHIM AZIZ UMRANI
2010-10-01
This paper presents a Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems and analyses their performance in terms of the BER (Bit Error Rate). The spreading sequence chosen for CDMA is Perfect Difference Codes (PDC). Furthermore, this paper derives the expressions for the noise variances from first principles to calibrate the noise for both bipolar (electrical-domain) and unipolar (optical-domain) signalling, as required for Monte-Carlo simulation. The simulated results conform to the theory and show that receiver gain mismatch and splitter loss at the transceiver degrade the system performance.
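The Monte-Carlo BER estimation used in such studies can be sketched for the simplest case, antipodal (bipolar) signalling in additive white Gaussian noise, and checked against the closed-form BER. The PDC spreading and optical-domain noise calibration of the actual system are omitted here.

```python
import math
import random

def ber_monte_carlo(snr_db, n_bits=200_000, seed=1):
    # Transmit random bits as +/-1, add Gaussian noise, threshold at zero,
    # and count bit errors
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * snr))  # noise std for Eb/N0 = snr, Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        rx = (1.0 if bit else -1.0) + rng.gauss(0.0, sigma)
        if (rx > 0.0) != bool(bit):
            errors += 1
    return errors / n_bits

def ber_theory(snr_db):
    # Closed form for antipodal signalling: BER = 0.5 * erfc(sqrt(Eb/N0))
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr))

sim = ber_monte_carlo(6.0)
theory = ber_theory(6.0)
```

Agreement between the simulated and closed-form BER is the same calibration check the paper performs before introducing system impairments such as gain mismatch and splitter loss.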
GPU-based fast Monte Carlo dose calculation for proton therapy
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.
2012-12-01
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life
Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.
2012-01-01
Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials, and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than that of AL6061.
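The heart of such a study, drawing virtual test populations from a Weibull life distribution and watching the scatter of the L10 estimate shrink as the population grows, can be sketched as follows. The shape and scale parameters are illustrative, not the AL6061 fit.

```python
import math
import random

def weibull_life(rng, beta, eta):
    # Inverse-CDF sampling of a two-parameter Weibull distribution:
    # F(t) = 1 - exp(-(t/eta)**beta)
    return eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)

def l10_estimate(rng, n_specimens, beta=2.0, eta=100.0):
    # Crude L10 estimate: the life by which 10% of the sample has failed
    lives = sorted(weibull_life(rng, beta, eta) for _ in range(n_specimens))
    return lives[max(0, int(round(0.10 * n_specimens)) - 1)]

rng = random.Random(42)
small = [l10_estimate(rng, 10) for _ in range(500)]    # 10-specimen trials
large = [l10_estimate(rng, 200) for _ in range(500)]   # 200-specimen trials

def spread(xs):
    # Range between the best and worst L10 estimate across repeated trials
    return max(xs) - min(xs)
```

Repeating the experiment shows the 10-specimen L10 estimates scattering far more widely than the 200-specimen ones, which is precisely the effect that makes material rankings from small test populations unreliable.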
TH-C-17A-08: Monte Carlo Based Design of Efficient Scintillating Fiber Dosimeters
International Nuclear Information System (INIS)
Purpose: To accurately predict Cherenkov radiation generation in scintillating fiber dosimeters. Quantifying Cherenkov radiation provides a method for optimizing fiber dimensions, orientation, optical filters, and photodiode spectral sensitivity to achieve efficient real-time imaging dosimeter designs. Methods: We develop in-house Monte Carlo simulation software to model polymer scintillation fibers' fluorescence and Cherenkov emission in megavoltage clinical beams. The model computes emissions using generation probabilities, wavelength sampling, fiber photon capture, and fiber transport efficiency, and incorporates the fiber's index of refraction, optical attenuation in the Cherenkov and visible spectrum, and fiber dimensions. Detector component selection based on parameters such as silicon photomultiplier efficiency and optical coupling filters separates Cherenkov radiation from the dose-proportional scintillating emissions. The computation uses spectral and geometrical separation of Cherenkov radiation; however, other filtering techniques can expand the model. Results: We compute Cherenkov generation per electron and the fiber capture and transmission of those photons toward the detector, including their dependence on the incident electron beam angle. The model accounts for beam obliquity and non-perpendicular electron-fiber impingement, which increase Cherenkov emission and trapping. Rotation around a square fiber's axis shows trapping efficiency varying from a minimum at normal incidence to a maximum at 45 degrees of rotation. For rotation in the plane formed by the fiber axis and its surface normal, trapping efficiency increases with angle from the normal. The Cherenkov spectrum follows the theoretical curve from 300 nm to 800 nm, the wavelength range of interest defined by silicon photomultiplier and photodiode spectral efficiency. Conclusion: We are able to compute Cherenkov generation in realistic real-time scintillating fiber dosimeter geometries. Design parameters
Cell death following BNCT: A theoretical approach based on Monte Carlo simulations
International Nuclear Information System (INIS)
In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called 'lethal aberrations' (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the ¹⁰B(n,α)⁷Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the ¹⁴N(n,p)¹⁴C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, thus allowing us to validate the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed us to validate the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death.
Cell death following BNCT: A theoretical approach based on Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Ballarini, F., E-mail: francesca.ballarini@pv.infn.it [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bakeine, J. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Bortolussi, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Bruschi, P. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy); Cansolino, L.; Clerici, A.M.; Ferrari, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Protti, N.; Stella, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy); Zonta, A.; Zonta, C. [University of Pavia, Department of Surgery, Experimental Surgery Laboratory, Pavia (Italy); Altieri, S. [University of Pavia, Department of Nuclear and Theoretical Physics, via Bassi 6, Pavia (Italy)] [INFN (National Institute of Nuclear Physics)-Sezione di Pavia, via Bassi 6, Pavia (Italy)
2011-12-15
In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called 'lethal aberrations' (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the ¹⁰B(n,α)⁷Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the ¹⁴N(n,p)¹⁴C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, thus allowing us to validate the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed us to validate the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death.
Cell death following BNCT: a theoretical approach based on Monte Carlo simulations.
Ballarini, F; Bakeine, J; Bortolussi, S; Bruschi, P; Cansolino, L; Clerici, A M; Ferrari, C; Protti, N; Stella, S; Zonta, A; Zonta, C; Altieri, S
2011-12-01
In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called "lethal aberrations" (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the ¹⁰B(n,α)⁷Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the ¹⁴N(n,p)¹⁴C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, thus allowing us to validate the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed us to validate the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death. PMID:21481595
Monte Carlo-based searching as a tool to study carbohydrate structure.
Dowd, Michael K; Kiely, Donald E; Zhang, Jinsong
2011-07-01
A torsion angle-based Monte Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various utilities for evaluating conformers. In its current form, the program operates with several versions of the MM3 and MM4 molecular mechanics programs and has a module to calculate hydrogen-hydrogen coupling constants. The routine was used to study the low-energy exo-cyclic substituents of β-D-glucopyranose and the conformers of D-glucaramide, both of which had been previously studied with MM3 by full conformational searches. For these molecules, the program found all previously reported low-energy structures. The routine was also used to find favorable conformers of 2,3,4,5-tetra-O-acetyl-N,N'-dimethyl-D-glucaramide and D-glucitol, the latter of which is believed to have many low-energy forms. Finally, the technique was used to study the inter-ring conformations of β-gentiobiose, a β-(1→6)-linked disaccharide of D-glucopyranose. The program easily found conformers in the 10 previously identified low-energy regions for this disaccharide. In 6 of the 10 local regions, the same previously identified low-energy structures were found. In the remaining four regions, the search identified structures with slightly lower energies than those previously reported. The approach should be useful for extending modeling studies on acyclic monosaccharides and possibly oligosaccharides. PMID:21536262
Monte-Carlo simulation of an ultra small-angle neutron scattering instrument based on Soller slits
Energy Technology Data Exchange (ETDEWEB)
Rieker, T. [Univ. of New Mexico, Albuquerque, NM (United States); Hubbard, P. [Sandia National Labs., Albuquerque, NM (United States)
1997-09-01
Monte Carlo simulations are used to investigate an ultra small-angle neutron scattering instrument for use at a pulsed source based on a Soller slit collimator and analyzer. The simulations show that for a q_min of ∼10⁻⁴ Å⁻¹ (15 Å neutrons), a few tenths of a percent of the incident flux is transmitted through both collimators at q=0.
Tseung, H. Wan Chan; J. Ma; Beltran, C.
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, consid...
Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.
2012-01-01
A positional proteomics strategy for global N-proteome analysis is presented based on phospho tagging (PTAG) of internal peptides followed by depletion by titanium dioxide (TiO2) affinity chromatography. To this end, N-terminal and lysine amino groups are initially completely dimethylated with formaldehyde
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
X-ray pulsar telescope (XPT) is a complex optical payload that involves optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the essential behaviour of the XPT. The other is that coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, the method takes both the energy and reflectivity information of the X-rays into consideration simultaneously, and formulates the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method; thermal-structural coupling analyses under different working conditions are then carried out. Second, the mirror deformations are obtained using a construction geometry function, and a polynomial function is adopted to fit the deformed mirror and to evaluate the fitting error. Third, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the thermal or structural deformations alone, and the law by which each deformation affects the focusing performance has been obtained: the focusing performance degrades by 30.01%, 14.35% and 7.85% under the thermal-structural, thermal and structural deformations, respectively, with corresponding dispersion-spot RMS values of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through
Monte Carlo-based QA for IMRT of head and neck cancers
Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.
2007-06-01
It is well-known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal
Monte Carlo-based QA for IMRT of head and neck cancers
International Nuclear Information System (INIS)
It is well-known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal
Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.
2006-06-01
The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning, and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as the statistical uncertainties that arise when Monte Carlo based systems are used in patient dose calculations. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New experimental input data required for commissioning are presented. The validation shows excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water', as done in a conventional system. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. Therefore it is imperative that a consistent approach to dose reporting is used.
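The dose-to-medium versus dose-to-water differences quoted above stem from the water-to-medium stopping-power ratio applied per voxel. A minimal sketch of that conversion follows; the ratio values are rough illustrative placeholders, not the commissioning data of this study:

```python
# Sketch of a per-voxel dose-to-medium -> dose-to-water conversion.
# The stopping-power ratios below are illustrative placeholders, not
# measured data; real conversions use spectrum-averaged ratios.
SPR_WATER_TO_MEDIUM = {  # unrestricted mass collision stopping-power ratio s_{w,m}
    "water": 1.00,
    "lung": 1.00,        # near water-equivalent
    "soft_bone": 1.04,
    "cortical_bone": 1.12,
}

def dose_to_water(dose_medium, medium):
    """Convert a Monte Carlo dose-to-medium value to dose-to-water
    via D_w = D_m * s_{w,m} (Bragg-Gray cavity relation)."""
    return dose_medium * SPR_WATER_TO_MEDIUM[medium]

print(dose_to_water(2.0, "cortical_bone"))  # 2.24
```

With ratios of this size, a bone voxel reported as dose-to-medium differs from its dose-to-water value by roughly 10%, which is the order of the discrepancies the abstract reports.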
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const
Performance analysis based on a Monte Carlo simulation of a liquid xenon PET detector
International Nuclear Information System (INIS)
Liquid xenon is a very attractive medium for position-sensitive gamma-ray detectors for a very wide range of applications, namely, in medical radionuclide imaging. Recently, the authors have proposed a liquid xenon detector for positron emission tomography (PET). In this paper, some aspects of the performance of a liquid xenon PET detector prototype were studied by means of Monte Carlo simulation
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.
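For context, the non-adaptive Monte Carlo baseline that such complexity results are measured against is plain random sampling, whose RMS error decays like n^(-1/2) independent of dimension. A minimal sketch (the integrand and parameters are illustrative, not from the paper):

```python
import random

def mc_integrate(f, dim, n, seed=0):
    """Plain (non-adaptive) Monte Carlo estimate of the integral of f
    over the unit cube [0,1]^dim. The RMS error decays like n**-0.5
    regardless of dim -- the baseline rate adaptive methods improve on."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# integral of sum(x_i) over [0,1]^5 is exactly 2.5
est = mc_integrate(lambda x: sum(x), 5, 20000)
print(abs(est - 2.5) < 0.05)  # True
```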
International Nuclear Information System (INIS)
The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone suffices for shielding calculations of large, complex nuclear facilities. To solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method was developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of the discrete ordinates calculation. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results are compared with MCNP and TORT, and satisfactory agreement is obtained, demonstrating the correctness of the program. (authors)
International Nuclear Information System (INIS)
Geometry navigation plays the most fundamental role in Monte Carlo particle transport simulation. It is mainly responsible for determining which geometry volume a particle is located in and for computing the distance to the volume boundary along the particle trajectory during each particle history. Geometry navigation directly affects the run-time performance of the Monte Carlo particle transport simulation, especially for large, complicated systems. Two geometry acceleration algorithms, an automatic neighbor-search algorithm and an oriented-bounding-box algorithm, are presented for improving geometry navigation performance. The algorithms have been implemented in the Super Monte Carlo Calculation Program for Nuclear and Radiation Process (SuperMC) version 2.0. The FDS-II and ITER benchmark models have been tested to highlight the efficiency gains that can be achieved by using the acceleration algorithms. The exact gains may be problem dependent, but testing results showed that the runtime of a Monte Carlo simulation can be reduced considerably, by 50%∼60%, with the proposed acceleration algorithms. (author)
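The bounding-box idea can be illustrated with the standard slab-method ray-box test: if a particle's trajectory misses a volume's bounding box, the expensive exact distance-to-boundary computation for that volume can be skipped entirely. The sketch below uses an axis-aligned box for simplicity and is a generic illustration, not SuperMC's actual implementation:

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method test of a ray against an axis-aligned bounding box.
    Returns (hit, t_near); a miss means the enclosed volume's exact
    surface-distance computation can be skipped for this track."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:          # ray parallel to this slab pair
            if o < lo or o > hi:
                return False, None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far or t_far < 0.0:
            return False, None
    return True, max(t_near, 0.0)

print(ray_box_intersect((0, 0, 0), (1, 0, 0), (2, -1, -1), (3, 1, 1)))  # (True, 2.0)
```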
Miller, A C; Blakely, W F; Livengood, D; Whittaker, T; Xu, J.; Ejnik, J W; Hamilton, M. M.; Parlette, E; John, T S; Gerstenberg, H M; Hsu, H
1998-01-01
Depleted uranium (DU) is a dense heavy metal used primarily in military applications. Although the health effects of occupational uranium exposure are well known, limited data exist regarding the long-term health effects of internalized DU in humans. We established an in vitro cellular model to study DU exposure. Microdosimetric assessment, determined using a Monte Carlo computer simulation based on measured intracellular and extracellular uranium levels, showed that few (0.0014%) cell nuclei...
The probability distribution of the predicted CFM-induced ozone depletion. [Chlorofluoromethane
Ehhalt, D. H.; Chang, J. S.; Bulter, D. M.
1979-01-01
It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
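The Monte Carlo technique referred to here amounts to sampling the uncertain rate coefficients and re-evaluating the model; when many comparably sized contributions add up, the central limit theorem makes the output distribution close to normal. A toy sketch with made-up numbers (not the paper's chemistry):

```python
import random, statistics

def sample_depletion(rng):
    """Toy response: a predicted ozone-column change built from a sum of
    terms, each scaled by an uncertain rate coefficient (lognormal, as
    rate constants are commonly treated). Entirely illustrative numbers."""
    k = [rng.lognormvariate(0.0, 0.15) for _ in range(8)]
    return -0.02 * sum(k)            # fractional column change

rng = random.Random(1)
samples = [sample_depletion(rng) for _ in range(20000)]
mu, sigma = statistics.mean(samples), statistics.stdev(samples)
within = sum(abs(s - mu) < sigma for s in samples) / len(samples)
print(round(within, 2))   # close to 0.68, as expected for a normal distribution
```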
Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations
Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank
2008-05-01
Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in extracted effective atomic numbers Z and the relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on parametrization of the linear attenuation coefficient for 17 tissue-equivalent inserts inside a solid water phantom was done. Scans of the phantom were acquired at 100 kVp and 140 kVp from which Z and ρe values of each insert were derived. The mean errors on Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. Comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers a significantly higher accuracy compared to the conventional single-energy segmentation.
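The parametrization-based extraction described above can be sketched as follows: writing the linear attenuation coefficient as μ(E) = ρe·(a(E) + b(E)·Z^m), the two scan energies give two equations whose ratio eliminates ρe, yielding Z and then ρe. The coefficients a, b and exponent m below are hypothetical placeholders, not the fitted values of the study:

```python
# Minimal sketch of dual-energy (Z, rho_e) extraction from the
# parametrization mu(E) = rho_e * (a(E) + b(E) * Z**m), where the
# photoelectric term scales roughly as Z**m with m ~= 3.3.
# A and B are made-up per-energy coefficients for illustration only.
M = 3.3
A = {"low": 0.20, "high": 0.18}   # Compton-like term (hypothetical)
B = {"low": 4e-4, "high": 1e-4}   # photoelectric-like term (hypothetical)

def extract(mu_low, mu_high):
    """Solve the two-energy system for effective Z, then rho_e.
    The ratio mu_low/mu_high eliminates rho_e."""
    r = mu_low / mu_high
    z_m = (r * A["high"] - A["low"]) / (B["low"] - r * B["high"])
    z = z_m ** (1.0 / M)
    rho_e = mu_low / (A["low"] + B["low"] * z_m)
    return z, rho_e

# round-trip check with a water-like voxel (Z ~= 7.4, rho_e = 1.0)
z_true, rho_true = 7.4, 1.0
mu_l = rho_true * (A["low"] + B["low"] * z_true**M)
mu_h = rho_true * (A["high"] + B["high"] * z_true**M)
z, rho = extract(mu_l, mu_h)
print(round(z, 2), round(rho, 2))  # 7.4 1.0
```

The round-trip recovers the assumed (Z, ρe), illustrating why the (HU; ρ, Z) assignment can resolve media that a single (HU; ρ) curve confuses.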
Directory of Open Access Journals (Sweden)
Jimin Liang
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging to simulate the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of the lens system is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.
2007-07-01
Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found to be in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
International Nuclear Information System (INIS)
An accurate dose calculation in phantom and patient geometries requires an accurate description of the radiation source. Errors in the radiation source description are propagated through the dose calculation. With the emergence of linear accelerators whose dosimetric characteristics are similar to within measurement uncertainty, the same radiation source description can be used as the input to dose calculation for treatment planning at many institutions with the same linear accelerator model. Our goal in the current research was to determine the initial electron fluence above the linear accelerator target for such an accelerator to allow a dose calculation in water to within 1% or 1 mm of the measured data supplied by the manufacturer. The method used for both the radiation source description and the patient transport was Monte Carlo. The linac geometry was input into the Monte Carlo code using the accelerator's manufacturer's specifications. Assumptions about the initial electron source above the target were made based on previous studies. The free parameters derived for the calculations were the mean energy and radial Gaussian width of the initial electron fluence and the target density. A combination of the free parameters yielded an initial electron fluence that, when transported through the linear accelerator and into the phantom, allowed a dose-calculation agreement to the experimental ion chamber data to within the specified criteria at both 6 and 18 MV nominal beam energies, except near the surface, particularly for the 18 MV beam. To save time during Monte Carlo treatment planning, the initial electron fluence was transported through part of the treatment head to a plane between the monitor chambers and the jaws and saved as phase-space files. These files are used for clinical Monte Carlo-based treatment planning and are freely available from the authors
Ozone depletion by hydrofluorocarbons
Hurwitz, Margaret M.; Fleming, Eric L.; Newman, Paul A.; Li, Feng; Mlawer, Eli; Cady-Pereira, Karen; Bailey, Roshelle
2015-10-01
Atmospheric concentrations of hydrofluorocarbons (HFCs) are projected to increase considerably in the coming decades. Chemistry climate model simulations forced by current projections show that HFCs will impact the global atmosphere increasingly through 2050. As strong radiative forcers, HFCs increase tropospheric and stratospheric temperatures, thereby enhancing ozone-destroying catalytic cycles and modifying the atmospheric circulation. These changes lead to a weak depletion of stratospheric ozone. Simulations with the NASA Goddard Space Flight Center 2-D model show that HFC-125 is the most important contributor to HFC-related atmospheric change in 2050; its effects are comparable to the combined impacts of HFC-23, HFC-32, HFC-134a, and HFC-143a. Incorporating the interactions between chemistry, radiation, and dynamics, ozone depletion potentials (ODPs) for HFCs range from 0.39 × 10⁻³ to 30.0 × 10⁻³, approximately 100 times larger than previous ODP estimates which were based solely on chemical effects.
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures. PMID:26002059
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
International Nuclear Information System (INIS)
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.
2015-10-01
Radiative characteristics of the depleted uranium bomb and its protection
International Nuclear Information System (INIS)
Building on the development history of depleted uranium bombs described in the first part, their radiative characteristics and mechanisms are analyzed in detail, followed by a deeper discussion of protection against depleted uranium bombs
Monte Carlo Capabilities of the SCALE Code System
Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.
2014-06-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Development of a hybrid multi-scale phantom for Monte-Carlo based internal dosimetry
International Nuclear Information System (INIS)
Full text of publication follows. Aim: In recent years several phantoms have been developed for radiopharmaceutical dosimetry in clinical and preclinical settings. Voxel-based models (Zubal, Max/Fax, ICRP110) were developed to reach a level of realism that could not be achieved by mathematical models. In turn, 'hybrid' models (XCAT, MOBY/ROBY, Mash/Fash) allow a further degree of versatility by offering the possibility to finely tune each model according to various parameters. However, even 'hybrid' models require the generation of a voxel version for Monte-Carlo modeling of radiation transport. Since absorbed-dose simulation time is strictly related to the spatial sampling of the geometry, a compromise must be made between phantom realism and simulation speed. This trade-off leads on one side to an overestimation of the size of small radiosensitive structures such as the skin or hollow organs' walls, and on the other to an unnecessarily detailed voxelization of large, homogeneous structures. The aim of this work is to develop a hybrid multi-resolution phantom model for Geant4 and Gate, to better characterize energy deposition in small structures while preserving reasonable computation times. Materials and Methods: We have developed a pipeline for the conversion of preexisting phantoms into a multi-scale Geant4 model. Meshes of each organ are created from raw binary images of a phantom and then voxelized to the smallest spatial sampling required by the user. The user can then decide to re-sample the internal part of each organ, while leaving a layer of the smallest voxels at the edge of the organ. In this way, the realistic shape of the organ is maintained while the voxel count in the inner part is reduced. For hollow organs, the wall is always modeled using the smallest voxel sampling. This approach allows choosing a different voxel resolution for each organ according to the specific application. Results: Preliminary results show that it is possible to
Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations
International Nuclear Information System (INIS)
Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses as well as to measure computation speed; the values were then compared with those for the original polygonal surface phantom. It was found that using the tetrahedral mesh phantom significantly improved the computation speed, by factors of between 150 and 832 for all of the particles and simulated energies other than low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively). (paper)
International Nuclear Information System (INIS)
A method for the calculation of transit doses in HDR brachytherapy based on Monte Carlo simulations is presented. The transit doses resulting from a linear implant with seven dwell positions are simulated by performing calculations at all positions in which the moving 192Ir source, instantaneously, had its geometrical centre located exactly between two adjacent dwell positions. Discrete step sizes of 0.25 cm were used to calculate the dose rates and the total transit dose at each of the calculation points evaluated. By comparing this method with the results obtained from Sievert integrals, we observed dose calculation errors ranging from 32% down to 21% for the examples considered. The errors could be much higher for longer treatment lengths, where contributions from points near the longitudinal axis of the source become more important. To date, the most accurate method of calculating doses in radiotherapy is Monte Carlo simulation, but the long computational times associated with it render its use in treatment planning impracticable. The Sievert integral algorithms, on the other hand, are simple, versatile and very easy to use, but their accuracy has repeatedly been called into question for low-energy isotopes like iridium. We therefore advocate a modification of the Sievert integral algorithms by superimposing the output from Monte Carlo simulations on the Sievert integrals when dealing with low-energy isotopes. In this way, we would combine accuracy, simplicity and reasonable computational times (author)
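The midpoint scheme described above can be sketched as follows, with a bare inverse-square kernel standing in for a full TG-43 dose-rate model of the 192Ir source (illustrative only; real calculations use the Monte Carlo derived dose-rate tables):

```python
import math

def transit_dose(dwells, point, speed_cm_s, strength):
    """Transit-dose sketch per the described scheme: the moving source is
    evaluated with its centre midway between adjacent dwell positions,
    and each segment contributes dose_rate * (segment length / speed).
    The inverse-square kernel is a placeholder, not a TG-43 model."""
    dose = 0.0
    for a, b in zip(dwells, dwells[1:]):
        mid = tuple((u + v) / 2.0 for u, v in zip(a, b))
        r2 = sum((u - v) ** 2 for u, v in zip(mid, point))
        seg = math.dist(a, b)                 # segment length in cm
        dose += strength / r2 * (seg / speed_cm_s)
    return dose

# seven dwell positions 0.25 cm apart along z, calculation point 1 cm away
dwells = [(0.0, 0.0, 0.25 * i) for i in range(7)]
print(transit_dose(dwells, (1.0, 0.0, 0.75), 50.0, 1.0) > 0.0)  # True
```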
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
International Nuclear Information System (INIS)
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ∼40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. (paper)
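The continuous movement/deformation step can be illustrated by moving each tetrahedral vertex along its deformation-vector-field (DVF) entry between two 4D CT phases. Linear interpolation is an assumption here for illustration, not necessarily the paper's interpolant:

```python
def interpolate_vertices(v_ref, dvf, t):
    """Continuous deformation of a tetrahedral patient model between two
    4D-CT phases: each vertex moves along its DVF entry,
    v(t) = v_ref + t * dvf, for phase fraction t in [0, 1].
    Linear interpolation is an illustrative assumption."""
    return [tuple(p + t * d for p, d in zip(vertex, vec))
            for vertex, vec in zip(v_ref, dvf)]

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
dvf = [(0.0, 0.0, 2.0), (0.0, 0.0, 2.0)]      # both vertices shift 2 cm in z
print(interpolate_vertices(verts, dvf, 0.5))  # [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
```

Because the mesh connectivity is fixed and only vertex positions change, particles can be transported directly in the deforming geometry rather than accumulating doses over multiple static voxel phantoms.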
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of the imitation of human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
International Nuclear Information System (INIS)
Objective: To recalculate, using the Monte Carlo method, the IMRT dose distributions from four TPS in order to provide a platform for independent comparison and evaluation of plan quality. These results will help make a clinical decision as to which TPS will be used for prostate IMRT planning. Methods: Eleven prostate cancer cases were planned with the Corvus, Xio, Pinnacle and Eclipse TPS. The plans were recalculated by Monte Carlo using the leaf sequences and MUs for the individual plans. Dose-volume histograms and isodose distributions were compared. Other quantities such as Dmin (the minimum dose received by 99% of the CTV/PTV), Dmax (the maximum dose received by 1% of the CTV/PTV), V110%, V105%, V95% (the volume of the CTV/PTV receiving 110%, 105%, 95% of the prescription dose), the volume of rectum and bladder receiving >65 Gy and >40 Gy, and the volume of femur receiving >50 Gy were evaluated. Total segments and MUs were also compared. Results: The Monte Carlo results agreed with the dose distributions from the TPS to within 3%/3 mm. The Xio, Pinnacle and Eclipse plans show less target dose heterogeneity and lower V65 and V40 for the rectum and bladder compared to the Corvus plans. The PTV Dmin is about 2 Gy lower for the Xio plans than the others, while the Corvus plans have slightly lower femoral head V50 (0.03% and 0.58%) than the others. The Corvus plans require significantly more segments (187.8) and MUs (1264.7) to deliver, while the Pinnacle plans require the fewest segments (82.4) and MUs (703.6). Conclusions: We have tested an independent Monte Carlo dose calculation system for dose reconstruction and plan evaluation. This system provides a platform for the fair comparison and evaluation of treatment plans to facilitate clinical decision making in selecting a TPS and beam delivery system for particular treatment sites. (authors)
Monte Carlo tests of the Rasch model based on scalability coefficients
DEFF Research Database (Denmark)
Christensen, Karl Bang; Kreiner, Svend
that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example.
Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo
Qin, Junsong; Liu, Bingyi; Niu, Dongxiao
By analyzing the influence factors of power grid investment capacity, an investment capacity analysis model was built with depreciation cost, sales price, sales quantity, net profit, financing and the GDP of the secondary industry as variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor was obtained. Finally, the uncertainty analysis results for grid investment capacity were obtained by Monte Carlo simulation.
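The propagation step can be sketched as follows; the distributions, units and the additive capacity model below are illustrative assumptions, not the distributions fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical influence-factor distributions, standing in for the
# Kolmogorov-Smirnov-tested fits of the study; units are illustrative.
net_profit   = rng.normal(50.0, 8.0, n)   # profit available for investment
financing    = rng.normal(30.0, 5.0, n)   # external financing
depreciation = rng.normal(20.0, 3.0, n)   # recovered depreciation cost

# Toy additive capacity model (the real model links more factors).
capacity = net_profit + financing + depreciation

lo, hi = np.percentile(capacity, [5, 95])  # 90% uncertainty interval
```

Each Monte Carlo draw combines one sample per factor, so the percentiles of `capacity` summarize the propagated uncertainty.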
A Monte Carlo method based on antithetic variates for network reliability computations
El Khadiri, Mohamed; Rubino, Gerardo
1992-01-01
The exact evaluation of the usual reliability measures of communication networks is seriously limited by the excessive computational time usually needed to obtain them. In the general case, the computation of almost all the interesting reliability metrics is an NP-hard problem. An alternative approach is to estimate them by means of Monte Carlo simulation, which makes it possible to deal with larger models than those that can be evaluated exactly. In this paper, we propose an algorithm much more per...
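A minimal illustration of the antithetic-variates idea on a one-dimensional integral (the network-reliability estimator itself is more involved; the function names here are hypothetical):

```python
import random

def antithetic_mc(f, n, rng):
    """Estimate the integral of f over [0, 1] with antithetic variates:
    pair each uniform draw u with its partner 1 - u.  For a monotone
    integrand the pair is negatively correlated, which lowers the
    variance at the same number of f-evaluations."""
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / n

f = lambda u: u * u            # integral over [0, 1] is 1/3
rng = random.Random(42)
est = antithetic_mc(f, 10_000, rng)
```

For network reliability, `f` would be an indicator of system failure given sampled link states, but the pairing principle is the same.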
Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system
International Nuclear Information System (INIS)
The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and can therefore be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed, allowing for easy setup of the simulation parameters and visualization of the results. The EGSnrc Monte Carlo code package was used for the Monte Carlo simulations, and the geometry was built with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed good agreement, within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes, achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the CT dataset used, tube potential, filter material/thickness and applicator size.
Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system
Energy Technology Data Exchange (ETDEWEB)
Penchev, Petar; Maeder, Ulf; Fiebich, Martin [IMPS University of Applied Sciences, Giessen (Germany). Inst. of Medical Physics and Radiation Protection; Zink, Klemens [IMPS University of Applied Sciences, Giessen (Germany). Inst. of Medical Physics and Radiation Protection; University Hospital Marburg (Germany). Dept. of Radiotherapy and Oncology
2015-07-01
The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and can therefore be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed, allowing for easy setup of the simulation parameters and visualization of the results. The EGSnrc Monte Carlo code package was used for the Monte Carlo simulations, and the geometry was built with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed good agreement, within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes, achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the CT dataset used, tube potential, filter material/thickness and applicator size.
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Directory of Open Access Journals (Sweden)
Iraj Jabbari
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configurations and patient information were implemented well in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% over all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that it could be used very efficiently for additional assessment of complicated plans such as IMRT plans.
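For reference, a gamma passing rate such as the 98.5% quoted above is computed by minimizing a combined dose-difference/distance-to-agreement criterion over the evaluated distribution; a simplified one-dimensional sketch (not the MCTPV implementation) is:

```python
import numpy as np

def gamma_index_1d(x, d_ref, d_eval, dd=0.03, dta=3.0):
    """1D global gamma index (3%/3 mm by default): for each reference
    point, minimize the combined dose-difference / distance metric over
    all evaluated points; gamma <= 1 counts as a pass."""
    d_max = d_ref.max()
    gam = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dose_term = ((d_eval - di) / (dd * d_max)) ** 2
        dist_term = ((x - xi) / dta) ** 2
        gam[i] = np.sqrt((dose_term + dist_term).min())
    return gam

x = np.linspace(0.0, 100.0, 101)    # positions in mm
d_ref = np.exp(-x / 50.0)           # toy reference depth dose
d_eval = d_ref * 1.01               # evaluated dose, 1% high everywhere
gamma = gamma_index_1d(x, d_ref, d_eval)
passing_rate = 100.0 * np.mean(gamma <= 1.0)
```

A uniform 1% dose error passes the 3%/3 mm criterion everywhere, as the test distribution shows.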
Depleted uranium management alternatives
International Nuclear Information System (INIS)
This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication into shielding for spent nuclear fuel containers. The results will be used to compare these costs with those of other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life cycle of 27 years, through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space-file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and residing in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2014-05-01
Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. a small number of soil layers or simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
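The GLUE procedure described above (random parameter draws, a likelihood threshold separating behavioural from non-behavioural parameter sets) can be sketched on a toy one-parameter model; the decay model, prior range and threshold below are illustrative assumptions, not the coupled CMF/PMF setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy "model": soil moisture decays with a single rate parameter k.
t = np.arange(0.0, 10.0, 0.5)
def model(k):
    return 0.4 * np.exp(-k * t)

obs = model(0.3) + rng.normal(0.0, 0.005, t.size)  # synthetic observations

# GLUE: draw parameters from a uniform prior, keep the 'behavioural'
# sets whose likelihood measure (here NSE) exceeds a threshold.
ks = rng.uniform(0.05, 1.0, 5000)
scores = np.array([nse(obs, model(k)) for k in ks])
behavioural = ks[scores > 0.9]

k_band = np.percentile(behavioural, [5, 95])  # parameter uncertainty band
```

The spread of the behavioural sets, rather than a single best fit, expresses the prediction uncertainty.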
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
International Nuclear Information System (INIS)
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at the
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Generation of scintigraphic images in a virtual dosimetry trial based on Monte Carlo modelling
International Nuclear Information System (INIS)
Aim: The purpose of dosimetry calculations in therapeutic nuclear medicine is to maximize tumour absorbed dose while minimizing normal tissue toxicities. However, a wide heterogeneity of dosimetric approaches is observed: there is no standardized dosimetric protocol to date. The DosiTest project (www.dositest.com) intends to identify critical steps in the dosimetry chain by implementing clinical dosimetry in different Nuclear Medicine departments, on scintigraphic images generated by Monte Carlo simulation from the same virtual patient. This study aims at presenting the different steps contributing to image generation, following the imaging protocol of a given participating centre, Milan's European Institute of Oncology (IEO). Material and methods: The chosen clinical application is that of 111In-pentetreotide (Octreoscan™). Pharmacokinetic data from the literature are used to derive a compartmental model. The kinetic rates between 6 compartments (liver, spleen, kidneys, blood, urine, remainder of body) were obtained from WinSaam [3]: the activity in each compartment is known at any time point. The TestDose [1] software (the computing architecture of DosiTest) implements the NURBS-based phantom NCAT-WB [2] to generate anatomical data for the virtual patient. The IEO gamma-camera was modelled with GATE [4] v6.2. Scintigraphic images were simulated for each compartment and the resulting projections were weighted by the respective pharmacokinetics of each compartment. The final step consisted in aggregating the compartments to generate the resulting image. Results: Following IEO's imaging protocol, planar and tomographic image simulations were generated at various time points. Computation times (on a 480 virtual core computing cluster) for 'step and shoot' whole body simulations (5 steps/time point) with acceptable statistics were: 10 days for extra-vascular fluid, 28 h for blood, 12 h for liver, 7 h for kidneys, and 1-2 h for
Optimization of Depletion Modeling and Simulation for the High Flux Isotope Reactor
Energy Technology Data Exchange (ETDEWEB)
Betzler, Benjamin R [ORNL; Ade, Brian J [ORNL; Chandler, David [ORNL; Ilas, Germina [ORNL; Sunny, Eva E [ORNL
2015-01-01
Monte Carlo based depletion tools used for the high-fidelity modeling and simulation of the High Flux Isotope Reactor (HFIR) come at a great computational cost; finding sufficient approximations is necessary to make the use of these tools feasible. The optimization of the neutronics and depletion model for the HFIR is based on two factors: (i) the explicit representation of the involute fuel plates with sets of polyhedra and (ii) the treatment of depletion mixtures and control element position during depletion calculations. A very fine representation (i.e., more polyhedra in the involute plate approximation) does not significantly improve simulation accuracy. The recommended representation closely represents the physical plates and ensures sufficient fidelity in regions with high flux gradients. Including the fissile targets in the central flux trap of the reactor as depletion mixtures has the greatest effect on the calculated cycle length, while localized effects (e.g., the burnup of specific isotopes or the power distribution evolution over the cycle) are more noticeable consequences of including a critical control element search or depleting burnable absorbers outside the fuel region.
International Nuclear Information System (INIS)
This paper presents an unstructured mesh based multi-physics interface implemented in the Serpent 2 Monte Carlo code, for the purpose of coupling the neutronics solution to component-scale thermal hydraulics calculations, such as computational fluid dynamics (CFD). The work continues the development of a multi-physics coupling scheme, which relies on the separation of state-point information from the geometry input, and the capability to handle temperature and density distributions by a rejection sampling algorithm. The new interface type is demonstrated by a simplified molten-salt reactor test case, using a thermal hydraulics solution provided by the CFD solver in OpenFOAM. (author)
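The rejection-sampling treatment of continuous temperature and density distributions is in the spirit of delta-tracking against a majorant cross section; a minimal sketch (not Serpent's implementation) with a constant-cross-section sanity check is:

```python
import math
import random

def woodcock_track(sigma_of_x, sigma_maj, rng):
    """Sample a free-flight distance through a medium whose total cross
    section sigma_of_x(x) varies continuously, using delta-tracking:
    tentative collision sites are drawn from the majorant sigma_maj and
    accepted with probability sigma(x) / sigma_maj; rejected collisions
    are 'virtual' and the particle keeps flying."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_maj
        if rng.random() < sigma_of_x(x) / sigma_maj:
            return x

rng = random.Random(7)
# Constant cross section as a sanity check: delta-tracking must then
# reproduce the analytic exponential with mean free path 1 / sigma.
sigma = 2.0
samples = [woodcock_track(lambda x: sigma, sigma_maj=3.0, rng=rng)
           for _ in range(20_000)]
mean_path = sum(samples) / len(samples)
```

The appeal of this scheme is that the geometry routine never needs to resolve where the density field changes, which is what lets the state-point data stay separate from the geometry input.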
Sampling-Based Nuclear Data Uncertainty Quantification for Continuous Energy Monte Carlo Codes
Zhu, Ting
2015-01-01
The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. The methodology developed during this PhD research is fundamentally ...
Random vibration analysis of switching apparatus based on Monte Carlo method
Institute of Scientific and Technical Information of China (English)
ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin
2007-01-01
The performance of switching apparatus containing mechanical contacts in a vibration environment is an important element in judging the apparatus's reliability. A piecewise-linear two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated using Monte Carlo simulation in Matlab/Simulink. Simulation showed that the spectral content and statistical characteristics of the contact force agreed closely with reality. The random vibration behaviour of the contact system was solved using time (numerical) domain simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
Microlens assembly error analysis for light field camera based on Monte Carlo method
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images with the coupling distance error, movement error and rotation error that can appear during microlens installation. By examining these images, sub-aperture images and refocused images, we found that the images present different degrees of blurring and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured images and other distortions that result in unclear refocused images.
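Monte Carlo tolerance analysis of this kind draws assembly errors from assumed distributions and propagates them through a model of the optics; the error distributions and the first-order image-shift model below are illustrative assumptions, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Hypothetical assembly-error distributions for one microlens
# (translations in um, tilt in degrees); values are illustrative only.
dx   = rng.normal(0.0, 1.0, n)   # lateral movement error
dz   = rng.normal(0.0, 2.0, n)   # coupling-distance error
tilt = rng.normal(0.0, 0.1, n)   # rotation error

f = 500.0                        # assumed microlens focal length, um

# Toy first-order model: lateral shift of the spot on the sensor from
# decentration and tilt; defocus blur from the coupling-distance error.
spot_shift = dx + f * np.tan(np.deg2rad(tilt))
blur_radius = np.abs(dz) / 10.0  # toy scaling of defocus blur

p95 = np.percentile(np.abs(spot_shift), 95)  # tolerance-band summary
```

The 95th-percentile shift gives a single tolerance number that can be compared against the pixel pitch of the sensor.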
Monte Carlo calculations for design of An accelerator based PGNAA facility
International Nuclear Information System (INIS)
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the MCNP4B2 code. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the material sample. Results of this simulation are presented. (author)
Monte Carlo calculations for design of An accelerator based PGNAA facility
Energy Technology Data Exchange (ETDEWEB)
Nagadi, M.M.; Naqvi, A.A. [King Fahd University of Petroleum and Minerals, Center for Applied Physical Sciences, Dhahran (Saudi Arabia); Rehman, Khateeb-ur; Kidwai, S. [King Fahd University of Petroleum and Minerals, Department of Physics, Dhahran (Saudi Arabia)
2002-08-01
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the MCNP4B2 code. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the material sample. Results of this simulation are presented. (author)
Improved radiochemical assay analyses using TRITON depletion sequences in SCALE
International Nuclear Information System (INIS)
With the release of TRITON in SCALE 5.0, Oak Ridge National Laboratory has made available a rigorous two-dimensional (2D) depletion sequence based on the arbitrary-geometry 2D discrete ordinates transport solver NEWT. TRITON has recently been further enhanced by the addition of depletion sequences that use KENO V.a and KENO-VI for three-dimensional (3D) transport solutions. The Monte Carlo-based depletion sequences add stochastic uncertainty issues to the solution, but also provide a means to perform direct 3D depletion that can capture the effect of leakage near the ends of fuel assemblies. Additionally, improved resonance processing capabilities are available to TRITON using CENTRM. CENTRM provides lattice-weighted cross sections using a continuous energy solution that directly treats the resonance overlap effects that become more important in high-burnup fuel. Beginning with the release of SCALE 5.1 in the summer of 2006, point data and fine-structure multigroup libraries derived from ENDF/B-VI evaluations will be available. The combination of rigorous 2D and 3D capabilities with improved cross section processing capabilities and data will provide a powerful and accurate means for the characterization of spent fuel, making it possible to analyze a broad range of assembly designs and assay data. This in turn will reduce biases and uncertainties associated with the prediction of spent fuel isotopic compositions. This paper describes advanced capabilities of the TRITON sequence for depletion calculations and the results of analyses performed to date for radiochemical assay data. (author)
Energy Technology Data Exchange (ETDEWEB)
Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
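The CSAF construction described above (a high percentile of the predicted population distribution divided by its median) can be illustrated with a toy two-pathway model; the lognormal spreads below are assumptions chosen for illustration, not the study's fitted kinetic constants:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Hypothetical lognormal inter-individual variation in the key kinetic
# constants: more bioactivation or less detoxification increases the
# formation of the ultimate metabolite.
bioactivation  = rng.lognormal(mean=0.0, sigma=0.4, size=n)
detoxification = rng.lognormal(mean=0.0, sigma=0.3, size=n)

# Relative metabolite formation across the simulated population.
formation = bioactivation / detoxification

# CSAF for intraspecies kinetic variation: percentile / median.
csaf_90 = np.percentile(formation, 90) / np.percentile(formation, 50)
csaf_99 = np.percentile(formation, 99) / np.percentile(formation, 50)
```

Dividing by the median normalizes out the central tendency, so the CSAF captures only the population spread, which is then compared against the default kinetic uncertainty factor of 3.16.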
International Nuclear Information System (INIS)
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
International Nuclear Information System (INIS)
A common approach to implementing the Monte Carlo method for the calculation of brachytherapy radiation dose deposition is to use a phase space file containing information on particles emitted from a brachytherapy source. However, loading the phase space file during the dose calculation consumes a large amount of computer random access memory, imposing a higher requirement on computer hardware. In this study, we propose a method to parameterize the information (e.g., particle location, direction and energy) stored in the phase space file by using several probability distributions. This method was implemented for dose calculations of a commercial Ir-192 high dose rate source. Dose calculation accuracy of the parameterized source was compared to the results obtained using the full phase space file in a simple water phantom and in a clinical breast cancer case. The results showed the parameterized source, at a size of 200 kB, was as accurate as the source represented by the full 1.1 GB phase space file. By using the parameterized source representation, a compact Monte Carlo job can be designed, which allows an easy setup for parallel computing in brachytherapy planning. (paper)
Zhang, M.; Zou, W.; Chen, T.; Kim, L.; Khan, A.; Haffty, B.; Yue, N. J.
2014-01-01
A common approach to implementing the Monte Carlo method for the calculation of brachytherapy radiation dose deposition is to use a phase space file containing information on particles emitted from a brachytherapy source. However, loading the phase space file during the dose calculation consumes a large amount of computer random access memory, imposing a higher requirement on computer hardware. In this study, we propose a method to parameterize the information (e.g., particle location, direction and energy) stored in the phase space file by using several probability distributions. This method was implemented for dose calculations of a commercial Ir-192 high dose rate source. Dose calculation accuracy of the parameterized source was compared to the results obtained using the full phase space file in a simple water phantom and in a clinical breast cancer case. The results showed the parameterized source, at a size of 200 kB, was as accurate as the source represented by the full 1.1 GB phase space file. By using the parameterized source representation, a compact Monte Carlo job can be designed, which allows an easy setup for parallel computing in brachytherapy planning.
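The parameterization idea can be illustrated with a minimal sketch: a few kilobytes of distribution parameters replace gigabytes of particle-by-particle records. The Ir-192 line energies below are approximate literature values, while the intensities, source length, and isotropic angular model are illustrative assumptions, not the paper's fitted distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameterized source: a few dominant Ir-192 gamma lines with
# approximate relative intensities, an isotropic direction distribution,
# and a uniform position along the active source core
lines_keV   = np.array([295.96, 308.46, 316.51, 468.07, 604.41])
intensities = np.array([0.287, 0.300, 0.828, 0.476, 0.082])
probs = intensities / intensities.sum()

def sample_particles(n):
    """Sample (energy, direction, z-position) from the parameterized source."""
    energy = rng.choice(lines_keV, size=n, p=probs)
    mu  = rng.uniform(-1.0, 1.0, n)          # isotropic: uniform cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)   # uniform azimuth
    direction = np.stack([np.sqrt(1.0 - mu**2) * np.cos(phi),
                          np.sqrt(1.0 - mu**2) * np.sin(phi),
                          mu], axis=1)
    z = rng.uniform(-0.18, 0.18, n)          # cm, along a ~3.6 mm active core
    return energy, direction, z

e, d, z = sample_particles(10_000)
# Directions are unit vectors; a few kB of parameters replace the GB file
print(np.allclose(np.linalg.norm(d, axis=1), 1.0))
```

Each sampled particle can then be handed to the transport kernel exactly as if it had been read from the phase space file.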
International Nuclear Information System (INIS)
The numerical simulation of the dynamics of fast ions coming from neutral beam injection (NBI) heating is an important task in fusion devices, since these particles are used as sources to heat and fuel the plasma and their uncontrolled losses can damage the walls of the reactor. This paper presents a new application that simulates these dynamics on the grid: FastDEP. FastDEP plugs together two Monte Carlo codes used in fusion science, namely FAFNER2 and ISDEP, and adds new functionality. Physically, FAFNER2 provides the fast ion initial state in the device while ISDEP calculates their evolution in time; as a result, the fast ion distribution function in the TJ-II stellarator has been estimated, but the code can be used on any other device. In this paper a comparison between the physics of the two NBI injectors in TJ-II is presented, together with the differences in fast ion confinement and driven momentum between the two cases. The simulations have been obtained using Montera, a framework developed for achieving efficient grid executions of Monte Carlo applications. (paper)
Jedrychowski, M.; Bacroix, B.; Salman, O. U.; Tarasiuk, J.; Wronski, S.
2015-08-01
The work focuses on the influence of moderate plastic deformation on the subsequent partial recrystallization of hexagonal zirconium (Zr702). In the considered case, strain induced boundary migration (SIBM) is assumed to be the dominating recrystallization mechanism. This hypothesis is analyzed and tested in detail using experimental EBSD-OIM data and Monte Carlo computer simulations. An EBSD investigation was performed on zirconium samples, which were channel-die compressed in two perpendicular directions: the normal direction (ND) and transverse direction (TD) of the initial material sheet. The maximal applied strain was below 17%. Samples were then briefly annealed in order to achieve a partly recrystallized state. The obtained EBSD data were analyzed in terms of texture evolution associated with a microstructural characterization, including kernel average misorientation (KAM), grain orientation spread (GOS), twinning, grain size distributions, and a description of grain boundary regions. In parallel, a Monte Carlo Potts model combined with the experimental microstructures was employed in order to verify two main recrystallization scenarios: SIBM driven growth from deformed sub-grains and classical growth of recrystallization nuclei. It is concluded that the simulation results provided by the SIBM model are in good agreement with the experimental data in terms of texture as well as microstructural evolution.
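A minimal zero-temperature Potts model of the kind used for such grain-growth simulations can be sketched as follows (a generic Monte Carlo Potts sketch, not the authors' SIBM implementation; lattice size, number of orientation states, and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q, steps = 64, 16, 150_000   # lattice size, orientation states, flip attempts

spins = rng.integers(0, Q, size=(N, N))

def unlike_neighbours(s, i, j, q):
    """Unlike nearest neighbours of site (i, j) if it held state q (periodic)."""
    nbrs = (s[(i - 1) % N, j], s[(i + 1) % N, j],
            s[i, (j - 1) % N], s[i, (j + 1) % N])
    return sum(1 for nb in nbrs if nb != q)

def boundary_length(s):
    """Total unlike-neighbour bonds, a proxy for grain-boundary area."""
    return int((s != np.roll(s, 1, 0)).sum() + (s != np.roll(s, 1, 1)).sum())

e0 = boundary_length(spins)
for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    new = rng.integers(0, Q)
    dE = unlike_neighbours(spins, i, j, new) - unlike_neighbours(spins, i, j, spins[i, j])
    if dE <= 0:                  # zero-temperature dynamics: never increase energy
        spins[i, j] = new
print(boundary_length(spins) < e0)  # boundary area shrinks as grains coarsen
```

The driving force here is pure boundary-energy reduction; an SIBM study would additionally bias site energies with the stored deformation energy taken from the EBSD maps.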
International Nuclear Information System (INIS)
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
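The core KDE idea, each event contributing to many tally points through a kernel, can be shown on a toy 1-D problem (an illustrative exponential collision-density model with a Gaussian kernel, not the mean-free-path kernel of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D problem: collision sites with exponentially distributed depths
# (an illustrative stand-in for a transport solution, not MCNP physics)
collisions = rng.exponential(scale=1.0, size=5_000)

xs = np.linspace(0.0, 5.0, 200)   # tally points
h = 0.15                          # kernel bandwidth

def kde_tally(events, points, h):
    """Each event scores at every tally point through a Gaussian kernel."""
    u = (points[None, :] - events[:, None]) / h
    return np.exp(-0.5 * u**2).sum(axis=0) / (events.size * h * np.sqrt(2.0 * np.pi))

est = kde_tally(collisions, xs, h)
true = np.exp(-xs)                # analytic collision density for the toy model
interior = xs > 0.5               # the plain KDE is biased at the x = 0 boundary
print(float(np.abs(est - true)[interior].max()))  # small away from the boundary
```

Unlike a histogram, the statistical error at each tally point does not grow as the tally-point spacing is refined, which is the variance advantage the abstract refers to.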
A Monte Carlo and continuum study of mechanical properties of nanoparticle based films
Energy Technology Data Exchange (ETDEWEB)
Ogunsola, Oluwatosin; Ehrman, Sheryl [University of Maryland, Department of Chemical and Biomolecular Engineering, Chemical and Nuclear Engineering Building (United States)], E-mail: sehrman@eng.umd.edu
2008-01-15
A combination Monte Carlo and equivalent-continuum simulation approach was used to investigate the structure-mechanical property relationships of titania nanoparticle deposits. Films of titania composed of nanoparticle aggregates were simulated using a Monte Carlo approach with diffusion-limited aggregation. Each aggregate in the simulation is fractal-like and random in structure. In the film structure, it is assumed that bond strength is a function of distance with two limiting values for the bond strengths: one representing the strong chemical bond between the particles at closest proximity in the aggregate and the other representing the weak van der Waals bond between particles from different aggregates. The Young's modulus of the film is estimated using an equivalent-continuum modeling approach, and the influences of particle diameter (5-100 nm) and aggregate size (3-400 particles per aggregate) on predicted Young's modulus are investigated. The Young's modulus is observed to increase with a decrease in primary particle size and is independent of the size of the aggregates deposited. Decreasing porosity resulted in an increase in Young's modulus as expected from results reported previously in the literature.
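The diffusion-limited aggregation step can be sketched with a standard on-lattice walker model (a generic DLA sketch; lattice size, launch radius, and kill radius are arbitrary choices, and sticking uses an 8-neighbour test):

```python
import numpy as np

rng = np.random.default_rng(3)
STEPS = ((-1, 0), (1, 0), (0, -1), (0, 1))

def grow_dla(n_particles, size=101):
    """Grow a 2-D on-lattice diffusion-limited aggregate (illustrative sketch)."""
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                            # seed particle
    r_max = 1.0
    for _ in range(n_particles - 1):
        stuck = False
        while not stuck:                         # relaunch escaped walkers
            ang = rng.uniform(0.0, 2.0 * np.pi)
            x = c + int(round((r_max + 2) * np.cos(ang)))
            y = c + int(round((r_max + 2) * np.sin(ang)))
            while True:
                dx, dy = STEPS[rng.integers(4)]
                x += dx
                y += dy
                rr = (x - c) ** 2 + (y - c) ** 2
                if rr > (r_max + 10) ** 2 or not (0 < x < size - 1 and 0 < y < size - 1):
                    break                        # walker escaped; relaunch it
                if grid[x - 1:x + 2, y - 1:y + 2].any():
                    grid[x, y] = True            # sticks next to the cluster
                    r_max = max(r_max, float(np.sqrt(rr)))
                    stuck = True
                    break
    return grid

cluster = grow_dla(100)
print(int(cluster.sum()))  # 100 occupied sites in a ramified aggregate
```

In the paper's pipeline, many such fractal-like aggregates would be deposited to build the film, after which the inter-particle bonds are translated into an equivalent continuum for the Young's modulus estimate.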
Energy Technology Data Exchange (ETDEWEB)
Burke, TImothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
International Nuclear Information System (INIS)
Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chamber, proportional counter and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined in order to obtain maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose
DEPLETED URANIUM TECHNICAL WORK
The Depleted Uranium Technical Work is designed to convey available information and knowledge about depleted uranium to EPA Remedial Project Managers, On-Scene Coordinators, contractors, and other Agency managers involved with the remediation of sites contaminated with this material...
Energy Technology Data Exchange (ETDEWEB)
Lamia, D., E-mail: debora.lamia@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Russo, G., E-mail: giorgio.russo@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Casarino, C.; Gagliano, L.; Candiano, G.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Labate, L. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Baffigi, F.; Fulgentini, L.; Giulietti, A.; Koester, P.; Palla, D. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); Gizzi, L.A. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Gilardi, M.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR, Segrate (Italy); University of Milano-Bicocca, Milano (Italy)
2015-06-21
We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage dose depth curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of Geant4 application comparing experimental data with the simulated ones. • Dosimetric characterization of the acceleration system.
International Nuclear Information System (INIS)
We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage dose depth curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of Geant4 application comparing experimental data with the simulated ones. • Dosimetric characterization of the acceleration system
Monte Carlo simulation of primary reactions on HPLUS based on pluto event generator
International Nuclear Information System (INIS)
Hadron Physics Lanzhou Spectrometer (HPLUS) is designed for the study of hadron production and decay from nucleon-nucleon interactions in the GeV region. The current configuration of HPLUS and the particle identification methods for its three polar angle regions are discussed. The Pluto event generator is applied to simulate the primary reactions on HPLUS, addressing the following four issues: the agreement in the pp elastic scattering angular distribution between Pluto samples and experimental data; the acceptance of charged K mesons in the strangeness production channels for the forward region of HPLUS; the dependence of the maximum photon energy and the minimum vertex angle of two photons on the polar angle; and the influence of different reconstruction methods on the mass spectrum of excited nucleon states with large resonant width. It is demonstrated that the Pluto event generator satisfies the requirements of Monte Carlo simulation for HPLUS. (authors)
Web-Based Parallel Monte Carlo Simulation Platform for Financial Computation
Institute of Scientific and Technical Information of China (English)
None
2006-01-01
Using Java, Java-enabled Web and object-oriented programming technologies, a framework is designed to quickly organize a multicomputer system on an Intranet for parallel Monte Carlo simulation. The high-performance computing environment is embedded in a Web server so it can be accessed easily. Adaptive parallelism and an eager scheduling algorithm are used to realize load balancing, parallel processing and system fault-tolerance. Independent-sequence pseudo-random number generator schemes are used to keep the parallel simulations valid. With three kinds of stock option pricing models as test instances, near-ideal speedup and accurate pricing results were obtained on the test bed. As a Web service, a high-performance financial derivative security-pricing platform has now been set up for training and study. The framework can also be used to develop other SPMD (single program, multiple data) applications. Robustness remains a major problem for further research.
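The key ingredient, an independent pseudo-random sequence per worker, can be sketched in a few lines. The example prices a European call by splitting the workload into chunks with non-overlapping streams; the chunks are evaluated in a loop here, whereas on the platform each would go to a separate machine, and all market parameters are illustrative:

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0   # illustrative market data
n_chunks, n_per_chunk = 8, 250_000

# Independent, non-overlapping streams per chunk via SeedSequence.spawn,
# the ingredient a parallel platform needs for reproducible results
streams = [np.random.default_rng(s) for s in np.random.SeedSequence(7).spawn(n_chunks)]

def price_chunk(rng, n):
    """Monte Carlo estimate of a European call price from one chunk."""
    z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

price = np.mean([price_chunk(g, n_per_chunk) for g in streams])
print(round(price, 2))   # should land close to the Black-Scholes value of about 8.02
```

Because the streams never overlap, the pooled estimate is statistically identical to a single long run, regardless of how the chunks are scheduled across workers.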
GPU-based Monte Carlo dust radiative transfer scheme applied to AGN
Heymann, Frank
2012-01-01
A three dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons (PAH). Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray-tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angle. We test the robustness of our approach against other radiative transfer codes. The SED and dust...
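Henyey-Greenstein scattering is typically sampled by inverting its cumulative distribution analytically; a minimal sketch of that standard inversion (not the authors' code) is:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_hg_mu(g, n):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase
    function with asymmetry parameter g, by analytic CDF inversion."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0         # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

g = 0.6
mu = sample_hg_mu(g, 1_000_000)
print(round(float(mu.mean()), 3))     # the mean cosine should converge to g
```

The sampled cosines lie in [-1, 1] by construction, and their mean converges to the asymmetry parameter g, which is the defining property of the phase function.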
PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry
International Nuclear Information System (INIS)
A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations with continuous changes of measurement conditions (and hence of the input parameters), such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small field profiles. As a comparison, a high resolution scan for narrow beams with no iterative process is presented. The model has been installed on a network of PCs without any resident software. The only requirements for these PCs are a small, temporary Linux partition on their hard disks and a network connection to our server PC. (orig.)
MONTE: An automated Monte Carlo based approach to nuclear magnetic resonance assignment of proteins
Energy Technology Data Exchange (ETDEWEB)
Hitchens, T. Kevin; Lukin, Jonathan A.; Zhan Yiping; McCallum, Scott A.; Rule, Gordon S. [Carnegie Mellon University, Department of Biological Sciences (United States)], E-mail: rule@andrew.cmu.edu
2003-01-15
A general-purpose Monte Carlo assignment program has been developed to aid in the assignment of NMR resonances from proteins. By virtue of its flexible data requirements the program is capable of obtaining assignments of both heavily deuterated and fully protonated proteins. A wide variety of source data, such as inter-residue scalar connectivity, inter-residue dipolar (NOE) connectivity, and residue specific information, can be utilized in the assignment process. The program can also use known assignments from one form of a protein to facilitate the assignment of another form of the protein. This attribute is useful for assigning protein-ligand complexes when the assignments of the unliganded protein are known. The program can also be used as an interactive research tool to assist in the choice of additional experimental data to facilitate completion of assignments. The assignment of a deuterated 45 kDa homodimeric Glutathione-S-transferase illustrates the principal features of the program.
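The Monte Carlo assignment idea can be sketched as simulated annealing over candidate assignments. The toy below matches observed spin systems to residues using a single predicted chemical shift per residue; the scoring function, annealing schedule, and data are all illustrative stand-ins, not MONTE's much richer connectivity-based scoring:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem: assign n observed spin systems to n residues by matching a
# single predicted chemical shift per residue (illustrative data)
n = 40
predicted = rng.normal(55.0, 3.0, n)            # predicted shift per residue
true_map = rng.permutation(n)                   # hidden correct assignment
observed = predicted[true_map] + rng.normal(0.0, 0.02, n)

def score(assign):
    """Total mismatch between observed shifts and their assigned residues."""
    return np.abs(observed - predicted[assign]).sum()

assign = rng.permutation(n)                     # random starting assignment
s0 = score(assign)
T = 5.0
for _ in range(40_000):                         # Metropolis swaps with annealing
    i, j = rng.integers(0, n, 2)
    trial = assign.copy()
    trial[i], trial[j] = trial[j], trial[i]
    dE = score(trial) - score(assign)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        assign = trial
    T = max(0.01, T * 0.9999)
print(score(assign) < 0.2 * s0)   # the search ends far below the random score
```

Swapping two assignments at a time and accepting uphill moves with Boltzmann probability is the same Metropolis machinery, only with a scoring function built from shifts alone rather than scalar and NOE connectivities.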
International Nuclear Information System (INIS)
measurements by 1.35%–5.31% (mean difference =−3.42%, SD = 1.09%).Conclusions: This work demonstrates the feasibility of using a measurement-based kV CBCT source model to facilitate dose calculations with Monte Carlo methods for both the radiographic and CBCT mode of operation. While this initial work validates simulations against measurements for simple geometries, future work will involve utilizing the source model to investigate kV CBCT dosimetry with more complex anthropomorphic phantoms and patient specific models
International Nuclear Information System (INIS)
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm
Shi, Ming; Saint-Martin, Jérôme; Bournel, Arnaud; Maher, Hassan; Renvoise, Michel; Dollfus, Philippe
2010-11-01
High-mobility III-V heterostructures are emerging and very promising materials likely to fulfil high-speed and low-power specifications for ambient intelligent applications. The main objective of this work is to theoretically explore the potential of MOSFETs based on III-V materials with low bandgap and high electron mobility. First, the charge control is studied in III-V MOS structures using a Schrödinger-Poisson solver. Electronic transport in III-V devices is then analyzed using a particle Monte Carlo device simulator. The external access resistances used in the calculations are carefully calibrated against experimental results. The performance of different structures of nanoscale MOS transistor based on III-V materials is evaluated and the quasi-ballistic character of electron transport is compared to that in Si transistors of the same gate length. PMID:21137856
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC method and the RMMC+MC method can efficiently reduce the computing time and variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
International Nuclear Information System (INIS)
Graphical abstract: - Highlights: • Continuous Energy Monte-Carlo burnup code. • Instabilities of depletion calculation in loosely coupled system. • Advanced step model for burnup calculations. • Xenon profile oscillation in thermal reactor. • Parametrical study of instabilities. - Abstract: In this paper we use the Continuous Energy Monte-Carlo tool to expose the problem of burnup instabilities occurring in 1D and 2D systems based on PWR geometry. The intensity of the power profile oscillations is studied as a function of geometry properties and time step length. We compare two step models for the depletion procedure: the classic staircase step model and the stochastic implicit Euler method, which belongs to the family of predictor-corrector schemes. In addition, we consider using a more accurate neutron source intensity than the beginning-of-step approximation. The required methodology was implemented in the MCB5 simulation code. Practical conclusions about depletion calculations were formulated and the efficiency of the advanced step model was confirmed
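The difference between the staircase scheme and a predictor-corrector scheme can be seen on a one-nuclide toy problem with flux feedback (illustrative constants and a made-up self-shielding law, not the MCB5 implementation; the corrector shown is the simple averaging variant rather than the stochastic implicit Euler method):

```python
import numpy as np

sigma = 5.0e-21      # cm^2, one-group absorption cross-section (illustrative)
phi0  = 2.0e14       # n/cm^2/s, unshielded flux
N0    = 1.0e21       # at/cm^3, initial absorber density

def flux(N):
    """Made-up self-shielding law: the flux rises as the absorber burns out."""
    return phi0 / (1.0 + 4.0 * N / N0)

def dep_step(N, phi_val, dt):
    """Exact one-nuclide depletion over dt at a frozen flux."""
    return N * np.exp(-sigma * phi_val * dt)

dt, n_coarse = 10 * 86400.0, 5           # five 10-day coarse steps

# Reference: very fine stepping, re-evaluating the flux every sub-step
N_ref = N0
for _ in range(100_000):
    N_ref = dep_step(N_ref, flux(N_ref), n_coarse * dt / 100_000)

# Classic staircase: flux frozen at the beginning of each coarse step
N_s = N0
for _ in range(n_coarse):
    N_s = dep_step(N_s, flux(N_s), dt)

# Predictor-corrector: average the predictor (beginning-of-step flux) and
# the corrector (flux re-evaluated at the predicted end-of-step composition)
N_pc = N0
for _ in range(n_coarse):
    N_pred = dep_step(N_pc, flux(N_pc), dt)
    N_corr = dep_step(N_pc, flux(N_pred), dt)
    N_pc = 0.5 * (N_pred + N_corr)

err = lambda N: abs(N - N_ref) / N_ref
print(err(N_pc) < err(N_s))   # the predictor-corrector tracks the reference better
```

The staircase scheme systematically lags the flux feedback, which is the same mechanism that feeds the xenon-driven oscillations studied in the paper.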
International Nuclear Information System (INIS)
Highlights: • 3-group cross sections are collapsed by WIMS and SN2; the core is calculated by CITATION. • Engineering adjustments are made to generate better few-group cross-sections. • Validation is made against JRR-3M measurements and Monte Carlo simulation. - Abstract: The control rod (CR) worth is a key parameter for the operation and utilization of research reactors (RRs). Computing CR worth is a challenge for a fully deterministic calculation methodology, covering both the few-group cross section generation and the core analysis. The purpose of this work is to describe our code system and its applicability for obtaining reliable CR worth through some engineering adjustments. Cross sections are collapsed into three energy groups by the WIMS and SN2 codes, while the core analysis is performed by CITATION. We use these codes for the design, construction, and operation of our research reactor CMRR (China Mianyang Research Reactor). However, due to the intrinsic deficiencies of diffusion theory and the homogenization approximation, the directly obtained results, such as CR worth and neutron flux distributions, are not satisfactory. Two simple adjustments are therefore made to generate the few-group cross-sections with the assistance of measurements and auxiliary Monte Carlo runs. The first step is to adjust the fuel cross sections by properly changing the mass of a non-fissile material, such as the mass of the two 0.4 mm Cd wires existing at both sides of each uranium plate, so that the CITATION core model yields a good eigenvalue when all CRs are completely extracted. The second step is to revise the shim absorber cross section of the CRs by adjusting the hafnium mass, so that the CITATION model reproduces the correct critical rod position. In this manuscript, the JRR-3M (Japan Research Reactor No. 3 Modified) reactor is employed as a demonstration. The final revised results are validated against the stochastic simulation and experimental measurement values, including the
Assessment of the depletion capability in MPACT
International Nuclear Information System (INIS)
The objective of this paper is to develop and demonstrate the depletion capability with pin-resolved transport using the MPACT code. The first section of the paper provides a description of the depletion methodology and the algorithm used to solve the depletion equations in MPACT. A separate depletion library for MPACT, based on the ORIGEN-S library, is used to provide the basic decay constants and fission yields, as well as the 3-group cross-sections used for the isotopes not contained in the MPACT multi-group library. The cross sections for the depletion transmutation matrix were collapsed using the transport flux solution in MPACT with either the 47-group HELIOS library based on ENDF/B-VI or a 56-group ORNL library based on ENDF/B-VII. The second section of this paper then describes the numerical verification of the depletion algorithm using two sets of benchmarks. The first is the JAERI LWR lattice benchmark, whose participants included most of the lattice depletion codes currently used in the international nuclear community; the second benchmark is based on data from spent fuel of the Takahama-3 reactor. The results show that MPACT is generally in good agreement with the results of the other benchmark participants as well as the experimental data. Finally, a full core 2D model of the CASL AMA benchmark was depleted based on the central plane of the Watts Bar reactor core, which demonstrates the whole core depletion capability of MPACT. (author)
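The transmutation-matrix solve at the heart of such depletion algorithms can be sketched for a short chain (illustrative one-group rates and a hand-rolled matrix exponential; codes like MPACT/ORIGEN-S handle thousands of nuclides with dedicated solvers):

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via scaling-and-squaring with a Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.linalg.norm(A, 1))))))
    B = A / (2 ** s)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

phi = 3.0e14                         # n/cm^2/s, one-group flux (illustrative)
sig = np.array([6.0e-24, 2.0e-24])   # capture cross-sections of nuclides 1, 2 (cm^2)
r = sig * phi                        # transmutation rates, 1/s

# Chain 1 -> 2 -> 3 (nuclide 3 is a sink); columns are "from", rows are "to"
A = np.array([[-r[0],  0.0,  0.0],
              [ r[0], -r[1], 0.0],
              [ 0.0,   r[1], 0.0]])

N0 = np.array([1.0e21, 0.0, 0.0])    # initial atom densities
t = 180 * 86400.0                    # a 180-day depletion step
N = expm(A * t) @ N0

print(np.isclose(N.sum(), N0.sum()))  # a capture-only chain conserves total atoms
```

Because the columns of the transmutation matrix sum to zero, the exponential solve conserves atoms exactly, and the first nuclide follows the analytic exp(-r t) decay, which provides two cheap sanity checks.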
International Nuclear Information System (INIS)
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)
Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues
International Nuclear Information System (INIS)
Water/medium stopping-power ratios, sw,m, have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 x 10 cm2. Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of sw,m calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, sw,m changes approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of sw,m, reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung
Testing planetary transit detection methods with grid-based Monte-Carlo simulations.
Bonomo, A. S.; Lanza, A. F.
The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertain tooth modification amount variations on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation processes to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
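The Monte Carlo plus response-surface propagation described above can be sketched as follows. The quadratic surface coefficients and the input standard deviations are invented for illustration; the point is only that a quadratic surface maps normal inputs to a non-normal (skewed) output:

```python
import random
import statistics

# Hypothetical fitted response surface: DTE fluctuation as a quadratic
# function of two modification amounts (coefficients are invented).
def dte_response(x1, x2):
    return 1.0 + 0.3 * x1 - 0.2 * x2 + 0.5 * x1 * x1 + 0.1 * x1 * x2

random.seed(0)
samples = []
for _ in range(20000):
    # Modification amounts drift normally around their design values.
    x1 = random.gauss(0.0, 0.2)
    x2 = random.gauss(0.0, 0.2)
    samples.append(dte_response(x1, x2))

mean = statistics.fmean(samples)
med = statistics.median(samples)
# The quadratic term skews the output distribution: mean > median,
# so the response is not normal even though the inputs are.
print(mean > med)
```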
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify clearly the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
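The least square Monte Carlo idea can be sketched as regressing a later payoff on an earlier state, so the conditional expected exposure comes from one regression instead of nested "inner" simulations. Everything below (the one-factor rate dynamics, the toy payoff, all parameters) is an invented stand-in, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, a, sigma, r0 = 20000, 0.1, 0.01, 0.02

# One-factor mean-reverting short rate, Euler steps of dt = 0.5 years.
dt = 0.5
r_half = r0 + a * (0.03 - r0) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
r_one = r_half + a * (0.03 - r_half) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Toy positive payoff at t = 1 depending on the rate then.
payoff = np.maximum(r_one - 0.02, 0.0) * 100.0

# Least-squares Monte Carlo: regress the t = 1 payoff on a polynomial
# in the t = 0.5 state to get the conditional expected exposure
# without generating inner scenarios.
X = np.vander(r_half, 3)                 # basis [r^2, r, 1]
beta, *_ = np.linalg.lstsq(X, payoff, rcond=None)
exposure_t_half = np.maximum(X @ beta, 0.0)

# CVA-style quantity: positive exposure weighted by a flat hazard-rate
# default probability over the period (numbers invented).
pd_period = 1 - np.exp(-0.02 * 0.5)
lgd = 0.6
cva = lgd * pd_period * exposure_t_half.mean()
print(cva > 0)
```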
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation; therefore, patient-specific CT-DICOM files were converted to the PHITS input format. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file is used for the MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
Monte Carlo based unit commitment procedures for the deregulated market environment
International Nuclear Information System (INIS)
The unit commitment problem, originally conceived in the framework of short-term operation of vertically integrated utilities, needs a thorough re-examination in the light of the ongoing transition towards the open electricity market environment. In this work the problem is re-formulated to adapt unit commitment to the viewpoint of a generation company (GENCO) which is no longer bound to satisfy its load, but is willing to maximize its profits. Moreover, with reference to the present-day situation in many countries, the presence of a GENCO (the former monopolist) which is in the position of exerting market power requires a careful analysis to be carried out considering the different perspectives of a price-taker and of a price-maker GENCO. Unit commitment is thus shown to lead to two distinct, yet closely related, problems. The unavoidable uncertainties in load profile and price behaviour over the time period of interest are also taken into account by means of a Monte Carlo simulation. Both the forecasted loads and prices are handled as random variables with a normal multivariate distribution. The correlation between the random input variables corresponding to successive hours of the day was considered by carrying out a statistical analysis of actual load and price data. The whole procedure was tested making use of reasonable approximations of the actual data of the thermal generation units available to some actual GENCOs operating in Italy. (author)
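Sampling correlated hourly loads as a multivariate normal, as described above, can be sketched with a Cholesky factorization of the covariance matrix. The mean loads, standard deviations, and the exponential-decay correlation between successive hours are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hourly mean loads (MW) and an AR(1)-style correlation
# between hours, standing in for the statistically estimated values.
hours = 4
mean_load = np.array([900.0, 850.0, 950.0, 1100.0])
std_load = 0.05 * mean_load
corr = np.fromfunction(lambda i, j: 0.9 ** abs(i - j), (hours, hours))

# Multivariate normal scenarios via the Cholesky factor of the covariance.
cov = np.outer(std_load, std_load) * corr
L = np.linalg.cholesky(cov)
scenarios = mean_load + rng.standard_normal((10000, hours)) @ L.T

# Neighbouring hours stay correlated at roughly the target level.
emp_corr = np.corrcoef(scenarios, rowvar=False)
print(abs(emp_corr[0, 1] - 0.9) < 0.02)
```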
Research of photon beam dose deposition kernel based on Monte Carlo method
International Nuclear Information System (INIS)
The Monte Carlo program BEAMnrc was used to simulate the 6 MV photon beam of a Siemens accelerator, and the BEAMdp program was used to analyse the energy spectrum distribution and mean energy from the phase-space data for different field sizes. Beam sources, one based on the energy spectrum and one mono-energetic, were then built, and the DOSXYZnrc program was used to calculate the dose deposition kernels at dmax in a standard water phantom for the different beam sources and to compare the resulting kernels. The results show that the dose difference using the energy spectrum source is small, with a maximum percentage dose discrepancy of 1.47%, whereas it is large using the mono-energy source, at 6.28%. The maximum dose difference between the kernels derived from the energy spectrum source and the mono-energy source of the same field is larger than 9%, up to 13.2%. Thus, dose deposition depends on photon energy, and using only a mono-energy source can lead to larger errors because of the spectral distribution of the accelerator beam. A more accurate method is to use the deposition kernel of the energy spectrum source. (authors)
Comparison of polynomial approximations to speed up planewave-based quantum Monte Carlo calculations
Parker, William D; Alfè, Dario; Hennig, Richard G; Wilkins, John W
2013-01-01
The computational cost of quantum Monte Carlo (QMC) calculations of realistic periodic systems depends strongly on the method of storing and evaluating the many-particle wave function. Previous work [A. J. Williamson et al., Phys. Rev. Lett. 87, 246406 (2001); D. Alfè and M. J. Gillan, Phys. Rev. B 70, 161101 (2004)] has demonstrated the reduction of the O(N^3) cost of evaluating the Slater determinant with planewaves to O(N^2) using localized basis functions. We compare four polynomial approximations as basis functions -- interpolating Lagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines, and basis-form (B-) splines (interpolating and smoothing). All these basis functions provide a similar speedup relative to the planewave basis. The pp-splines have eight times the memory requirement of the other methods. To test the accuracy of the basis functions, we apply them to the ground state structures of Si, Al, and MgO. The polynomial approximations differ in accuracy most strongly for MgO ...
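The localized-basis speedup can be illustrated with a 1D toy: a one-time tabulation turns an O(N_pw) planewave sum per evaluation point into an O(1) lookup. Linear interpolation stands in here for the spline bases of the paper, and the planewave coefficients are invented:

```python
import numpy as np

# Toy 1D "orbital" defined by planewave coefficients (invented).
n_pw = 64
G = np.arange(n_pw)                      # wavevectors
c = 1.0 / (1.0 + G**2)                   # smooth, decaying coefficients

def orbital_planewave(x):
    # O(n_pw) work per evaluation point.
    return np.sum(c * np.cos(G * x))

# One-time tabulation on a fine grid, then O(1)-per-point interpolation,
# mimicking the localized-basis evaluation described above.
grid = np.linspace(0.0, 2 * np.pi, 4097)
table = np.array([orbital_planewave(x) for x in grid])

def orbital_interp(x):
    return np.interp(x, grid, table)

x0 = 1.2345
exact = orbital_planewave(x0)
approx = orbital_interp(x0)
print(abs(exact - approx) < 1e-4)        # fine grid keeps the error small
```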
Accuracy assessment of a new Monte Carlo based burnup computer code
International Nuclear Information System (INIS)
Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code to code comparison using VVER-1000 LEU Benchmark Assembly. ► Differences from BM value were found to be ± 600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test for the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark”, at lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows performing straightforward and accurate calculation without having to use the calculated group fluxes to perform transmutation analysis in a separate code. ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of NJOY99 system. Code to code comparisons with the reported Nuclear OECD/NEA results are presented and analyzed.
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Markov chain Monte Carlo based analysis of post-translationally modified VDAC1 gating kinetics
Directory of Open Access Journals (Sweden)
Shivendra Tewari
2015-01-01
The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications are not understood. Herein, single channel currents of wild-type, nitrosated and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method identifies three distinct conducting states (open, half-open, and closed) of VDAC1 activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that the faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
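For a continuous-time Markov gating model like the three-state one above, the stationary distribution follows directly from the transition-rate matrix. The rates below are invented placeholders, not the fitted VDAC1 parameters:

```python
import numpy as np

# Hypothetical transition-rate matrix for three conducting states
# (open, half-open, closed); off-diagonal entries are rates out of a
# state, and each row sums to zero.
Q = np.array([[-0.5,  0.4,  0.1],
              [ 0.3, -0.6,  0.3],
              [ 0.1,  0.2, -0.3]])

# The stationary distribution pi solves pi Q = 0 with sum(pi) = 1;
# append the normalization row and solve in the least-squares sense.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(pi.sum(), 1.0), np.allclose(pi @ Q, 0.0, atol=1e-9))
```

Comparing the stationary vectors obtained from rate matrices fitted to modified and unmodified channels is exactly the kind of summary the abstract draws its conclusions from.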
International Nuclear Information System (INIS)
The paper considers the radiological and toxic impact of depleted uranium on human health. The radiological influence of depleted uranium is about 60% lower than that of natural uranium, owing to the removal of the short-lived isotopes uranium-234 and uranium-235 during enrichment. The formation of radioactive aerosols and their impact on humans are discussed. The use of depleted uranium weapons also has a chemical effect on intake, owing to a possible carcinogenic influence on the kidney. Uranium-236 is detected in depleted uranium material. The beta radiation arising from uranium-238 decay is considered; this effect is practically the same for depleted and natural uranium. The toxicity of depleted uranium as a heavy metal makes a considerable contribution to its effect on population health. The paper analyzes the risks of using depleted uranium weapons. There is international opposition to the use of such weapons: a resolution on the effects of the use of armaments and ammunitions containing depleted uranium was supported five times by the United Nations (the USA, United Kingdom, France and Israel did not support it), and a decision to ban depleted uranium weapons was supported by the European Parliament
Depleted Reactor Analysis With MCNP-4B
International Nuclear Information System (INIS)
Monte Carlo neutronics calculations are mostly done for fresh reactor cores. There is today an ongoing activity in the development of Monte Carlo plus burnup code systems, made possible by the fast gains in computer processor speeds. In this work we investigate the use of MCNP-4B for the calculation of a depleted core of the Soreq reactor (IRR-1). The number densities as a function of burnup were taken from WIMS-D/4 cell code calculations. This particular code coupling has been implemented before. The Monte Carlo code MCNP-4B calculates the coupled transport of neutrons and photons for complicated geometries. We have done neutronics calculations of the IRR-1 core with the WIMS and CITATION codes in the past. Also, we have developed an MCNP model of the IRR-1 standard fuel for a criticality safety calculation of a spent fuel storage pool
International Nuclear Information System (INIS)
Liquid Salt Cooled Reactors (LSCRs) are high temperature reactors, cooled by liquid salt, with a TRISO-particle based fuel in a solid form (stationary fuel elements or circulating fuel pebbles); this paper focuses on the former. In either case, due to the double heterogeneity, core physics analyses require different considerations with more complex approaches than LWR core physics calculations. Additional challenges appear when using the multi-group approach. In this paper we examine the use of SCALE6.1.1. Double heterogeneity may be accounted for through the Dancoff factor; however, SCALE6.1.1 does not provide an automated method to calculate Dancoff factors for fuel planks with TRISO fuel particles. Therefore, depletion with continuous-energy Monte Carlo transport (CE depletion) in SCALE6.2 beta was used to generate MC Dancoff factors for multi-group calculations. MCDancoff-corrected multi-group depletion agrees with the results for CE depletion within ±100 pcm, and within ±2σ. Producing MCDancoff factors for multi-group (MG) depletion calculations is necessary for LSCR analysis because CE depletion runtime and memory requirements are prohibitive for routine use. MG depletion with MCDancoff provides significantly shorter runtime and lower memory requirements while providing results of acceptable accuracy. (author)
Akyurek, Z.; Sürer, S.; Bolat, K.
2012-12-01
Snow cover is an important feature of mountainous regions. Depending on latitude, the higher altitudes are completely covered by snow for several months a year. Snow cover is also an important factor for optimum use of water in energy production, flood control, irrigation and reservoir operation optimization, as well as ski tourism. The snow cover depletion curve (SDC) is one of the important variables in snow hydrological applications, and these curves are very much required for snowmelt runoff modeling in a snow-fed catchment. This study aims to monitor the temporal changes in snow cover depletion in the Upper Euphrates basin for the period 2000-2011. Snow mapping was performed by reclassifying the fractional snow cover areas obtained from MODIS-Terra (MOD09GA) data by the algorithm derived for the region. An automatic approach was developed for deriving the snow cover depletion curves. Maximum snow cover occurs in the winter months in the Upper Euphrates basin and amounts to 80-90% of the total area. Approximately 45% of the area is covered with snow in autumn; melting occurs in spring, and 15% of the area is covered with snow during the spring months. At the beginning of April snow generally exists above 1900 m in the basin, while at lower elevations snow does not stay after the end of February. Previous studies indicate warming trends in the basin's temperatures. Statistically insignificant decreasing trends in precipitation in the basin, except in the autumn season, were obtained for the period 1975-2008. The major melting period in this basin starts in early April, but in the last three years a shift in snow melting time was detected. When sufficient satellite data are not available due to cloud cover or other reasons, the SDC can be generated using temperature data. Mean cloud coverage for the melting period was obtained as 82% from MODIS-Terra images in the basin. Under changed climate conditions also
Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.
2007-07-01
The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based Prompt gamma neutron activation analysis (PGNAA) setup devoted for in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of thermal neutron flux by activation technique of indium foils, bare and with cadmium covered sheet. Sensitive calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to chlorine and organic matter concentrations changes. The desired optimal sample dimensions were finally achieved once established constraints regarding neutron damage to semi-conductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are studied of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte-Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces by an order of magnitude the requirements on the accelerator beam current. Calculations show that the maximal neutron flux in the thermal zone is 10^{14} cm^{-2}·s^{-1} and in the fast booster zone 5.12·10^{15} cm^{-2}·s^{-1} at k_{eff}=0.98 and a proton beam current I=2.1 mA.
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
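The efficiency-extrapolation step that the simulation above is meant to scrutinize is, at its simplest, a low-order polynomial fit extrapolated to 100% efficiency. The sketch below uses a synthetic linear data set with invented count rates and slope, just to show the mechanics:

```python
import numpy as np

# Synthetic anticoincidence data: observed rate versus the inefficiency
# parameter x = (1 - eff)/eff; ideal model N = N0 * (1 + k*x).
# All numbers are invented for illustration.
N0_true, k = 5000.0, 0.12
x = np.linspace(0.05, 0.40, 8)
rng = np.random.default_rng(7)
rates = N0_true * (1 + k * x) + rng.normal(0.0, 2.0, x.size)

# Low-order polynomial fit; the intercept at x = 0 (100% efficiency)
# is the activity estimate.
coef = np.polyfit(x, rates, 1)
N0_est = np.polyval(coef, 0.0)
print(abs(N0_est - N0_true) < 10.0)
```

The paper's point is that a Geant4-based simulation of the detector response gives a more realistic extrapolation shape (and uncertainty) than this bare polynomial fit alone.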
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
Ye, Hong-zhou; Jiang, Hong
2014-01-01
Materials with spin-crossover (SCO) properties hold great potential for information storage and have therefore received much attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity, whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model, in which the elastic coupling between SCO centers is included by considering harmonic stretching and bending (SAB) interactions, is proposed and solved by Monte Carlo simulations. The key parameters in the new model, $k_1$ and $k_2$, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material under study, which can be readily estimated either from experimental measurements or from first-principles calculations. The convergence issue in the MC simulations of the thermal hysteresis has been carefully checked, and it was found that the stable hysteresis loop can...
Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code
International Nuclear Information System (INIS)
The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). The project analyzes the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, we conduct numerical experiments for analyzing the neutronic design of the Korean HCML TBM and the DEMO fusion blanket and for improving the nuclear performance. The results of the numerical experiments performed in this project will be utilized further for a design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations for evaluating the TBR (Tritium Breeding Ratio) and EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in an ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat drops to 0.02% of the initial amount 10 years after plasma shutdown
International Nuclear Information System (INIS)
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate both theoretically and experimentally the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To first-order approximation, we theoretically demonstrated in a simplified model that the statistical fluctuation tends to overestimate γ-index values when existing in the reference dose distribution and underestimate γ-index values when existing in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with the increase of the statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when they are within the clinically relevant range and the gamma passing rate increases with the increase of the statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation. (paper)
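A minimal 1D version of the γ-index test (global criterion, after Low et al.) can be sketched as follows; the dose profiles are toy Gaussians, with the evaluation profile shifted by 0.5 mm so it passes a 3%/3 mm test:

```python
import numpy as np

def gamma_index(ref, eva, x, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma index of an evaluation dose against a reference:
    for each reference point, the minimum combined dose/distance metric
    over all evaluation points (doses normalized to the reference max)."""
    dmax = ref.max()
    gammas = np.empty(ref.size)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (eva - di) / (dose_tol * dmax)   # dose difference term
        dx = (x - xi) / dist_tol              # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(0.0, 30.0, 61)                # mm
ref = np.exp(-((x - 15.0) / 8.0) ** 2)        # toy beam profile
eva = np.exp(-((x - 15.5) / 8.0) ** 2)        # same profile, 0.5 mm shift

g = gamma_index(ref, eva, x)
print((g <= 1.0).mean() == 1.0)               # 100% passing rate
```

Adding random noise to `ref` or `eva` and watching the passing rate move is a direct way to reproduce the qualitative effect the abstract describes.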
GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI
International Nuclear Information System (INIS)
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fitted by the AGN model and a cirrus component to account for the far-infrared emission.
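Sampling the scattering angle from the Henyey-Greenstein phase function, as used in MC dust codes of this kind, follows a standard analytic inversion. A sketch (the asymmetry parameter value is arbitrary):

```python
import random

def sample_hg_cos_theta(g, rng):
    """Sample the scattering-angle cosine from the Henyey-Greenstein
    phase function with asymmetry parameter g (standard inversion of
    the cumulative distribution)."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(3)
g = 0.6
mu = [sample_hg_cos_theta(g, rng) for _ in range(200000)]
mean_mu = sum(mu) / len(mu)
# For HG scattering, the mean cosine <cos theta> equals g.
print(abs(mean_mu - g) < 0.01)
```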
International Nuclear Information System (INIS)
Highlights: • Code works based on Monte Carlo and escape probability methods. • Sensitivity of Dancoff factor to number of energy groups and type and arrangement of neighbor’s fuels is considered. • Sensitivity of Dancoff factor to control rod’s height is considered. • Dancoff factor high efficiency is achieved versus method sampling neutron flight direction from the fuel surface. • Sensitivity of K to Dancoff factor is considered. - Abstract: Evaluation of multigroup constants in reactor calculations depends on several parameters, the Dancoff factor amid them is used for calculation of the resonance integral as well as flux depression in the resonance region in the heterogeneous systems. This paper focuses on the computer program (MCDAN-3D) developed for calculation of the multigroup black and gray Dancoff factor in three dimensional geometry based on Monte Carlo and escape probability methods. The developed program is capable to calculate the Dancoff factor for an arbitrary arrangement of fuel rods with different cylindrical fuel dimensions and control rods with various lengths inserted in the reactor core. The initiative calculates the black and gray Dancoff factor versus generated neutron flux in cosine and constant shapes in axial fuel direction. The effects of clad and moderator are followed by studying of Dancoff factor’s sensitivity with variation of fuel arrangements and neutron’s energy group for CANDU37 and VVER1000 fuel assemblies. MCDAN-3D outcomes poses excellent agreement with the MCNPX code. The calculated Dancoff factors are then used for cell criticality calculations by the WIMS code
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
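Extracting distal range metrics such as R90 and R50 from a depth-dose curve can be sketched as follows; the depth-dose shape is a toy sigmoid falloff, not TOPAS or planning-system output:

```python
import numpy as np

def distal_range(depth, dose, level):
    """Depth of the distal crossing of `level` (fraction of max dose),
    found by linear interpolation on the falloff side of the peak."""
    d = dose / dose.max()
    i_peak = int(np.argmax(d))
    distal = d[i_peak:]
    below = np.nonzero(distal <= level)[0]
    j = i_peak + below[0]                     # first point at/below level
    # Interpolate between the two bracketing samples.
    x0, x1 = depth[j - 1], depth[j]
    y0, y1 = d[j - 1], d[j]
    return x0 + (y0 - level) * (x1 - x0) / (y0 - y1)

# Toy depth-dose: flat entrance, sharp sigmoid falloff around 150 mm.
z = np.linspace(0.0, 200.0, 401)
dose = 1.0 / (1.0 + np.exp((z - 150.0) / 2.0))

r90 = distal_range(z, dose, 0.90)
r50 = distal_range(z, dose, 0.50)
print(r50 > r90)                              # the 50% level lies deeper
```

Comparing `r90` from two dose arrays (analytical vs MC) on the same grid gives exactly the kind of per-field range difference the abstract reports.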
International Nuclear Information System (INIS)
The growing demand for wireless multimedia applications (smartphones, tablets, digital cameras) requires the development of devices combining high-speed performance with low power consumption. A recent technological breakthrough offering a good compromise between these two antagonistic requirements has been proposed: the 28-14 nm CMOS transistor generations based on fully depleted silicon-on-insulator (FD-SOI) technology, built on a thin Si film of 5-6 nm. In this paper, we review the TEM characterization challenges that are essential for the development of extremely power-efficient System on Chip (SoC)
Skrzyński, Witold
2014-11-01
The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer (HVL) and dose profile, and was thus similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum, 0.5 mm thick on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root-mean-square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM 78 spectra performed equally well. PMID:25028213
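Matching a model spectrum and filtration to a measured half-value layer, as done here, amounts to solving for the filter thickness at which beam transmission drops to one half. A small sketch with invented spectral weights and attenuation coefficients, using bisection rather than the authors' actual fitting procedure:

```python
import math

def transmission(spectrum, t):
    """Fluence-weighted transmission through thickness t (cm).
    spectrum: list of (weight, mu) pairs, mu in 1/cm -- made-up values."""
    total = sum(w for w, _ in spectrum)
    return sum(w * math.exp(-mu * t) for w, mu in spectrum) / total

def half_value_layer(spectrum, hi=1000.0, tol=1e-10):
    """Thickness at which transmission reaches one half (bisection;
    transmission is monotonically decreasing in t)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmission(spectrum, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# monoenergetic sanity check: HVL = ln(2) / mu
mono = [(1.0, 0.5)]
# two-component spectrum: beam hardening puts the HVL between the
# soft and hard components' individual HVLs
duo = [(0.5, 1.0), (0.5, 0.2)]
hvl_mono = half_value_layer(mono)
hvl_duo = half_value_layer(duo)
```

In a spectrum-matching loop one would adjust the assumed filtration until `half_value_layer` reproduces the measured value.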
Nonlinear lower hybrid wave depletion
International Nuclear Information System (INIS)
Two numerical ray tracing codes with focusing are used to compute lower hybrid daughter wave amplification by quasi-mode parametric decay. The first code, LHPUMP, provides a numerical pump model on a grid. This model is used by a second code, LHFQM, which computes daughter wave amplification inside the pump extent and follows the rays until their energy is absorbed by the plasma. An analytic model is then used to estimate pump depletion based on the numerical results. Results for PLT indicate strong pump depletion at the plasma edge during high density operation at the 800 MHz wave frequency, but weak depletion for the 2.45 GHz experiment. This is proposed to be the mechanism responsible for the high density limit for current drive as well as for the difficulty in heating ions
Monte Carlo-based diode design for correction-less small field dosimetry
International Nuclear Information System (INIS)
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to look at introducing air upstream of diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced onto the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip, using field sizes as small as 5 mm × 5 mm. The metric used in this study, D_w,Q/D_Det,Q, represents the ratio of the dose to a point in water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting D_w,Q/D_Det,Q as a function of introduced air gap size for various field sizes and finding the intersection point of these plots, that is, the point at which D_w,Q/D_Det,Q was constant for all field sizes. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, k_{Qclin,Qmsr}^{fclin,fmsr} was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implemented on the commercial diodes via a cap consisting of the air cavity surrounded by water-equivalent material. The results for the unenclosed silicon chip show that an ideal small field dosimetry diode could be created by using a silicon chip
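The intersection-point construction described above can be sketched numerically: fit the dose ratio versus air-gap thickness with a line for each field size, then find where the lines cross. The slopes and intercepts below are invented for illustration, chosen so that all fits share one crossing point:

```python
def intersection_x(fit_a, fit_b):
    """x-coordinate where two linear fits (slope, intercept) cross."""
    (ma, ca), (mb, cb) = fit_a, fit_b
    return (cb - ca) / (ma - mb)

# hypothetical linear fits of the ratio D_w,Q / D_Det,Q against the
# introduced air-gap thickness t (mm), one fit per field size
fits = {
    "5 mm":  (0.030, 0.970),
    "10 mm": (0.020, 0.980),
    "30 mm": (0.010, 0.990),
}

keys = list(fits)
crossings = [intersection_x(fits[a], fits[b])
             for i, a in enumerate(keys) for b in keys[i + 1:]]
# with real (noisy) fits the pairwise crossings scatter slightly,
# so take their mean as the optimal air-gap thickness
optimal_gap = sum(crossings) / len(crossings)
```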
Monte Carlo-based diode design for correction-less small field dosimetry
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R. T.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2013-07-01
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to look at introducing air upstream of diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced onto the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip, using field sizes as small as 5 mm × 5 mm. The metric D_{w,Q}/D_{Det,Q} used in this study represents the ratio of the dose to a point in water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting D_{w,Q}/D_{Det,Q} as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots. That is, the point at which D_{w,Q}/D_{Det,Q} was constant for all field sizes was found. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, k_{Q_clin,Q_msr}^{f_clin,f_msr} was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implemented on the commercial diodes via a cap consisting of the air cavity surrounded by water equivalent material. The
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
International Nuclear Information System (INIS)
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm2 fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two
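Agreement criteria like the 2%/2 mm figures quoted above combine a dose-difference test with a distance-to-agreement (DTA) test. A simplified one-dimensional sketch with toy profiles, not the paper's data:

```python
def dta(x, ref, xi, di):
    """Distance-to-agreement: distance from position xi to the nearest
    position where the reference profile attains dose di (linear
    interpolation between samples)."""
    best = float("inf")
    for j in range(len(x) - 1):
        lo, hi = sorted((ref[j], ref[j + 1]))
        if lo <= di <= hi and ref[j] != ref[j + 1]:
            f = (di - ref[j]) / (ref[j + 1] - ref[j])
            xc = x[j] + f * (x[j + 1] - x[j])
            best = min(best, abs(xc - xi))
    return best

def pass_rate(x, calc, ref, dd_tol, dta_tol):
    """Fraction of points meeting the dose-difference OR DTA criterion,
    a simplified 1-D stand-in for a 2%/2 mm composite test."""
    n_pass = 0
    for i in range(len(x)):
        if abs(calc[i] - ref[i]) <= dd_tol or dta(x, ref, x[i], calc[i]) <= dta_tol:
            n_pass += 1
    return n_pass / len(x)

# reference: linear ramp 0..100%; calculation: the same ramp shifted 1 mm
x = list(range(11))               # positions in mm
ref = [10.0 * v for v in x]       # per-cent dose
calc = [10.0 * (v - 1) for v in x]
rate = pass_rate(x, calc, ref, dd_tol=2.0, dta_tol=2.0)  # 2% of max / 2 mm
```

The shifted points fail the 2% dose-difference test but pass on DTA, except the first point, whose dose never occurs in the reference.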
International Nuclear Information System (INIS)
In Japan, depleted uranium ammunition is regarded as a nuclear weapon and meets with fierce opposition. The fact that US Marines mistakenly fired bullets containing depleted uranium on an island off Okinawa during training exercises in December 1995 and January 1996 also contributes to this opposition. The overall situation in this area in Japan is outlined. (P.A.)
Management of depleted uranium
International Nuclear Information System (INIS)
Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)
Jin, Shengye; Tamura, Masayuki
2013-10-01
Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to changes in the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately, but it is time consuming. A botanical growth function can model the growth of a single tree, but cannot express the interaction among trees. The L-system is also a functionally controlled tree-growth model, but it requires a large amount of computing memory. Additionally, it models only the current tree pattern rather than tree growth during the radiative transfer simulation. It is therefore more practical to use regular solids (ellipsoids, cones, cylinders, etc.) to represent individual canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, each canopy radius (rc) is generated randomly. The circle centres are then placed on the XY-plane while the circle packing algorithm keeps the circles separate from each other. To model an individual tree, Ishikawa's tree-growth regression model is employed to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
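The stochastic circle packing step described above can be sketched as rejection sampling: draw a random radius and centre, and keep the circle only if it overlaps no previously accepted circle. All parameters below are hypothetical:

```python
import math
import random

def pack_circles(n, r_min, r_max, size, seed=1, max_tries=20000):
    """Stochastic circle packing on a size x size plot: draw random
    canopy radii and centre positions, accepting only circles that do
    not overlap any accepted circle (rejection sampling)."""
    rng = random.Random(seed)
    circles = []                                 # (x, y, r)
    tries = 0
    while len(circles) < n and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x = rng.uniform(r, size - r)             # keep circle inside the plot
        y = rng.uniform(r, size - r)
        if all(math.hypot(x - cx, y - cy) >= r + cr
               for cx, cy, cr in circles):
            circles.append((x, y, r))
    return circles

scene = pack_circles(n=20, r_min=1.0, r_max=3.0, size=50.0)
coverage = sum(math.pi * r * r for _, _, r in scene) / 50.0 ** 2
```

In practice one would iterate, regenerating radii or relaxing the target count N until the declared canopy coverage is met.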
Water Depletion Threatens Agriculture
Brauman, K. A.; Richter, B. D.; Postel, S.; Floerke, M.; Malsy, M.
2014-12-01
Irrigated agriculture is the human activity that has by far the largest impact on water, constituting 85% of global water consumption and 67% of global water withdrawals. Much of this water use occurs in places where water depletion, the ratio of water consumption to water availability, exceeds 75% for at least one month of the year. Although only 17% of global watershed area experiences depletion at this level or more, nearly 30% of total cropland and 60% of irrigated cropland are found in these depleted watersheds. Staple crops are particularly at risk, with 75% of global irrigated wheat production and 65% of irrigated maize production found in watersheds that are at least seasonally depleted. Of importance to textile production, 75% of cotton production occurs in the same watersheds. For crop production in depleted watersheds, we find that one-half to two-thirds of production occurs in watersheds that have not just seasonal but annual water shortages, suggesting that redistributing water supply over the course of the year cannot be an effective solution to shortage. We explore the degree to which irrigated production in depleted watersheds reflects limitations in supply, a byproduct of the need for irrigation in perennially or seasonally dry landscapes, and identify heavy irrigation consumption that leads to watershed depletion in more humid climates. For watersheds that are not depleted, we evaluate the potential impact of an increase in irrigated production. Finally, we evaluate the benefits of irrigated agriculture in depleted and non-depleted watersheds, quantifying the fraction of irrigated production going to food production, animal feed, and biofuels.
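The depletion criteria used above, a 75% consumption-to-availability ratio evaluated monthly (seasonal depletion) or over the whole year (annual depletion), are easy to state in code. A sketch with an invented watershed:

```python
def seasonally_depleted(consumption, availability, threshold=0.75):
    """True if monthly consumption/availability exceeds the threshold
    in at least one month (the seasonal-depletion criterion)."""
    return any(c / a > threshold for c, a in zip(consumption, availability))

def annually_depleted(consumption, availability, threshold=0.75):
    """True if total annual consumption exceeds the threshold fraction
    of total annual availability."""
    return sum(consumption) / sum(availability) > threshold

# hypothetical watershed: heavy irrigation in a 4-month growing
# season, modest use the rest of the year (units arbitrary)
use   = [2, 2, 2, 8, 9, 9, 8, 2, 2, 2, 2, 2]
avail = [10] * 12

seasonal = seasonally_depleted(use, avail)  # growing-season months exceed 75%
annual   = annually_depleted(use, avail)    # but the annual total does not
```

A watershed like this one is the case where redistributing supply across the year could still help; the abstract's point is that one-half to two-thirds of at-risk production sits in watersheds where even the annual test fails.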
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method is recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical application. Recently, much effort has been made to achieve fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVIDIA CUDA environment, which limits code portability to other platforms and hinders the introduction of GPU-based MC simulation into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external-beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It approximates the probability of the snow/soil parameters conditioned on the microwave TB signals measured at different bands. Although this method can solve for all snow parameters, including depth, density, grain size and temperature, at the same time, it still needs prior information on these parameters to calculate the posterior probability. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test will first be carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a certain amount. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, will be used for this purpose. The aim is to provide guidance on applying MCMC under different circumstances. The method will then be applied to snowpits at different sites, including Sodankylä (Finland), Churchill (Canada) and Colorado (USA), using TB measured by ground-based radiometers at different bands. Building on the previous work, the errors in these practical cases will be studied, and the error sources will be separated and quantified.
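The MCMC retrieval described above can be sketched as a Metropolis random walk over a single parameter (snow depth), with a toy forward model standing in for a real snow emission model and an invented Gaussian prior; the synthetic observation carries an error within the 2-K budget:

```python
import math
import random

def tb_model(depth_m):
    """Toy forward model standing in for a real emission model:
    brightness temperature (K) falls off with snow depth (m)."""
    return 270.0 - 40.0 * (1.0 - math.exp(-depth_m / 0.5))

def log_posterior(depth, tb_obs, sigma_obs=2.0, prior_mean=0.6, prior_sd=0.3):
    """Gaussian observation error (the 2-K budget) plus a Gaussian
    depth prior -- all numbers hypothetical."""
    if depth <= 0.0:
        return float("-inf")
    log_lik   = -0.5 * ((tb_model(depth) - tb_obs) / sigma_obs) ** 2
    log_prior = -0.5 * ((depth - prior_mean) / prior_sd) ** 2
    return log_lik + log_prior

def metropolis(tb_obs, n_steps=20000, step=0.05, seed=2):
    rng = random.Random(seed)
    x = 0.6                                # start at the prior mean
    lp_x = log_posterior(x, tb_obs)
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)       # random-walk proposal
        lp_y = log_posterior(y, tb_obs)
        if math.log(max(rng.random(), 1e-300)) < lp_y - lp_x:
            x, lp_x = y, lp_y              # accept
        chain.append(x)
    return chain[n_steps // 2:]            # discard burn-in

true_depth = 0.8
tb_obs = tb_model(true_depth) + 1.0        # synthetic TB with a 1-K error
posterior = metropolis(tb_obs)
depth_est = sum(posterior) / len(posterior)
```

Tightening or loosening `prior_sd` shows directly how much the prior pulls the estimate, which is exactly the sensitivity the abstract proposes to test.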
Directory of Open Access Journals (Sweden)
Tuija Kangasmaa
2012-01-01
Full Text Available Simultaneous Tl-201/Tc-99m dual-isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises an ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm first reconstructs the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as input for an efficient MC-based simulation of the down-scatter of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations at which the scatter estimate is updated and the number of MC-simulated photons. The results showed that two scatter-update iterations and 10^5 simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10^6 simulated photons are needed to generate good-quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual-isotope reconstruction can be accomplished in less than 3 minutes.
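A common way to fold a (down-)scatter estimate into iterative reconstruction, sketched below on a toy one-dimensional problem, is to add it to the forward projection inside each MLEM/OS-EM update. The 2x2 system matrix and all numbers are invented:

```python
def mlem_update(activity, system, measured, scatter):
    """One MLEM iteration with an additive scatter/down-scatter
    estimate in the forward projection: proj_b = sum_v A[b][v]*x_v + s_b."""
    n_bins, n_vox = len(measured), len(activity)
    proj = [sum(system[b][v] * activity[v] for v in range(n_vox)) + scatter[b]
            for b in range(n_bins)]
    ratio = [measured[b] / proj[b] if proj[b] > 0 else 0.0
             for b in range(n_bins)]
    sens = [sum(system[b][v] for b in range(n_bins)) for v in range(n_vox)]
    return [activity[v] / sens[v] *
            sum(system[b][v] * ratio[b] for b in range(n_bins))
            for v in range(n_vox)]

# toy 2-voxel / 2-bin problem: identity system matrix, known scatter
system   = [[1.0, 0.0], [0.0, 1.0]]
scatter  = [1.0, 1.0]
measured = [5.0, 3.0]          # true activity [4, 2] plus the scatter
activity = [1.0, 1.0]
for _ in range(100):
    activity = mlem_update(activity, system, measured, scatter)
```

In the paper's scheme the `scatter` term for the Tl-201 window is itself regenerated by MC simulation at a small number of update iterations, rather than fixed as here.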
Tsukamoto, Tetsuo; Yamamoto, Hiroyuki; Okada, Seiji; Matano, Tetsuro
2016-09-01
Although antiretroviral therapy has made human immunodeficiency virus (HIV) infection a controllable disease, it is still unclear how viral replication persists in untreated patients and causes CD4(+) T-cell depletion, leading to acquired immunodeficiency syndrome (AIDS) within several years. Theorists have tried to explain it with the diversity threshold theory, in which accumulated mutations in the HIV genome make the virus so diverse that the immune system is no longer able to recognize all the variants and fails to control the viraemia. Although the theory could apply to a number of cases, macaque AIDS models using simian immunodeficiency virus (SIV) have shown that failed viral control at the set point is not always associated with T-cell escape mutations. Moreover, even monkeys without a protective major histocompatibility complex (MHC) allele can contain replication of a superinfecting SIV following immunization with a live-attenuated SIV vaccine, while those animals are not capable of fighting primary SIV infection. Here we propose a recursion-based hypothesis of virus-specific naive CD4(+) T-cell depletion, arrived at by considering what may happen in individuals experiencing primary immunodeficiency virus infection. This could explain the mechanism for impairment of the virus-specific immune response in the course of HIV infection. PMID:27515208
International Nuclear Information System (INIS)
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Simakov, S P; Moellendorff, U V; Schmuck, I; Konobeev, A Y; Korovin, Y A; Pereslavtsev, P
2002-01-01
A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d + ⁶,⁷Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and against calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious along with the newly evaluated d + ⁶,⁷Li data is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved as more advanced and validated data become available.
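Sampling source neutrons from tabulated rather than analytical cross sections, as described above, is classically done by inverting a cumulative table. A sketch with made-up Li(d, xn) energy bins and relative yields:

```python
import bisect
import random

def build_cdf(weights):
    """Cumulative distribution from tabulated relative cross sections."""
    total = float(sum(weights))
    cdf, running = [], 0.0
    for w in weights:
        running += w / total
        cdf.append(running)
    cdf[-1] = 1.0          # guard against floating-point round-off
    return cdf

def sample_bin(values, cdf, rng):
    """Inverse-CDF sampling: pick the tabulated bin whose cumulative
    probability first reaches a uniform random number."""
    return values[bisect.bisect_left(cdf, rng.random())]

# hypothetical tabulated neutron-energy distribution (made-up values)
energies = [2.0, 6.0, 10.0, 14.0]     # MeV bin centres
yields   = [1.0, 4.0, 3.0, 2.0]       # relative yields

rng = random.Random(3)
cdf = build_cdf(yields)
sample = [sample_bin(energies, cdf, rng) for _ in range(100000)]
frac_6mev = sample.count(6.0) / len(sample)   # expect about 0.4
```

A real source routine would additionally interpolate within bins and sample the emission angle from the double-differential table; the inversion step is the same.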
Directory of Open Access Journals (Sweden)
Qinming Liu
2012-01-01
Full Text Available Health management of a complex nonlinear system is becoming more important for condition-based maintenance and for minimizing the related risks and costs over its entire life. However, a complex nonlinear system often operates under dynamic operational and environmental conditions and is subject to high levels of uncertainty and unpredictability, so few effective methods for online health management exist at present. This paper combines the hidden semi-Markov model (HSMM) with sequential Monte Carlo (SMC) methods. HSMM is used to obtain the transition probabilities among health states and the health-state durations of a complex nonlinear system, while the SMC method is adopted to decrease the computational and space complexity and to describe the probabilistic relationships between multiple health states and the monitored observations of a complex nonlinear system. This paper proposes a novel method of multistep-ahead health recognition based on the joint probability distribution for health management of a complex nonlinear system. Moreover, a new online health prognostic method is developed. A real case study is used to demonstrate the implementation and potential applications of the proposed methods for online health management of complex nonlinear systems.
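The HSMM-plus-SMC combination described above can be sketched as a bootstrap particle filter over discrete health states: propagate particles through the transition matrix, weight them by the observation likelihood, resample, and report the modal state. All transition and emission numbers below are illustrative, not from the paper's case study:

```python
import math
import random

def sample_discrete(probs, rng):
    """Draw an index from a discrete probability vector."""
    u, running = rng.random(), 0.0
    for i, p in enumerate(probs):
        running += p
        if u < running:
            return i
    return len(probs) - 1

def particle_filter(observations, trans, emit, n_states,
                    n_particles=2000, seed=4):
    """Bootstrap SMC over discrete health states; trans would come
    from the HSMM in the paper's scheme."""
    rng = random.Random(seed)
    particles = [rng.randrange(n_states) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the transition matrix
        particles = [sample_discrete(trans[s], rng) for s in particles]
        # weight by observation likelihood, then resample
        weights = [emit[s](y) for s in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # report the modal state of the particle cloud
        estimates.append(max(range(n_states), key=particles.count))
    return estimates

# two health states: 0 = healthy, 1 = degraded (absorbing)
trans = [[0.95, 0.05], [0.0, 1.0]]
emit = [lambda y: math.exp(-(y - 0.0) ** 2 / 0.18),  # sensor near 0 if healthy
        lambda y: math.exp(-(y - 1.0) ** 2 / 0.18)]  # sensor near 1 if degraded
obs = [0.0, 0.1, 0.0, 0.9, 1.0, 1.1]
states = particle_filter(obs, trans, emit, n_states=2)
```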
International Nuclear Information System (INIS)
To date, light scattering models of snow take little account of real snow microstructure. The idealized spherical or other single-shape particle assumptions in previous snow light scattering models can cause errors in the light scattering modeling of snow and, in turn, in remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, with which the real snow microstructure is considered. The specific surface area of a bicontinuous medium can be derived analytically and exactly. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted, and this relationship can be used to improve future SSA inversion algorithms. The model-predicted polarized reflectance is validated and proved accurate, and can be further applied in polarized remote sensing. -- Highlights: • A bicontinuous random medium is used to model real snow microstructure. • A photon tracing technique with polarization tracking is applied. • The SSA–albedo relationship of snow is close to that of a sphere-based medium. • Validation of albedo and BRDF showed good results. • Validation of polarized reflectance showed good agreement with experimental data
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
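The nonlinear, size-dependent efficiency calibration described above can be sketched as a quadratic least-squares fit of counting efficiency against body weight, followed by inverting net counts to grams of potassium. All numbers are invented; the real calibration used BOMAB phantoms and MCNP-derived detector corrections:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit y = a + b*x + c*x^2 via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]
    return solve3(A, t)

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# hypothetical calibration points: counting efficiency falls with body
# weight (kg) as overlying tissue attenuates the 1.46 MeV 40K gammas
weights = [15.0, 25.0, 35.0, 45.0, 55.0, 65.0, 75.0]
effs = [0.120 - 0.0010 * w + 0.000004 * w * w for w in weights]

a, b, c = fit_quadratic(weights, effs)

def efficiency(w):
    return a + b * w + c * w * w

def grams_k(net_counts, w, counts_per_gram_at_unit_eff=1000.0):
    """Grams of potassium from net 40K counts, corrected for body size."""
    return net_counts / (efficiency(w) * counts_per_gram_at_unit_eff)
```

In the study a separate fit was derived for each gender, and a room-background correction (itself dependent on subject size) was applied to the counts before this inversion.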
Energy Technology Data Exchange (ETDEWEB)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)
2015-12-15
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of this approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates the command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
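The simulate-each-compartment-once strategy described above works because image formation is linear in activity: a unit-activity projection per compartment can be weighted by the pharmacokinetic activity at the acquisition time and summed. A toy sketch with hypothetical compartments, projections and kinetics:

```python
def decayed_activity(a0_mbq, half_life_h, t_h):
    """Mono-exponential washout for one compartment (toy pharmacokinetics)."""
    return a0_mbq * 0.5 ** (t_h / half_life_h)

def aggregate_image(unit_projections, activities):
    """Each compartment is simulated once per unit activity; the final
    planar image at a time point is the activity-weighted sum."""
    n_pix = len(next(iter(unit_projections.values())))
    image = [0.0] * n_pix
    for comp, proj in unit_projections.items():
        a = activities[comp]
        for i, p in enumerate(proj):
            image[i] += a * p
    return image

# hypothetical 3-pixel unit projections for two compartments
unit_projections = {"liver": [1.0, 0.0, 1.0], "kidney": [0.0, 2.0, 0.0]}
t = 6.0   # hours post-injection
activities = {"liver": decayed_activity(10.0, 6.0, t),
              "kidney": decayed_activity(8.0, 3.0, t)}
image = aggregate_image(unit_projections, activities)
```

Changing the acquisition time only changes the weights, so a whole step-and-shoot series can be produced from the same once-computed projections.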
International Nuclear Information System (INIS)
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of this approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates the command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
International Nuclear Information System (INIS)
Monte Carlo simulation codes for impurity transport have been developed by several groups, mainly for fusion-related edge plasmas. The state of an impurity particle is determined by atomic and molecular processes in the plasma, such as ionization and charge exchange. Many atomic and molecular processes must be considered, because the edge plasma contains not only impurity atoms but also impurity molecules, mainly arising from chemical erosion of carbon materials, and their cross sections have been given experimentally and theoretically. We need to reveal which processes are essential under a given edge plasma condition. A Monte Carlo simulation code that takes such various atomic and molecular processes into account is therefore necessary to investigate the behavior of impurity particles in plasmas. Usually, an impurity transport simulation code is written for specific atomic and molecular processes, so that introducing a new process forces complicated programming work. In order to evaluate the various proposed atomic and molecular processes, a flexible management of atomic and molecular reactions must be established. We have developed an impurity transport simulation code based on object-oriented methods. By employing object-oriented programming, we can handle each particle as an 'object' that enfolds both data and procedures. A user (note: not a programmer) can define the properties of each particle species and the related atomic and molecular processes, and each 'object' is then defined by analyzing this information. According to the relations among plasma particle species, objects are connected with each other and change their state by themselves. These objects are allocated to program memory dynamically, to accommodate an arbitrary number of species and atomic/molecular reactions. Thus we can treat arbitrary species and processes, starting from, for instance, methane and acetylene. Such a software procedure would also be useful for industrial application plasmas
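The object-oriented scheme (user-defined species and reactions, particles as objects that change their own state) can be sketched as follows; the species names and rates are illustrative, not physical data:

```python
import random
from dataclasses import dataclass

@dataclass
class Reaction:
    """A user-defined atomic/molecular process: reactant -> product at a given rate."""
    reactant: str
    product: str
    rate: float  # arbitrary units; in practice derived from cross sections

class Particle:
    """Each particle is an 'object' holding its own state and reaction table."""
    def __init__(self, species, reactions, rng):
        self.species = species
        self.reactions = reactions
        self.rng = rng

    def step(self, dt):
        # Apply at most one process this step, with probability rate*dt each.
        for r in (r for r in self.reactions if r.reactant == self.species):
            if self.rng.random() < r.rate * dt:
                self.species = r.product
                break

# Hypothetical methane break-up chain; rates are made up for illustration.
chain = [
    Reaction("CH4", "CH3", 0.5),
    Reaction("CH3", "CH2", 0.5),
    Reaction("CH2", "C", 0.2),
]
rng = random.Random(1)
p = Particle("CH4", chain, rng)
for _ in range(1000):
    p.step(0.1)
```

Adding a new process is just appending a `Reaction` to the user-supplied table, which is the flexibility the abstract argues for.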
International Nuclear Information System (INIS)
An explosive detection system based on a Deuterium–Deuterium (D–D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting the prompt gamma emission (10.82 MeV) following radiative neutron capture by 14N nuclei. The explosive detection system was built around a fully high-voltage-shielded, axial D–D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10^10 fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma-ray shielding, respectively. The shape and thickness of the moderators and shields were optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI, BGO, and LaBr3-based γ-ray detectors to different explosives is described. - Highlights: • An explosive detection system based on a Deuterium–Deuterium neutron generator has been designed. • Shielding for a D–D neutron generator has been designed using the MCNP code. • A dedicated shield must be designed for each detector and neutron source. • Thermal neutron capture reactions have been used for detecting the 10.82 MeV line from 14N nuclei. • Simulation of the response functions of NaI, BGO, and LaBr3 detectors
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
Directory of Open Access Journals (Sweden)
Eslahchi Changiz
2010-08-01
Full Text Available Abstract Background A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa. In practice, distance-based methods are faster than the alternatives. Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix and then constructs a collection of weighted splits using that circular ordering, and SplitsTree, a program that uses these weighted splits, draws the phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem; N-Net is a heuristic algorithm for finding it, based on the neighbor-joining algorithm. Results In this paper, we present a heuristic algorithm for finding an optimal circular ordering based on the Monte-Carlo method, called the MC-Net algorithm. To show that MC-Net performs better than N-Net, we apply both algorithms to different data sets, draw the phylogenetic networks corresponding to their outputs using SplitsTree, and compare the results. Conclusions We find that the circular ordering produced by MC-Net is closer to the optimal circular ordering than that of N-Net. Furthermore, the networks that SplitsTree builds from the MC-Net outputs are simpler than those from N-Net.
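A Monte-Carlo search over circular orderings can be sketched as follows; the cost function (sum of distances between neighbours on the circle) and the greedy, zero-temperature acceptance rule are simplifications for illustration, not the actual MC-Net algorithm:

```python
import random

def circular_cost(order, dist):
    """Sum of distances between neighbours on the circle."""
    n = len(order)
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

def mc_ordering(dist, iters=2000, seed=0):
    """Monte Carlo search: propose random swaps, keep those that do not raise the cost."""
    rng = random.Random(seed)
    order = list(range(len(dist)))
    best = circular_cost(order, dist)
    for _ in range(iters):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        cost = circular_cost(order, dist)
        if cost <= best:
            best = cost
        else:
            order[i], order[j] = order[j], order[i]  # reject the move
    return order, best

# Toy symmetric distance matrix for 6 taxa.
random.seed(42)
n = 6
d = [[0.0] * n for _ in range(n)]
for a in range(n):
    for b in range(a + 1, n):
        d[a][b] = d[b][a] = random.random()

order, cost = mc_ordering(d)
```

A full implementation would add an annealing temperature so that occasional uphill moves can escape local minima.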
International Nuclear Information System (INIS)
The majority of present positron emission tomography (PET) animal systems are based on the coupling of high-density scintillators and light detectors. A disadvantage of these detector configurations is the compromise between image resolution, sensitivity and energy resolution. In addition, current combined imaging devices are based on simply placing different apparatus back-to-back and in axial alignment, without any significant level of software or hardware integration. The use of semiconductor CdZnTe (CZT) detectors is a promising alternative to scintillators for gamma-ray imaging systems. At the same time, CZT detectors have the properties necessary for the construction of a truly integrated imaging device (PET/SPECT/CT). The aim of this study was to assess the performance of different small animal PET scanner architectures based on CZT pixellated detectors and to compare their performance with that of state-of-the-art existing PET animal scanners. Different scanner architectures were modelled using GATE (Geant4 Application for Tomographic Emission). Particular scanner design characteristics included an overall cylindrical scanner format of 8 and 24 cm in axial and transaxial field of view, respectively, and a temporal coincidence window of 8 ns. Different individual detector modules were investigated, considering pixel pitches down to 0.625 mm and detector thicknesses from 1 to 5 mm. Modified NEMA NU2-2001 protocols were used to simulate performance under mouse, rat and monkey imaging conditions. These protocols allowed us to directly compare the performance of the proposed geometries with the latest generation of current small animal systems. The results attained demonstrate the potential for higher NECR with CZT-based scanners in comparison to scintillator-based animal systems
Visvikis, D.; Lefevre, T.; Lamare, F.; Kontaxakis, G.; Santos, A.; Darambara, D.
2006-12-01
Energy Technology Data Exchange (ETDEWEB)
Visvikis, D. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France)]. E-mail: Visvikis.Dimitris@univ-brest.fr; Lefevre, T. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Lamare, F. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Kontaxakis, G. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Santos, A. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Darambara, D. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford (United Kingdom)
2006-12-20
Kianoush Fathi Vajargah
2014-01-01
The accuracy of Monte Carlo and quasi-Monte Carlo methods decreases in problems of high dimension. The objective of this study was therefore to present a method that increases the accuracy of the answer, with the gain growing as the problem gets larger. To this end, this study combined the two previous methods, QMC and MC, and presented a hybrid method with efficiency higher than that of either method alone.
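The MC/QMC combination can be illustrated on a one-dimensional toy integral; the random-shift (Cranley-Patterson) construction used for the hybrid below is one standard way to mix the two methods and is an assumption about the flavour of hybrid intended, not the paper's exact scheme:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += f * (k % base)   # radical-inverse digit expansion
            k //= base
            f /= base
        pts.append(x)
    return pts

def estimate(points, f):
    """Sample-mean estimator of the integral of f over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x          # exact integral over [0, 1] is 1/3
n = 1024
rng = random.Random(7)

mc = estimate([rng.random() for _ in range(n)], f)        # plain Monte Carlo
qmc = estimate(van_der_corput(n), f)                      # quasi-Monte Carlo
shift = rng.random()                                      # hybrid: randomly shifted QMC points
hybrid = estimate([(x + shift) % 1.0 for x in van_der_corput(n)], f)
```

The random shift keeps the estimator unbiased (the MC ingredient) while preserving the even coverage of the low-discrepancy points (the QMC ingredient).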
Schreiber, Eric C.; Chang, Sha X.
2012-01-01
Purpose: Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently only produced in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology we have proposed and are in the initial development stages of a compact MRT system that is based on ca...
Energy Technology Data Exchange (ETDEWEB)
Moore, Stephen C. [Department of Radiology, Brigham and Women' s Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States)]. E-mail: scmoore@bwh.harvard.edu; Ouyang, Jinsong [Department of Radiology, Brigham and Women' s Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States); Park, Mi-Ae [Department of Radiology, Brigham and Women' s Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States); El Fakhri, Georges [Department of Radiology, Brigham and Women' s Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States)
2006-12-20
We have incorporated Monte Carlo (MC)-based estimates of patient scatter, detector scatter, and crosstalk into an iterative reconstruction algorithm, and compared its performance to that of a general spectral (GS) approach. We extended the MC-based reconstruction algorithm of de Jong et al. by (1) using the 'delta scattering' method to determine photon interaction points, (2) simulating scatter maps for many energy bins simultaneously, and (3) decoupling the simulation of the object and detector by using pre-stored point spread functions (PSF) that included all collimator and detector effects. A numerical phantom was derived from a segmented CT scan of a torso phantom. The relative values of In-111 activity concentration simulated in soft tissue, liver, spine, left lung, right lung, and five spherical tumors (1.3-2.0 cm diam.) were 1.0, 1.5, 1.5, 0.3, 0.5, and 10.0, respectively. GS scatter projections were incorporated additively in an OSEM reconstruction (6 subsets × 10 projections × 2 photopeak windows). After three iterations, GS scatter projections were replaced by MC-estimated scatter projections for two additional iterations. MC-based compensation was quantitatively compared to GS-based compensation after five iterations. The bias of organ activity estimates ranged from -13% to -6.5% (GS), and from -1.4% to +5.0% (MC); tumor bias ranged from -20.0% to +10.0% for GS (mean±std.dev. = -4.3±11.9%), and from -2.2% to +18.8% for MC (+4.1±8.6%). Image noise in all organs was less with MC than with GS.
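The 'delta scattering' (Woodcock tracking) step mentioned in point (1) samples free paths from a single majorant cross section and then accepts or rejects each tentative interaction; a toy 1-D sketch, not the actual code of de Jong et al., is:

```python
import math
import random

def transmit_delta_tracking(sigma, sigma_maj, length, n, seed=3):
    """Fraction of photons crossing a slab, sampled with delta (Woodcock) tracking.

    Free paths are drawn from the majorant cross section sigma_maj; at each
    tentative interaction point a real collision is accepted with probability
    sigma(x)/sigma_maj, otherwise the flight continues (a 'delta' scattering).
    """
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / sigma_maj   # majorant free path
            if x >= length:
                crossed += 1
                break
            if rng.random() < sigma(x) / sigma_maj:    # real (absorbing) collision
                break
    return crossed / n

# Homogeneous test case with the known answer exp(-sigma * L).
sigma = lambda x: 1.0
t = transmit_delta_tracking(sigma, sigma_maj=1.5, length=2.0, n=20000)
```

The benefit is that `sigma(x)` may vary arbitrarily through the medium without ever recomputing distances to material boundaries.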
International Nuclear Information System (INIS)
Depleted uranium is uranium in which the percentage of the fissile isotope uranium-235 is less than 0.2% or 0.3%. It is usually produced by reprocessing burned nuclear fuel, and it is also mixed with some other radioactive elements such as uranium-236, uranium-238 and plutonium-239. The useful features of depleted uranium are its high density, low price and easy availability. These specifications make depleted uranium one of the best materials when objects that are small in size but quite heavy for their size are needed. Uses of depleted uranium have increased in recent years, in domestic industry as well as in the nuclear industry. It is now used in many military and peaceful applications, such as balancing weights in giant aircraft, ships and missiles, and in the manufacture of some types of extremely hard concrete. (author)
International Nuclear Information System (INIS)
Nuclear data libraries are widely used in reactor design, shielding, and activation analyses. In many instances there is either a complete lack or a paucity of data; therefore nuclear reaction models must supplement and augment experimental data. Theoretical nuclear models thus go hand in hand with data in creating the best nuclear data libraries possible. The intranuclear cascade model (INC), the first and classical approach to describing the preequilibrium regime, follows coordinate-space particle trajectories within the nucleus by means of a Monte Carlo algorithm in which the numerical simulation of the scattering process is based on experimental nucleon-nucleon scattering cross sections. Angular distributions are calculated, but the predicted emission at back angles can underestimate data by a few orders of magnitude. The multistage preequilibrium exciton model (MPM) has been implemented in LAHET, the Los Alamos National Laboratory version of the high-energy transport code, in order to correct this defect. Preequilibrium models are statistical, but employ an analytical solution technique. First, the MPM does not completely replace the INC, even at low incident energies. It is possible that augmenting the MPM with a nuclear surface model, which provides greater particle emission at the tail of the spectra, may allow the MPM to replace the INC for these cases. Second, a more physical interfacing scheme between the MPM and the Bertini-model INC is being sought that would employ an excitation energy dependence. This would augment the present ISABEL INC counting scheme that follows the number of excitons created
International Nuclear Information System (INIS)
In this work a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in the form of a beam, i.e., μ0 = 1. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and heavy water was studied. A first remarkable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it lies at less than ten transport mean free paths; the differences between the two methods are larger in the light water case. A second remarkable result is that the two distributions behave similarly at small numbers of mean free paths, while at large numbers of mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed back toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
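A minimal 1-D analogue of the statistical calculation can be sketched; the survival probability and the isotropic-scattering model below are illustrative choices, not the cross sections of light or heavy water:

```python
import math
import random

def collision_density(c_scatter, n_hist, n_bins=20, bin_w=0.5, seed=5):
    """Tally collision depths (in mean free paths) for a mu0 = 1 beam source
    entering a semi-infinite 1-D medium; each collision scatters isotropically
    with survival probability c_scatter, otherwise the neutron is absorbed."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                       # beam source at the boundary
        while True:
            x += -math.log(rng.random()) * mu  # exponential free flight
            if x < 0.0:                        # leaked back out of the medium
                break
            k = int(x / bin_w)
            if k < n_bins:
                bins[k] += 1                   # tally the collision depth
            if rng.random() >= c_scatter:      # absorbed
                break
            mu = 2.0 * rng.random() - 1.0      # isotropic re-direction

    return bins

bins = collision_density(c_scatter=0.9, n_hist=20000)
peak = bins.index(max(bins))
```

Even this toy model reproduces the qualitative finding: the collision density peaks within the first few mean free paths and decays with depth.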
International Nuclear Information System (INIS)
A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN
Investigation of the CRT performance of a PET scanner based in liquid xenon: A Monte Carlo study
Gomez-Cadenas, J J; Ferrario, P; Monrabal, F; Rodríguez, J; Toledo, J F
2016-01-01
The measurement of the time of flight of the two 511 keV gammas recorded in coincidence in a PET scanner provides an effective way of reducing the random background and therefore increases the scanner sensitivity, provided that the coincidence resolving time (CRT) of the gammas is sufficiently good. Existing commercial systems based on LYSO crystals, such as the GEMINIS of Philips, reach CRT values of ~600 ps (FWHM). In this paper we present a Monte Carlo investigation of the CRT performance of a PET scanner exploiting the scintillating properties of liquid xenon. We find that an excellent CRT of 60-70 ps (depending on the PDE of the sensor) can be obtained if the scanner is instrumented with silicon photomultipliers (SiPMs) sensitive to the ultraviolet light emitted by xenon. Alternatively, a CRT of 120 ps can be obtained by instrumenting the scanner with (much cheaper) blue-sensitive SiPMs coated with a suitable wavelength shifter. These results show the excellent time of flight capabilities of a PET device b...
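The relation between single-detector timing jitter and CRT can be checked numerically: the CRT is the FWHM of the difference of the two detection times. The 180 ps single-detector jitter used below is a back-calculated assumption chosen to land near the ~600 ps LYSO-class figure, not a number from the paper:

```python
import math
import random

def crt_fwhm(sigma_det_ps, n=50000, seed=11):
    """CRT (FWHM, in ps) of the time difference between two detectors whose
    individual time stamps carry independent Gaussian jitter sigma_det_ps."""
    rng = random.Random(seed)
    diffs = [rng.gauss(0.0, sigma_det_ps) - rng.gauss(0.0, sigma_det_ps)
             for _ in range(n)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    # For a Gaussian, FWHM = 2*sqrt(2*ln 2) * sigma.
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)

# Two detectors with ~180 ps jitter each give a CRT near
# 2.355 * sqrt(2) * 180 ≈ 600 ps.
crt = crt_fwhm(180.0)
```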
Study on the Uncertainty of the Available Time Under Ship Fire Based on Monte Carlo Sampling Method
Institute of Scientific and Technical Information of China (English)
WANG Jin-hui; CHU Guan-quan; LI Kai-yuan
2013-01-01
Available safety egress time under ship fire (SFAT) is critical to ship fire safety assessment, design and emergency rescue. Although SFAT can be determined using fire models such as the two-zone fire model CFAST and the field model FDS, none of these models can address the uncertainties involved in the input parameters. To solve this problem, the current study presents a framework of uncertainty analysis for SFAT. Firstly, a deterministic model estimating SFAT is built. The uncertainties of the input parameters are regarded as random variables with given probability distribution functions. Subsequently, the deterministic SFAT model is coupled with a Monte Carlo sampling method to investigate the uncertainties of the SFAT. Spearman's rank-order correlation coefficient (SRCC) is used to examine the sensitivity of each uncertain input parameter on SFAT. To illustrate the proposed approach in detail, a case study is performed. Based on the proposed approach, the probability density function and cumulative density function of SFAT are obtained. Furthermore, sensitivity analysis with regard to SFAT is also conducted. The results show a high negative correlation between SFAT and the fire growth coefficient, whereas the effect of the other parameters is so weak that they can be neglected.
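The framework (random inputs, a deterministic SFAT model, then SRCC sensitivity) can be sketched with a toy t-squared fire growth model; the deterministic formula and the parameter ranges below are assumptions for illustration, not the paper's model:

```python
import random

def spearman(x, y):
    """Spearman rank-order correlation from the classic rank-difference formula."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1.0))

# Toy t-squared fire: time (s) for the heat release rate to reach a critical
# value Q_crit is sqrt(Q_crit / alpha), with alpha the fire growth coefficient.
rng = random.Random(2)
alphas, sfats = [], []
for _ in range(500):
    alpha = rng.uniform(0.01, 0.2)        # kW/s^2, assumed uncertainty range
    q_crit = rng.uniform(800.0, 1200.0)   # kW, a second uncertain input
    alphas.append(alpha)
    sfats.append((q_crit / alpha) ** 0.5)

rho = spearman(alphas, sfats)
```

As in the paper's conclusion, the available time is strongly negatively rank-correlated with the growth coefficient.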
Monte Carlo simulation of the scanner rSPECT using GAMOS: a Geant4 based-framework
International Nuclear Information System (INIS)
The molecular imaging of cellular processes in vivo using preclinical animal studies and the SPECT technique is one of the main reasons for the design of new devices with high spatial resolution. As an auxiliary tool, Monte Carlo simulation has allowed the characterization and optimization of such medical imaging systems. GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) has proved to be a powerful and effective toolkit for reproducing experimental data obtained with PET (Positron Emission Tomography) systems. This work aims to demonstrate the potential of this new simulation framework to generate reliable simulated data using its SPECT (Single Photon Emission Tomography) applications package. For this purpose, a simulation of a novel installation dedicated to preclinical studies with rodents, the rSPECT, has been performed. The study comprises the collimation, the detection geometries, and the spatial distribution and activity of the source, in correspondence with experimental measurements. Studies were done using 99mTc, a 20% energy window and two collimators: 1. hexagonal parallel holes and 2. pinhole. Performance evaluation of the facility focused on calculating spatial resolution and sensitivity as functions of source-collimator distance. Simulated values were compared with experimental ones. A micro-Derenzo phantom was recreated in order to carry out tomographic reconstruction using the Single Slice ReBinning (SSRB) algorithm. It was concluded that the simulation shows good agreement with experimental data, which proves the feasibility of GAMOS for reproducing SPECT data. (Author)
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems, e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation for LiFePO4 batteries are the flat characteristic of open-circuit voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, the results show the benefits of the proposed method over estimation with an Extended Kalman filter.
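A minimal SIR particle filter (the core of a Sequential Monte Carlo estimator) for SOC can be sketched on synthetic data; the linear OCV curve, noise levels and cell parameters are illustrative assumptions, and the paper's adaptive-control and impedance estimation parts are omitted:

```python
import math
import random

def ocv(soc):
    """Toy open-circuit-voltage curve, deliberately flat as for LiFePO4."""
    return 3.2 + 0.1 * soc

def particle_filter_soc(currents, voltages, dt, capacity, n_p=500, seed=4):
    """Sequential Monte Carlo (SIR) estimate of SOC from noisy voltage data."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 1.0) for _ in range(n_p)]
    for i_k, v_k in zip(currents, voltages):
        # Predict: coulomb counting plus process noise.
        particles = [min(1.0, max(0.0, p - i_k * dt / capacity
                                  + rng.gauss(0.0, 0.005))) for p in particles]
        # Weight by the measurement likelihood (Gaussian voltage noise assumed).
        w = [math.exp(-((v_k - ocv(p)) ** 2) / (2.0 * 0.01 ** 2))
             for p in particles]
        if sum(w) > 0.0:
            particles = rng.choices(particles, weights=w, k=n_p)  # resample
    return sum(particles) / n_p

# Synthetic discharge: constant 1 A from a 2 Ah cell, 1 s steps, known true SOC.
rng = random.Random(9)
dt, cap = 1.0, 7200.0            # capacity in ampere-seconds (2 Ah)
true_soc, currents, voltages = 0.8, [], []
for _ in range(200):
    currents.append(1.0)
    true_soc -= 1.0 * dt / cap
    voltages.append(ocv(true_soc) + rng.gauss(0.0, 0.01))

est = particle_filter_soc(currents, voltages, dt, cap)
```

Because the particles carry the full (possibly non-Gaussian, hysteresis-shaped) posterior, no linearization of the OCV curve is needed, which is the advantage over Kalman-type filters claimed above.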
Kholodtsova, Maria N.; Loschenov, Victor B.; Daul, Christian; Blondel, Walter
2014-05-01
Determining the optical properties of biological tissues in vivo from spectral intensity measurements performed at their surface is still a challenge. Based on the acquired spectroscopic data, the aim is to solve an inverse problem in which the optical parameter values of a forward model are estimated through an optimization procedure on some cost function. In many cases it is an ill-posed problem because of the small number of measurements, errors in the experimental data, and the nature of the forward model output data, which may be affected by statistical noise in the case of Monte Carlo (MC) simulation or by approximated values for short inter-fibre distances (for the Diffusion Equation Approximation (DEA)). In the case of optical biopsy, spatially resolved diffuse reflectance spectroscopy is a simple technique that uses various excitation-to-emission fibre distances to probe tissue at depth. The aim of the present contribution is to study the characteristics of some classically used cost functions and optimization methods (the Levenberg-Marquardt algorithm) and how the optimization reaches the global minimum when using MC and/or DEA approaches. Several smoothing filters and fitting methods were tested on the reflectance curves, I(r), gathered from MC simulations. It was found that smoothing the initial data with locally weighted second-degree polynomial regression and then fitting the data with a double exponential decay function decreases the probability of the inverse algorithm converging to local minima close to the initial first guess.
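The smooth-then-fit pipeline can be sketched on synthetic I(r) data; for brevity a moving average stands in for the local-regression smoothing and a single-exponential fit (linear regression on log I) for the double-exponential fit, so this is a simplified illustration of the idea, not the paper's procedure:

```python
import math
import random

def moving_average(y, w=5):
    """Simple smoothing filter (stand-in for local-regression smoothing)."""
    half = w // 2
    return [sum(y[max(0, i - half): i + half + 1]) /
            len(y[max(0, i - half): i + half + 1]) for i in range(len(y))]

# Synthetic diffuse-reflectance-like decay I(r) = A * exp(-k r) with 3% noise.
rng = random.Random(6)
A, k = 1.0, 2.0
r = [0.05 * i for i in range(1, 61)]
I = [A * math.exp(-k * ri) * (1.0 + rng.gauss(0.0, 0.03)) for ri in r]

I_s = moving_average(I)

# Fit ln I(r) = ln A - k r by ordinary least squares to recover the decay rate.
x, y = r, [math.log(v) for v in I_s]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
k_est = -slope
```

Smoothing before fitting reduces the chance that noise creates spurious local minima, which is the point made in the abstract.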
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, light scattering models of snow take very little account of real snow microstructure. The idealized spherical or other single-shaped particle assumptions in previous snow light scattering models can cause errors in the light scattering modeling of snow and, in turn, in remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, with which the real snow microstructure is considered. The specific surface area of the bicontinuous medium can be derived analytically and accurately. A polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted, and this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is validated and proved accurate, and can be further applied in polarized remote sensing.
Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production
Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S
2004-01-01
High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...
International Nuclear Information System (INIS)
Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optical radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's area to demonstrate the performance of the proposed method on measured data.
International Nuclear Information System (INIS)
The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm^3] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm^3 and was sandwiched in between 0.05×0.05×0.3 cm^3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm^3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10^8 for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal
Energy Technology Data Exchange (ETDEWEB)
Becker, N.M. [Los Alamos National Lab., NM (United States); Vanta, E.B. [Wright Laboratory Armament Directorate, Eglin Air Force Base, FL (United States)
1995-05-01
Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980s at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium that had been transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at the test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments.
Monte Carlo Simulation of RPC-based PET with GEANT4
Weizheng, Zhou; Cheng, Li; Hongfang, Chen; Yongjie, Sun; Tianxiang, Chen
2014-01-01
Resistive Plate Chambers (RPCs) are low-cost charged-particle detectors with good timing resolution and potentially good spatial resolution. Using an RPC as a gamma detector provides an opportunity for application in positron emission tomography (PET). In this work, we use the GEANT4 simulation package to study various methods of improving the detection efficiency of a realistic RPC-based PET model for 511 keV photons: adding more detection units, changing the thickness of each layer, choosing different converters, and using the multi-gap RPC (MRPC) technique. The proper balance among these factors is discussed. It is found that although RPCs made of high-atomic-number materials can reach a higher efficiency, they may contribute to a poorer spatial resolution and a higher background level.
International Nuclear Information System (INIS)
Highlights: ► An automatic computation and control sequence has been developed for MSR neutronics and depletion analyses. ► The method was developed based on a series of stepwise SCALE6/TRITON calculations. ► A detailed reexamination of 30 years of MOSART operation was performed. ► Clean-up scenarios for fission products have a significant impact on MOSART operation. - Abstract: A special sequence based on SCALE6/TRITON was developed to perform fuel cycle analysis of the Molten Salt Actinide Recycler and Transmuter (MOSART), with emphasis on the simulation of its dynamic refueling and salt reprocessing scheme during long-term operation. MOSART is one of the conceptual designs in the molten salt reactor (MSR) category of the Generation-IV systems. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which offers many unique advantages but complicates the modeling and simulation of core behavior with conventional reactor physics codes. The TRITON control module in SCALE6 can perform reliable depletion and decay analysis for many reactor physics applications owing to its problem-dependent cross-section processing and rigorous treatment of neutron transport. In order to accommodate the simulation of on-line refueling and reprocessing scenarios, several in-house programs together with a run script were developed to integrate a series of stepwise TRITON calculations; the result greatly facilitates the neutronics analysis of long-term MSR operation. Using this method, a detailed reexamination of 30 years of MOSART operation was performed to investigate the neutronic characteristics of the core design, the change of fuel salt composition from start-up to equilibrium, the effects of various salt reprocessing scenarios, the performance of actinide transmutation, and the radiotoxicity reduction
Directory of Open Access Journals (Sweden)
S. Maiti
2011-03-01
Full Text Available The Koyna region has been well known for its triggered seismic activity since the hazardous earthquake of M=6.3 that occurred around the Koyna reservoir on 10 December 1967. Understanding the shallow distribution of the resistivity pattern in such a seismically critical area is vital for mapping faults, fractures and lineaments. However, deducing the true resistivity distribution from apparent resistivity data is difficult because of the intrinsic non-linearity in the data structures. Here we present a new technique based on Bayesian neural network (BNN) theory using the Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme. The new method is applied to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding (VES) data acquired around the Koyna region in India. Prior to applying the method to actual resistivity data, it was tested on simulated synthetic signals. In this approach the objective/cost function is optimized following the HMC/MCMC sampling-based algorithm, and each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. The stability of the new inversion technique was tested in the presence of correlated red noise, and the uncertainty of the result was estimated using the BNN code. The estimated true resistivity distribution was compared with singular value decomposition (SVD)-based conventional resistivity inversion results. The HMC-based Bayesian neural network results are in good agreement with the existing model results; in some cases they also provide more detailed and precise results, which appear to be justified by local geological and structural details. The new BNN approach based on HMC is faster and proved to be a promising inversion scheme for interpreting complex and non-linear resistivity problems. The HMC-based BNN results
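The leapfrog discretization of the Hamiltonian differential equations mentioned above can be sketched in a few lines. The following is a generic, illustrative HMC sampler for a one-dimensional standard normal target, not the authors' BNN inversion code; all function names are hypothetical.

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_steps):
    """One leapfrog trajectory approximating Hamiltonian dynamics.

    q, p    : position and momentum (scalars, for simplicity)
    grad_u  : gradient of the potential energy U(q)
    eps     : step size; n_steps : number of leapfrog steps
    """
    p -= 0.5 * eps * grad_u(q)           # initial half-step for momentum
    for _ in range(n_steps - 1):
        q += eps * p                     # full position step
        p -= eps * grad_u(q)             # full momentum step
    q += eps * p
    p -= 0.5 * eps * grad_u(q)           # final half-step for momentum
    return q, p

def hmc_sample(u, grad_u, q0, eps=0.1, n_steps=20, n_samples=2000, seed=42):
    """Draw samples from exp(-U(q)) with a basic HMC sampler."""
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                      # resample momentum
        h_old = u(q) + 0.5 * p * p                   # Hamiltonian before
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
        h_new = u(q_new) + 0.5 * p_new * p_new       # Hamiltonian after
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                                # Metropolis accept
        samples.append(q)
    return samples

# Standard normal target: U(q) = q^2 / 2, grad U = q
samples = hmc_sample(u=lambda q: 0.5 * q * q, grad_u=lambda q: q, q0=0.0)
```

In a real resistivity inversion, `u` would be the misfit-based cost function over model parameters rather than this toy quadratic.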
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2015-03-01
Accurate state monitoring is required for the high performance of battery management systems (BMS) in electric vehicles. By using model-based observation methods, state estimation of a single cell can be achieved with non-linear filtering algorithms, e.g. Kalman filtering and particle filtering. Considering the limited computational capability of a BMS and its real-time constraints, duplicating this approach for a multicell system is very time consuming and can hardly be implemented for a large number of cells in a battery pack. Several possible solutions have been reported in recent years. In this work, an extended two-step estimation approach is studied. First, the mean value of the battery state of charge is determined in the form of a probability density function (PDF). Second, the intrinsic variations in cell SOC and resistance are identified simultaneously in an extended framework using a recursive least squares (RLS) algorithm. The on-board reliability and estimation accuracy of the proposed method are validated by experiment and simulation using an NMC/graphite battery module.
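The recursive least squares (RLS) identification step described above can be illustrated in its simplest scalar form, estimating a cell's ohmic resistance from current/voltage pairs. This is a hedged sketch with made-up data, not the authors' BMS implementation; the parameter values are purely illustrative.

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step for the scalar model y = theta * x.

    theta : current parameter estimate (e.g. cell resistance in ohms)
    P     : estimation covariance
    x, y  : new regressor/measurement pair (e.g. current, voltage drop)
    lam   : forgetting factor (< 1 discounts old data)
    """
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - theta * x)  # correct with the innovation
    P = (P - k * x * P) / lam            # covariance update
    return theta, P

# Toy data: true resistance 0.05 ohm, currents cycling between 1 and 10 A
r_true = 0.05
theta, P = 0.0, 1000.0
for i in range(1, 200):
    current = 1.0 + (i % 10)
    voltage = r_true * current           # noiseless measurement for clarity
    theta, P = rls_update(theta, P, current, voltage)
```

With noisy data the forgetting factor lets the estimate track slow resistance drift, which is the point of using RLS on board.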
Simulation of Ni-63 based nuclear micro battery using Monte Carlo modeling
International Nuclear Information System (INIS)
Radioisotope batteries have an energy density 100-10000 times greater than chemical batteries. Li-ion batteries also have fundamental problems such as a short lifetime and the need for a recharging system. In addition, existing batteries are difficult to operate inside the human body, in national defense armaments or in space environments. With the development of semiconductor processes and materials technology, micro devices have become much more highly integrated. It is expected that, based on new semiconductor technology, the conversion-device efficiency of betavoltaic batteries will be greatly increased. Furthermore, the emitted beta particles cannot penetrate the skin of the human body, so such a battery is safer than a Li battery, which carries a risk of explosion. In other words, interest in radioisotope batteries has increased because they are applicable as power sources for artificial internal organs without recharge or replacement, for micro sensors in arctic and other special environments, for small military equipment and for the space industry. However, there are not enough data on the beta particle fluence from radioisotope sources used in nuclear batteries. The beta particle fluence directly influences battery efficiency and is seriously affected by the radioisotope source thickness because of the self-absorption effect. Therefore, in this article, we present a basic design of a Ni-63 nuclear battery together with simulation data of the beta particle fluence for various thicknesses of the radioisotope source and designs of the battery
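The self-absorption effect described above can be sketched with a toy Monte Carlo: betas emitted at a random depth in the source layer are attenuated exponentially along their path to the surface, so thicker sources emit a smaller fraction of their activity. This is an illustrative model only; the attenuation coefficient below is an assumed placeholder, not a measured value for Ni-63.

```python
import math
import random

def escape_fraction(thickness_um, mu_per_um=0.06, n=20000, seed=1):
    """Toy Monte Carlo estimate of the fraction of beta particles that
    escape a planar source layer (self-absorption model).

    thickness_um : source layer thickness in micrometres
    mu_per_um    : effective attenuation coefficient (illustrative value)
    """
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n):
        depth = rng.uniform(0.0, thickness_um)   # emission depth in the layer
        cos_t = max(rng.random(), 1e-9)          # upward hemisphere direction
        path = depth / cos_t                     # slant path to the surface
        if rng.random() < math.exp(-mu_per_um * path):
            escaped += 1                         # beta survives attenuation
    return escaped / n

thin, thick = escape_fraction(1.0), escape_fraction(10.0)
```

The decreasing escape fraction with thickness is why, past a certain source thickness, adding more radioisotope yields little extra fluence at the conversion device.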
International Nuclear Information System (INIS)
A theoretically based analytical energy-range relationship has been developed and calibrated against well-established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between four and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly because theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as -2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)
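Calibrating an analytical energy-range formula against pooled data, as described above, amounts to a least-squares fit. The sketch below fits a generic two-term form R(E) = aE + bE² via the normal equations; this form and its coefficients are purely illustrative and are not the relationship developed in the paper.

```python
def fit_energy_range(energies, ranges):
    """Linear least-squares fit of the illustrative form R(E) = a*E + b*E**2.

    Solves the 2x2 normal equations by Cramer's rule; returns (a, b).
    """
    s11 = sum(e * e for e in energies)          # sum E^2
    s12 = sum(e ** 3 for e in energies)         # sum E^3
    s22 = sum(e ** 4 for e in energies)         # sum E^4
    t1 = sum(e * r for e, r in zip(energies, ranges))
    t2 = sum(e * e * r for e, r in zip(energies, ranges))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# Synthetic "pooled data" generated from a = 0.5 cm/MeV, b = -0.002
E = [5, 10, 20, 30, 40, 50]
R = [0.5 * e - 0.002 * e * e for e in E]
a, b = fit_energy_range(E, R)
```

In practice the pooled experimental and Monte Carlo points carry different stated accuracies, so a weighted fit would be used rather than this unweighted one.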
International Nuclear Information System (INIS)
Uncertainty and sensitivity analyses with respect to nuclear data are performed with depletion calculations for BWR and PWR fuel assemblies specified in the framework of the UAM-LWR Benchmark Phase II. For this, the GRS sampling based tool XSUSA is employed together with the TRITON depletion sequences from the SCALE 6.1 code system. Uncertainties for multiplication factors and nuclide inventories are determined, as well as the main contributors to these result uncertainties by calculating importance indicators. The corresponding neutron transport calculations are performed with the deterministic discrete-ordinates code NEWT. In addition, the Monte Carlo code KENO in multi-group mode is used to demonstrate a method with which the number of neutron histories per calculation run can be substantially reduced as compared to that in a calculation for the nominal case without uncertainties, while uncertainties and sensitivities are obtained with almost the same accuracy.
International Nuclear Information System (INIS)
In this paper the civilian exploitation of depleted uranium is briefly reviewed. Different scenarios relevant to its use are discussed in terms of radiation exposure for workers and the general public. The case of the aircraft accident which occurred in Amsterdam in 1992, involving a fire, is discussed in terms of the radiological exposure to bystanders. All information given has been obtained on the basis of an extensive literature search and is not based on measurements performed at the Institute for Transuranium Elements
International Nuclear Information System (INIS)
Radiation therapy treatment planning requires accurate determination of absorbed dose in the patient. Monte Carlo simulation is the most accurate method for solving the transport problem of particles in matter. This thesis is the first study dealing with the validation of the Monte Carlo simulation platform GATE (GEANT4 Application for Tomographic Emission), based on GEANT4 (Geometry And Tracking) libraries, for the computation of absorbed dose deposited by electron beams. This thesis aims at demonstrating that GATE/GEANT4 calculations are able to reach treatment planning requirements in situations where analytical algorithms are not satisfactory. The goal is to prove that GATE/GEANT4 is useful for treatment planning using electrons and competes with well validated Monte Carlo codes. This is demonstrated by the simulations with GATE/GEANT4 of realistic electron beams and electron sources used for external radiation therapy or targeted radiation therapy. The computed absorbed dose distributions are in agreement with experimental measurements and/or calculations from other Monte Carlo codes. Furthermore, guidelines are proposed to fix the physics parameters of the GATE/GEANT4 simulations in order to ensure the accuracy of absorbed dose calculations according to radiation therapy requirements. (author)
International Nuclear Information System (INIS)
The effective multiplication factor Keff of a nuclear reactor is calculated by the Monte Carlo technique, a source iteration procedure based on a fixed number of fission points per generation. In this paper, in order to reduce the statistical errors in the estimated Keff value by accumulating a large number of neutron histories in a given computing time, a parallel computing technique is applied using the PPA (Parallel Processor Array) system located in the ''General Purpose Simulator Facility'' of Hokkaido University. The architecture of this parallel computing machine permits a parallel Monte Carlo calculation, such as the Monte Carlo game required for estimating the Keff value, to be carried out in each processor independently of, and in parallel with, the other processors. For this purpose, we prepared software that maximizes the computing capability of the PPA system given its unique architecture and its limitations, such as the small storage memory of each processor. Verification studies using this software have confirmed that the Monte Carlo technique with a parallel computing machine is very useful for three-dimensional neutron transport problems such as those dealt with in this paper. (author)
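The scheme described above is embarrassingly parallel: each processor runs an independent Monte Carlo game, and the per-processor estimates are pooled to shrink the statistical error. The sketch below mimics this with independent seeded batches pooled across worker threads; the "multiplication" model is a toy Bernoulli stand-in, not real neutron transport, and all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def batch_keff(seed, n_neutrons=5000, k_true=1.02):
    """One independent batch: each source neutron yields on average
    k_true fission neutrons (toy multiplication model, not transport)."""
    rng = random.Random(seed)       # independent stream per "processor"
    produced = 0
    for _ in range(n_neutrons):
        # 1 neutron, plus an extra one with probability (k_true - 1)
        produced += 1 + (1 if rng.random() < (k_true - 1.0) else 0)
    return produced / n_neutrons

# Run independent batches in parallel workers, then pool the estimates
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(batch_keff, range(8)))
keff = sum(estimates) / len(estimates)
```

Because the batches are statistically independent, the pooled standard error falls as 1/sqrt(number of batches), which is exactly the benefit the PPA system provides for a fixed wall-clock time.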
Specification for the VERA Depletion Benchmark Suite
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
CASL-X-2015-1014-000 Consortium for Advanced Simulation of LWRs EXECUTIVE SUMMARY The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect method of validation is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.
Bozkurt, Ahmet
The distribution of absorbed doses in the body can be computationally determined using mathematical or tomographic representations of human anatomy. A whole-body model was developed from the color images of the National Library of Medicine's Visible Human Project® for simulating the transport of radiation in the human body. The model, called Visible Photographic Man (VIP-Man), has sixty-one organs and tissues represented in the Monte Carlo code MCNPX at 4-mm voxel resolution. Organ dose calculations for external neutron sources were carried out using VIP-Man and MCNPX to determine a new set of dose conversion coefficients for use in radiation protection. Monoenergetic neutron beams between 10⁻⁹ MeV and 10 GeV were studied under six different irradiation geometries: anterior-posterior, posterior-anterior, right lateral, left lateral, rotational and isotropic. The results for absorbed doses in twenty-four organs and the effective doses based on twelve critical organs are presented in tabular form. A comprehensive comparison of the results with those from the mathematical models shows discrepancies that can be attributed to variations in body modeling (size, location and shape of the individual organs), the use of different nuclear datasets or models to derive the reaction cross sections, and the use of different transport packages for simulating radiation effects. The organ dose results based on the realistic VIP-Man body model allow the existing neutron radiation protection dosimetry to be re-evaluated and improved.
Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation
International Nuclear Information System (INIS)
The main thrust for this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are the lower stopping power and photo-fraction which affect both sensitivity and spatial resolution. However, in 3D PET imaging where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels
Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio
2012-03-01
The purpose of this study was to present a Monte Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, based on MC simulations. For comparison, the same cases were also planned with a PINNACLE TPS using conventional inverse intensity-modulated radiation therapy (IMRT). Normal tissue complication probabilities for pericarditis, pneumonitis and breast fibrosis were calculated. CARMEN plans showed planning target volume (PTV) coverage similar to that of conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for CARMEN plans. The ipsilateral breast volume receiving 50% Dp and 100% Dp was on average 1.4-1.7 times lower for CARMEN plans. Skin and whole-body low-dose volumes were also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate both photon and electron beams favors the clinical implementation of APBI with the highest efficiency.
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
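The factorial-analysis part of the method above decomposes output variance into main effects of the parameters. A minimal sketch for two parameters on a three-level full factorial is given below; the "hydrological model" is a made-up additive toy, and the function names are hypothetical, not the MCMC-MFA implementation.

```python
def main_effect_variance(response, levels=3):
    """Variance decomposition of a two-parameter full factorial.

    response[i][j] is the model output at level i of parameter 1 and
    level j of parameter 2. Returns the fraction of total output
    variance explained by each parameter's main effect.
    """
    flat = [response[i][j] for i in range(levels) for j in range(levels)]
    grand = sum(flat) / len(flat)
    total_var = sum((y - grand) ** 2 for y in flat) / len(flat)
    row_means = [sum(response[i]) / levels for i in range(levels)]
    col_means = [sum(response[i][j] for i in range(levels)) / levels
                 for j in range(levels)]
    v1 = sum((m - grand) ** 2 for m in row_means) / levels   # main effect p1
    v2 = sum((m - grand) ** 2 for m in col_means) / levels   # main effect p2
    return v1 / total_var, v2 / total_var

# Toy model: strongly sensitive to parameter 1, weakly to parameter 2
resp = [[10 * i + j for j in range(3)] for i in range(3)]
f1, f2 = main_effect_variance(resp)
```

For a purely additive model like this toy, the two fractions sum to one; any shortfall from one in a real model measures the interaction effects the MFA is designed to reveal.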
Zhao, Huijuan; Zhang, Shunqi; Wang, Zhaoxia; Miao, Hui; Du, Zhen; Jiang, Jingying
2008-02-01
This article addresses optical parameter reconstruction for frequency-domain measurement of near-infrared diffused light. To mimic the cervix, a cylindrical model with a hole in the middle is used in the simulation and experiments. Given the structure of the cervix, Monte Carlo simulation is adopted for describing photon migration in tissue, and perturbation Monte Carlo is used for reconstructing the optical properties of the cervix. The difficulties in reconstructing cervical optical properties from frequency-domain measurements are the description of the tissue boundary, the expression of the frequency-domain signal, and the development of a rapid reconstruction method for clinical use. To obtain the frequency-domain signal in the Monte Carlo simulation, a discrete Fourier transform of the time-domain photon migration history is employed. By combining perturbation Monte Carlo simulation with the LM optimization technique, a rapid reconstruction algorithm is constructed in which only one Monte Carlo simulation is needed. The reconstruction method is validated by simulation and by experiments on a solid phantom. Simulation results show that the inaccuracy in the reconstructed absorption coefficient is less than 3% over a certain range of optical properties. The algorithm also proves robust to the initial guess of the optical properties and to noise. Experimental results showed that the absorption coefficient can be reconstructed with an inaccuracy of less than 10%. The absorption coefficient reconstruction for one set of measurement data can be completed within one minute.
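The discrete Fourier transform of the time-domain photon history mentioned above can be sketched directly: summing each recorded photon's weight with a phase factor at the modulation frequency yields the frequency-domain amplitude and phase. The toy arrival times and weights below are invented for illustration, not simulation output.

```python
import cmath
import math

def frequency_domain_signal(times_ns, weights, f_mhz):
    """Convert a time-domain photon-arrival history into a frequency-domain
    signal: (amplitude, phase delay) at modulation frequency f_mhz."""
    omega = 2.0 * math.pi * f_mhz * 1e6              # rad/s
    s = sum(w * cmath.exp(-1j * omega * t * 1e-9)    # t recorded in ns
            for t, w in zip(times_ns, weights))
    return abs(s), -cmath.phase(s)                   # amplitude, phase delay

# Toy histories: detector B sees the same photons 1 ns later than detector A
t_a = [1.0, 1.2, 1.5, 2.0]
t_b = [t + 1.0 for t in t_a]
w = [1.0, 0.8, 0.6, 0.4]
amp_a, ph_a = frequency_domain_signal(t_a, w, f_mhz=100)
amp_b, ph_b = frequency_domain_signal(t_b, w, f_mhz=100)
```

A uniform 1 ns delay leaves the amplitude unchanged and adds a phase shift of 2π × 100 MHz × 1 ns ≈ 0.63 rad, which is the kind of phase information the frequency-domain reconstruction exploits.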
Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.
2013-01-01
Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as
Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective to account for beam rotational symmetry, but it is not suitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, the different types of sub-sources are sampled separately. Source sampling and particle transport are iterated so that the particles being sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of the PSRs, determined by solving a quadratic minimization ...
International Nuclear Information System (INIS)
The contributon Monte Carlo method is based on a new recipe for calculating target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Luo, Qingming
2015-10-01
The excessive time required by fluorescence diffuse optical tomography (fDOT) image reconstruction based on path-history fluorescence Monte Carlo model is its primary limiting factor. Herein, we present a method that accelerates fDOT image reconstruction. We employ three-level parallel architecture including multiple nodes in cluster, multiple cores in central processing unit (CPU), and multiple streaming multiprocessors in graphics processing unit (GPU). Different GPU memories are selectively used, the data-writing time is effectively eliminated, and the data transport per iteration is minimized. Simulation experiments demonstrated that this method can utilize general-purpose computing platforms to efficiently implement and accelerate fDOT image reconstruction, thus providing a practical means of using path-history-based fluorescence Monte Carlo model for fDOT imaging. PMID:26480115
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
International Nuclear Information System (INIS)
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi
2014-11-01
The purpose of this study was to examine the dose distribution in a skull base tumor and surrounding critical structures in response to high-dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual-resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV)=8.4 cm3] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm3 and was sandwiched between 0.05×0.05×0.3 cm3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm3 single-resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10^8 for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose
International Nuclear Information System (INIS)
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out for economic gain. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading errors, and lubrication. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box
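The Latin Hypercube Sampling step can be sketched as follows; this is a generic illustration of LHS on the unit hypercube, not the authors' implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw an n_samples x n_dims Latin Hypercube sample on [0, 1).

    Each dimension is split into n_samples equal strata; exactly one
    point falls in each stratum, and the strata are paired randomly
    across dimensions.
    """
    rng = np.random.default_rng(rng)
    # one uniform draw inside each stratum of each dimension
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # shuffle the stratum order independently per dimension
    for d in range(n_dims):
        rng.shuffle(strata[:, d])
    return strata

# e.g. 100 combinations of 3 process parameters on [0, 1),
# to be rescaled to the physical parameter ranges afterwards
sample = latin_hypercube(100, 3, rng=42)
```

Compared with plain Monte Carlo sampling, this stratification covers each marginal range evenly, which is why far fewer FEM runs are needed for a stable sensitivity estimate.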
International Nuclear Information System (INIS)
This paper presents a novel decision-support tool for assessing future generation portfolios in an increasingly uncertain electricity industry. The tool combines optimal generation mix concepts with Monte Carlo simulation and portfolio analysis techniques to determine expected overall industry costs, associated cost uncertainty, and expected CO2 emissions for different generation portfolio mixes. The tool can incorporate complex and correlated probability distributions for estimated future fossil-fuel costs, carbon prices, plant investment costs, and demand, including price elasticity impacts. The intent of this tool is to facilitate risk-weighted generation investment and associated policy decision-making given uncertainties facing the electricity industry. Applications of this tool are demonstrated through a case study of an electricity industry with coal, CCGT, and OCGT facing future uncertainties. Results highlight some significant generation investment challenges, including the impacts of uncertain and correlated carbon and fossil-fuel prices, the role of future demand changes in response to electricity prices, and the impact of construction cost uncertainties on capital-intensive generation. The tool can incorporate virtually any type of input probability distribution, and support sophisticated risk assessments of different portfolios, including downside economic risks. It can also assess portfolios against multi-criterion objectives such as greenhouse emissions as well as overall industry costs. - Highlights: ► Presents a decision-support tool to assist generation investment and policy making under uncertainty. ► Generation portfolios are assessed based on their expected costs, risks, and CO2 emissions. ► There is a tradeoff among the expected costs, risks, and CO2 emissions of generation portfolios. ► Investment challenges include the economic impact of uncertainties and the effect of price elasticity. ► CO2 emissions reduction depends on the mix of
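A minimal sketch of the Monte Carlo core of such a tool, propagating correlated fuel and carbon price uncertainty into portfolio cost distributions, is shown below. All prices, correlations, and cost coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000  # Monte Carlo draws

# Correlated gas and carbon prices: lognormal marginals coupled
# through correlated normal draws (all numbers are illustrative).
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], corr, size=n)
gas_price = np.exp(2.0 + 0.3 * z[:, 0])      # $/GJ
co2_price = np.exp(3.0 + 0.5 * z[:, 1])      # $/tCO2

def portfolio_cost(share_coal, share_ccgt):
    """Per-MWh cost of a stylised two-technology portfolio."""
    coal = 20.0 + 0.9 * co2_price                # fuel+O&M plus 0.9 tCO2/MWh
    ccgt = 3.5 * gas_price + 0.4 * co2_price     # heat rate ~3.5 GJ/MWh
    return share_coal * coal + share_ccgt * ccgt

# expected cost and cost uncertainty for three candidate mixes
for mix in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
    cost = portfolio_cost(*mix)
    print(mix, round(cost.mean(), 1), round(cost.std(), 1))
```

The full cost distribution per mix is retained, so downside risk measures (e.g. a high percentile of cost) can be read off directly rather than assuming normality.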
Energy Technology Data Exchange (ETDEWEB)
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives CEA, Service d' Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)
2013-07-01
For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could get rid of the Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
International Nuclear Information System (INIS)
Antarctic ozone depletion is most severe during the southern hemisphere spring, when the local reduction in the column amount may be as much as 50 percent. The extent to which this ozone-poor air contributes to the observed global ozone loss is a matter of debate, but there is some evidence that fragments of the 'ozone hole' can reach lower latitudes following its breakup in summer. Satellite data show the seasonal evolution of the ozone hole. A new dimension has been added to Antarctic ozone depletion with the advent of large volcanic eruptions such as that of Mount Pinatubo in 1991. (author). 5 refs., 1 fig
Stratospheric ozone depletion.
Rowland, F Sherwood
2006-05-29
Solar ultraviolet radiation creates an ozone layer in the atmosphere which in turn completely absorbs the most energetic fraction of this radiation. This process both warms the air, creating the stratosphere between 15 and 50 km altitude, and protects the biological activities at the Earth's surface from this damaging radiation. In the last half-century, the chemical mechanisms operating within the ozone layer have been shown to include very efficient catalytic chain reactions involving the chemical species HO, HO2, NO, NO2, Cl and ClO. The NOX and ClOX chains involve the emission at Earth's surface of stable molecules in very low concentration (N2O, CCl2F2, CCl3F, etc.) which wander in the atmosphere for as long as a century before absorbing ultraviolet radiation and decomposing to create NO and Cl in the middle of the stratospheric ozone layer. The growing emissions of synthetic chlorofluorocarbon molecules cause a significant diminution in the ozone content of the stratosphere, with the result that more solar ultraviolet-B radiation (290-320 nm wavelength) reaches the surface. This ozone loss occurs in the temperate zone latitudes in all seasons, and especially drastically since the early 1980s in the south polar springtime-the 'Antarctic ozone hole'. The chemical reactions causing this ozone depletion are primarily based on atomic Cl and ClO, the product of its reaction with ozone. The further manufacture of chlorofluorocarbons has been banned by the 1992 revisions of the 1987 Montreal Protocol of the United Nations. Atmospheric measurements have confirmed that the Protocol has been very successful in reducing further emissions of these molecules. Recovery of the stratosphere to the ozone conditions of the 1950s will occur slowly over the rest of the twenty-first century because of the long lifetime of the precursor molecules. PMID:16627294
International Nuclear Information System (INIS)
Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by scanning procedures was also discussed, in order to minimize the biological damage generated by radiation exposure during PET/CT scanning. A small animal PET/CT system was modeled using Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, including the bilinear scaling method, the dual-energy method, and a hybrid method that combines kVp conversion with the dual-energy method, were investigated comparatively by assessing the accuracy of the estimated linear attenuation coefficients at 511 keV and the bias introduced into PET quantification results by CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results follows a similar trend to that of the estimated linear attenuation coefficients, although the differences between the three methods are more pronounced in the estimation of the linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29 and 45.58 mGy for the 50-kVp scan and between 6.61 and 39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
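The bilinear scaling method mentioned above converts CT numbers to linear attenuation coefficients at 511 keV using two linear segments that meet at water (0 HU). A minimal sketch; the 511 keV coefficients for water and cortical bone are assumed round values, not those used in the study:

```python
def mu_511(hu,
           mu_water=0.096,    # cm^-1 at 511 keV (assumed)
           mu_bone=0.172):    # cm^-1 at 511 keV, cortical bone (assumed)
    """Bilinear scaling of a CT number (HU) to the linear attenuation
    coefficient at 511 keV: an air/water segment below 0 HU and a
    water/bone segment above, joined continuously at water (0 HU)."""
    if hu <= 0:
        # air/water mixture: linear from ~0 at -1000 HU up to water
        return mu_water * (1.0 + hu / 1000.0)
    # water/bone mixture; 1000 HU is taken as cortical bone here
    return mu_water + hu * (mu_bone - mu_water) / 1000.0

# e.g. mu_511(-1000) = 0.0 (air), mu_511(0) = 0.096 (water)
for hu in (-1000, 0, 500, 1000):
    print(hu, round(mu_511(hu), 4))
```

The dual-energy and hybrid methods discussed in the abstract replace this fixed two-segment mapping with information from two tube voltages, which is what improves the accuracy for bone-like mixtures.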
Análisis de uso de las bases de datos de la biblioteca de la Universidad Carlos III de Madrid
Directory of Open Access Journals (Sweden)
Suárez Balseiro, Carlos
2001-03-01
In this paper we identify and show, through the application of statistical techniques and unidimensional and multidimensional indicators, some of the characteristics revealed by the user community of the Madrid Carlos III University in their use of databases. The information for this study was obtained from an analysis of the accesses made through the Access to Databases Service of the Carlos III University library, paying special attention to the behaviour of the teaching departments during the 1995-1998 period. The evolution in the use of on-line databases is assessed, and the indicators that typify the trends and patterns of use by these users are analysed. Some criteria concerning the interaction between the user, new technologies, and electronic sources of information in university libraries are also presented.
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
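The joint channel-and-symbol estimation described above can be illustrated with a Gibbs sampler. The sketch below is a deliberately reduced single-antenna BPSK toy model (an assumption for illustration, not the paper's MIMO algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-antenna analogue of joint estimation:
# y_t = h * x_t + n_t with BPSK symbols x_t = +/-1.
h_true, sigma = 0.8, 0.5
x_true = rng.choice([-1.0, 1.0], size=40)
y = h_true * x_true + sigma * rng.normal(size=40)

# Gibbs sampling: alternate between the channel coefficient (Gaussian
# conditional, flat prior) and the symbols (independent conditionals).
h, x = 1.0, np.ones_like(y)
h_samples = []
for it in range(2000):
    # p(h | x, y) is Gaussian: mean x.y / x.x, std sigma / sqrt(x.x)
    h = rng.normal(np.dot(x, y) / np.dot(x, x),
                   sigma / np.sqrt(np.dot(x, x)))
    # p(x_t = +1 | h, y_t) from the log-likelihood ratio
    llr = 2.0 * h * y / sigma**2
    x = np.where(rng.random(len(y)) < 1.0 / (1.0 + np.exp(-llr)), 1.0, -1.0)
    if it >= 500:  # discard burn-in
        h_samples.append(h)

# magnitude of the estimate should approach |h_true| = 0.8
print(round(abs(np.mean(h_samples)), 2))
```

Note that (h, x) and (-h, -x) explain the data equally well, so without pilot symbols only the magnitude of the channel estimate is identifiable; practical schemes resolve the sign ambiguity with a short training sequence.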
International Nuclear Information System (INIS)
In a tokamak-type DT nuclear fusion reactor, there are various types of slits and ducts in the blanket and the vacuum vessel. The helium production at the rewelding locations of the blanket and the vacuum vessel and the nuclear properties of the superconducting TF coil, e.g. the nuclear heating rate in the coil winding pack, are enhanced by radiation streaming through these slits and ducts, and they are a critical concern in the shielding design. The decay gamma-ray dose rate around a duct penetrating the blanket and the vacuum vessel is also enhanced by radiation streaming through the duct, which is likewise a critical concern from the viewpoint of human access to the cryostat during maintenance. To evaluate these nuclear properties with good accuracy, a three-dimensional Monte Carlo calculation is required, but it demands a long calculation time. Therefore, the development of an effective, simple design evaluation method for radiation streaming is substantially important. This study aims to establish a systematic evaluation method for the nuclear properties of the blanket, the vacuum vessel, and the Toroidal Field (TF) coil, taking into account the radiation streaming through various types of slits and ducts, based on three-dimensional Monte Carlo calculations using the MCNP code, and for the decay gamma-ray dose rates around the penetrating ducts. The present thesis describes three topics in five chapters as follows: 1) In Chapter 2, the results calculated by the Monte Carlo code, MCNP, are compared with those of the Sn code, DOT3.5, for radiation streaming in a tokamak-type nuclear fusion reactor, in order to validate the results of the Sn calculation. From this comparison, the uncertainties of the Sn calculation results arising from the ray effect and from the approximation of the geometry are investigated to determine whether a two-dimensional Sn calculation can be applied instead of the Monte Carlo calculation. Through the study, it can be concluded that the
Willpower depletion and framing effects
de Haan, Thomas; van Veldhuizen, Roel
2013-01-01
We investigate whether depleting people's cognitive resources (or willpower) affects the degree to which they are susceptible to framing effects. Recent research in social psychology and economics has suggested that willpower is a resource that can be temporarily depleted and that a depleted level of willpower is associated with self-control problems in a variety of contexts. In this study, we extend the willpower depletion paradigm to framing effects and argue that willpower depletion should...
Gontcharova, Viktoria; Youn, Eunseog; Wolcott, Randall D; Hollister, Emily B; Gentry, Terry J; Dowd, Scot E
2010-01-01
The existing chimera detection programs are not specifically designed for "next generation" sequence data. Technologies like Roche 454 FLX and Titanium have been adapted over the past years, especially with the introduction of bacterial tag-encoded FLX/Titanium amplicon pyrosequencing methodologies, to produce over one million 250-600 bp 16S rRNA gene reads that need to be depleted of chimeras prior to downstream analysis. To meet the needs of basic scientists who are venturing into high-throughput microbial diversity studies such as those based upon pyrosequencing, and specifically to provide a solution for Windows users, the B2C2 software is designed to accept files containing large multi-FASTA formatted sequences and screen them for possible chimeras in a high-throughput fashion. The graphical user interface (GUI) is also able to batch process multiple files. When compared to popular chimera screening software, B2C2 performed as well as or better while dramatically decreasing the time required to generate and screen results. Even average computer users are able to interact with the Windows .Net GUI-based application and define the stringency to which the analysis should be done. B2C2 may be downloaded from http://www.researchandtesting.com/B2C2. PMID:21339894
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulations were performed on three Schiff bases, namely 4-(4-bromophenyl)-N′-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N′-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC), and 4-(4-bromophenyl)-N′-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA), and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configurations and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
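Parameters such as χ, η, σ, and ω are typically derived from the frontier orbital energies through Koopmans-type relations (I ≈ -EHOMO, A ≈ -ELUMO). A small sketch with hypothetical orbital energies, not values from the paper:

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Conceptual-DFT descriptors from frontier orbital energies (eV),
    using the Koopmans-type approximations I = -E_HOMO, A = -E_LUMO."""
    I, A = -e_homo, -e_lumo
    gap = e_lumo - e_homo               # energy gap, delta-E
    chi = (I + A) / 2                   # absolute electronegativity
    eta = (I - A) / 2                   # hardness
    sigma = 1.0 / eta                   # softness
    omega = chi**2 / (2 * eta)          # electrophilicity index
    return {"gap": gap, "chi": chi, "eta": eta,
            "sigma": sigma, "omega": omega}

# hypothetical orbital energies in eV (not the paper's results)
d = reactivity_descriptors(e_homo=-6.2, e_lumo=-2.1)
print({k: round(v, 3) for k, v in d.items()})
```

A smaller gap and hardness (larger softness) are usually read as signs of a more reactive, and hence potentially better adsorbing, inhibitor molecule.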
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration, and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
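The interval-based Monte Carlo ranking idea can be sketched as follows: sample each interval-valued criterion uniformly, score every alternative with a weighted sum, and count how often each alternative comes out best. All alternatives, intervals, and weights below are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical alternatives, three criteria each given as (low, high)
# intervals: total cost, concentration, health risk. Smaller is better.
alternatives = {
    "action A": [(100, 140), (0.8, 1.4), (0.02, 0.06)],
    "action B": [(120, 130), (0.6, 1.0), (0.03, 0.05)],
    "action C": [(90, 160), (0.9, 1.6), (0.01, 0.07)],
}
weights = np.array([0.5, 0.3, 0.2])
names = list(alternatives)
# a common scale per criterion makes the weighted sum dimensionless
scale = np.max([[hi for _, hi in alternatives[k]] for k in names], axis=0)

n_draws = 10_000
wins = dict.fromkeys(names, 0)
for _ in range(n_draws):
    scores = [
        weights @ (np.array([rng.uniform(lo, hi)
                             for lo, hi in alternatives[k]]) / scale)
        for k in names
    ]
    wins[names[int(np.argmin(scores))]] += 1

# empirical probability that each action is the most desirable
print({k: round(v / n_draws, 3) for k, v in wins.items()})
```

Instead of a single ranking, this yields the probability that each alternative is best given the interval uncertainty, which is the kind of output a decision maker can weigh against risk tolerance.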
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method was validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of the scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts were substantially reduced, and the average HU error of a region-of-interest was reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
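Step 4 of the workflow, interpolating scatter estimated at sparse angles to all projection angles, can be sketched as below; the scatter model and array sizes are invented for illustration:

```python
import numpy as np

# Scatter simulated at sparse gantry angles is interpolated
# (periodically in angle) to every projection angle.
sparse_angles = np.linspace(0.0, 2 * np.pi, 31, endpoint=False)
all_angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)

def mc_scatter(angle, n_pix=64):
    # stand-in for a Monte Carlo scatter estimate: a smooth,
    # low-frequency signal across one detector row
    pix = np.linspace(-1.0, 1.0, n_pix)
    return (1.0 + 0.3 * np.cos(angle)) * np.exp(-pix**2)

sparse = np.array([mc_scatter(a) for a in sparse_angles])   # (31, 64)

# periodic linear interpolation, one detector pixel at a time
full = np.empty((len(all_angles), sparse.shape[1]))
for j in range(sparse.shape[1]):
    full[:, j] = np.interp(all_angles, sparse_angles, sparse[:, j],
                           period=2 * np.pi)

# the corrected projection is then raw_projection - full[angle_index]
```

This works because scatter varies slowly with gantry angle, which is exactly the observation behind the 31-angle sufficiency result quoted in the abstract.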
International Nuclear Information System (INIS)
Through Monte Carlo (MC) simulation of the 6 and 10 MV flattening-filter-free (FFF) beams from the Varian TrueBeam accelerator, this study aims to find the best incident electron distribution for further studying the small-field characteristics of these beams. By incorporating Varian's training materials on the geometry and material parameters of the TrueBeam Linac head, the 6 and 10 MV FFF beams were modelled using the BEAMnrc and DOSXYZnrc codes, where the percentage depth dose (PDD) and off-axis ratio (OAR) curves of fields ranging from 4 × 4 to 40 × 40 cm2 were simulated for both energies by adjusting the incident beam energy, radial intensity distribution and angular spread, respectively. The beam quality and relative output factor (ROF) were calculated. The simulations and measurements were compared using the Gamma analysis method provided by the Verisoft program (PTW, Freiburg, Germany), on the basis of which the optimal MC model input parameters were selected and then used to investigate the beam characteristics of small fields. The full width at half maximum (FWHM), mono-energetic energy and angular spread of the resulting incident Gaussian radial intensity electron distribution were 0.75 mm, 6.1 MeV and 0.9° for the nominal 6 MV FFF beam, and 0.7 mm, 10.8 MeV and 0.3° for the nominal 10 MV FFF beam, respectively. The simulation was mostly comparable to the measurement. Gamma criteria of 1 mm/1 % (local dose) can be met by all PDDs of fields larger than 1 × 1 cm2, and by all OARs of fields no larger than 20 × 20 cm2; otherwise, criteria of 1 mm/2 % can be fulfilled. Our MC-simulated ROFs agreed well with the measured ROFs for the various field sizes (discrepancies were less than 1 %), except for the 1 × 1 cm2 field. The MC simulation agrees well with the measurement, and the proposed model parameters can be used clinically for further dosimetric studies of the 6 and 10 MV FFF beams
Cassola, V. F.; Kramer, R.; Brayner, C.; Khoury, H. J.
2010-08-01
Does the posture of a patient have an effect on the organ and tissue absorbed doses caused by x-ray examinations? This study aims to find the answer to this question, based on Monte Carlo (MC) simulations of commonly performed x-ray examinations using adult phantoms modelled to represent humans in the standing as well as the supine posture. The recently published FASH (female adult mesh) and MASH (male adult mesh) phantoms have the standing posture. In a first step, both phantoms were updated with respect to their anatomy: glandular tissue was separated from adipose tissue in the breasts, visceral fat was separated from subcutaneous fat, cartilage was segmented in the ears, nose and around the thyroid, and the mass of the right lung is now 15% greater than that of the left lung. The updated versions are called FASH2_sta and MASH2_sta (sta = standing). Taking into account the gravitational effects on organ position and fat distribution, supine versions of the FASH2 and the MASH2 phantoms have been developed in this study and called FASH2_sup and MASH2_sup. MC simulations of external whole-body exposure to monoenergetic photons and partial-body exposure to x-rays have been made with the standing and supine FASH2 and MASH2 phantoms. For external whole-body exposure for AP and PA projection with photon energies above 30 keV, the effective dose did not change by more than 5% when the posture changed from standing to supine or vice versa. Apart from that, the supine posture is quite rare in occupational radiation protection from whole-body exposure. However, in x-ray diagnosis the supine posture is frequently used for patients undergoing examinations. Changes in organ absorbed doses of up to 60% were found for simulations of chest and abdomen radiographs if the posture changed from standing to supine or vice versa. A further increase of differences between posture-specific organ and tissue absorbed doses with increasing whole-body mass is to be expected.
Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan
2014-08-01
The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted, and back-illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s; this is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics, and quantum efficiency. The X-ray images were also used for the analysis of charge diffusion at different substrate voltages (VSUB). Regarding the cosmetics, in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles at the external edges. The origin of this pattern may be related to the assembly process. A second one appears in the dark images and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electric fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be a defective coating process.
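The photon transfer curve (PTC) test listed above plots temporal noise variance against mean signal; in the shot-noise-dominated regime the slope equals the inverse of the conversion gain, which is how the gain in e-/ADU is usually extracted. A simulated sketch, with assumed gain and noise values rather than the measured ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated flat-field pairs at increasing exposure: Poisson photon
# shot noise plus Gaussian read noise, through an assumed gain.
gain_true = 2.5       # e-/ADU (assumed)
read_noise = 5.0      # e- rms (assumed)
levels_e = [200.0, 1000.0, 5000.0, 20000.0, 50000.0]

means, variances = [], []
for ne in levels_e:
    # the difference of two frames cancels fixed-pattern noise;
    # var(diff)/2 estimates the temporal noise of a single frame
    f1 = (rng.poisson(ne, 100_000) + rng.normal(0, read_noise, 100_000)) / gain_true
    f2 = (rng.poisson(ne, 100_000) + rng.normal(0, read_noise, 100_000)) / gain_true
    means.append((f1.mean() + f2.mean()) / 2)
    variances.append(np.var(f1 - f2) / 2)

# variance = mean/gain + (read_noise/gain)^2, so the PTC slope is 1/gain
slope = np.polyfit(means, variances, 1)[0]
print(round(1 / slope, 2))  # recovered gain, close to the assumed 2.5
```

The intercept of the same fit gives the squared read noise in ADU, so a single PTC run yields both of the basic noise parameters listed in the test suite above.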
Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh
2014-01-01
Materials of high atomic number, such as gold, provide a high probability of photon interaction by the photoelectric effect during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies have demonstrated that the preferential accumulation of gold nanoparticles (GNPs) within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the time required for tumor irradiation decreases, as the estimated adequate radiation dose for the tumor is provided following this method. The dose delivered to healthy tissue is reduced when the time of irradiation is decreased. Here, GNP effects on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Monte Carlo ophthalmic brachytherapy dosimetry is usually studied by simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...
Energy Technology Data Exchange (ETDEWEB)
Wuerl, Matthias
2016-08-01
Matthias Wuerl presents two essential steps to implement offline PET monitoring of proton dose delivery at a clinical facility, namely the setting up of an accurate Monte Carlo model of the clinical beamline and the experimental validation of positron emitter production cross-sections. In the first part, the field size dependence of the dose output is described for scanned proton beams. Both the Monte Carlo and an analytical computational beam model were able to accurately predict the target dose, while the latter tended to overestimate the dose in normal tissue. In the second part, the author presents PET measurements of different phantom materials, which were activated by the proton beam. The results indicate that for an irradiation with a high number of protons for the sake of good statistics, dead time losses of the PET scanner may become important and lead to an underestimation of positron-emitter production yields.
International Nuclear Information System (INIS)
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
1998-02-01
A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using the CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input it is impossible to reproduce experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency has to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
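The stripping-plus-efficiency unfolding described above can be illustrated with a toy detector response. The peak fractions, efficiency curve, and flat Compton continuum below are illustrative assumptions, not the GEANT-derived response of the paper:

```python
import numpy as np

# Toy detector response (assumption): for each incident energy bin j, a
# fraction fep[j] of detected events lands in the full-energy peak at bin j
# and the remainder spreads uniformly over the bins below j.
n = 50
fep = np.full(n, 0.4)                    # full-energy-peak fraction (assumed)
eff = np.linspace(0.9, 0.3, n)           # full-absorption efficiency (assumed)

true_flux = np.zeros(n)
true_flux[[20, 35, 45]] = [1e4, 5e3, 2e3]   # made-up gamma lines

# Build the "measured" spectrum from the toy response.
measured = np.zeros(n)
for j in range(n):
    events = true_flux[j] * eff[j] / fep[j]       # total detected events
    measured[j] += events * fep[j]                # full-absorption peak
    if j > 0:
        measured[:j] += events * (1 - fep[j]) / j  # partial-absorption continuum

# Strip the continuum from high to low energy, then divide the remaining
# full-absorption counts by the efficiency curve to recover the photon flux.
stripped = measured.copy()
flux = np.zeros(n)
for j in range(n - 1, -1, -1):
    peak = stripped[j]
    events = peak / fep[j]
    if j > 0:
        stripped[:j] -= events * (1 - fep[j]) / j
    flux[j] = peak / eff[j]
```

Because the toy response is exactly known, the stripped-and-unfolded `flux` reproduces `true_flux`; with a real detector the response matrix comes from the Monte Carlo simulations described in the abstract.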
PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations
International Nuclear Information System (INIS)
The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)
DEFF Research Database (Denmark)
Klösgen, Beate; Bruun, Sara; Hansen, Søren;
The presence of a depletion layer of water along extended hydrophobic interfaces, and a possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400 Å) of polystyrene as cushions between a crystalline carrier and biomimetic membranes deposited thereupon and exposed to bulk water. While monitoring the sequential build-up of the sandwiched composite structure by continuous neutron reflectivity experiments, the formation of an unexpected additional layer was detected (1). Located at the polystyrene surface in between the polymer cushion and bulk water, the layer was attributed to water of reduced density and was called "depletion layer". Impurities or preparative artefacts were excluded as its origin. Later on, the formation of nanobubbles from this vapour-like water phase was initiated by tipping the...
Seipt, D; Marklund, M; Bulanov, S S
2016-01-01
The interaction of charged particles and photons with intense electromagnetic fields gives rise to multi-photon Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multi-photon nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude $a_0 \\sim 10^3$ and electron bunches with charges of the order of nC.
Energy Technology Data Exchange (ETDEWEB)
Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Chen Wulin [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Zheng Jun [Center of Modern Experimental Technology, Anhui University, Hefei 230039 (China); Yu Huiyou [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Wang Jianguo, E-mail: jgw@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China)
2012-08-15
A series of lanthanide coordination polymers have been obtained through the hydrothermal reaction of N-(sulfoethyl) iminodiacetic acid (H₃SIDA) and Ln(NO₃)₃ (Ln = La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond lengths and dimensionality of compounds 1-4, which reveals that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, the combination of magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd³⁺ ions for anti-anti and syn-anti carboxylate bridges are -1.0 × 10⁻³ and -5.0 × 10⁻³ cm⁻¹, respectively, which reveals a weak antiferromagnetic interaction in 4. - Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl) iminodiacetic acid were obtained under hydrothermal conditions and reveal weak antiferromagnetic coupling between two Gd³⁺ ions by quantum Monte Carlo studies. Highlights: • Four lanthanide coordination polymers of the H₃SIDA ligand were obtained. • Lanthanide ions play an important role in their structural diversity. • Magnetic measurements show that compound 4 features antiferromagnetic properties. • Quantum Monte Carlo studies reveal the coupling parameters of two Gd³⁺ ions.
Capital expenditure and depletion
International Nuclear Information System (INIS)
In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)
International Nuclear Information System (INIS)
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases due to anatomical geometrical complexity (air cavities and density heterogeneities), making dose calculation very challenging, and prostate cases due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a passing rate of the gamma index for the target of more than 99%, the fifth having a gamma index passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MCS code for a group of dosimetrically challenging patient cases
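A gamma-index comparison of the kind used above (2% / 2 mm) can be sketched in one dimension. The profiles are synthetic and the implementation is a simple brute-force global-gamma search, a sketch rather than a clinical tool:

```python
import numpy as np

def gamma_pass_rate(ref, evald, x, dose_tol=0.02, dist_tol_mm=2.0):
    """1-D global gamma index; dose_tol is a fraction of the max reference dose."""
    dmax = ref.max()
    pass_count = 0
    for xi, de in zip(x, evald):
        dd = (de - ref) / (dose_tol * dmax)   # dose-difference term vs all ref points
        dr = (xi - x) / dist_tol_mm           # distance-to-agreement term
        gamma = np.sqrt(dd**2 + dr**2).min()  # minimum over the reference profile
        pass_count += gamma <= 1.0
    return pass_count / len(x)

x = np.linspace(0, 100, 201)                  # positions in mm
ref = np.exp(-((x - 50) / 15.0) ** 2)         # toy reference dose profile
shifted = np.exp(-((x - 50.4) / 15.0) ** 2)   # 0.4 mm shift, well within 2 mm
print(gamma_pass_rate(ref, shifted, x))
```

A small spatial shift passes everywhere, while a uniform 10% dose error fails near the peak, which is exactly the behaviour the combined dose/distance criterion is designed to capture.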
Fully Depleted Charge-Coupled Devices
International Nuclear Information System (INIS)
We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications
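The link between high-resistivity substrates and thick depletion regions follows from the one-sided abrupt-junction depletion width, w = sqrt(2·ε·V / (q·N)). A sketch with an assumed donor density typical of high-resistivity float-zone silicon (the numbers are illustrative, not the authors' device parameters):

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_SI = 11.7         # relative permittivity of silicon
Q = 1.602e-19         # elementary charge, C

def depletion_width_m(v_bias, n_dopant_cm3):
    """One-sided abrupt-junction depletion width (built-in potential neglected)."""
    n_m3 = n_dopant_cm3 * 1e6
    return math.sqrt(2 * EPS_SI * EPS0 * v_bias / (Q * n_m3))

# Assumed values: ~1e12 cm^-3 donor density for high-resistivity float-zone
# silicon and a 40 V substrate bias, both hypothetical.
w = depletion_width_m(40.0, 1e12)
print(f"depletion width = {w * 1e6:.0f} um")
```

With these assumed values the width comes out on the order of a few hundred micrometres, consistent with the 200-300 um thick depletion regions quoted in the abstract; a low-resistivity wafer (N larger by orders of magnitude) would deplete only a few micrometres.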
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
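The idea of sampling model parameters directly from their posterior, conditioned on the events observed so far, can be sketched for the simpler Omori-Utsu aftershock rate (one ingredient of the full ETAS model). The data are synthetic, the priors flat, and the sampler a plain random-walk Metropolis; all of these are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def omori_rate(t, K, c, p):
    return K / (t + c) ** p

# Synthetic aftershock times on [0, T] via thinning (assumed "true" parameters).
T, K0, c0, p0 = 30.0, 50.0, 0.1, 1.2
lam_max = omori_rate(0.0, K0, c0, p0)
t, times = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > T:
        break
    if rng.random() < omori_rate(t, K0, c0, p0) / lam_max:
        times.append(t)
times = np.array(times)

def log_lik(theta):
    """Inhomogeneous-Poisson log-likelihood; restricted to p > 1 for simplicity."""
    K, c, p = theta
    if K <= 0 or c <= 0 or p <= 1.0:
        return -np.inf
    integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return np.sum(np.log(omori_rate(times, K, c, p))) - integral

# Random-walk Metropolis over (K, c, p) with flat priors.
theta = np.array([30.0, 0.5, 1.5])
ll = log_lik(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [2.0, 0.05, 0.05])
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
        theta, ll = prop, ll_prop
    samples.append(theta)
K_post = np.mean([s[0] for s in samples[5_000:]])  # posterior mean after burn-in
```

The posterior samples can then drive the forecasting step described above: each retained parameter vector generates one stochastic realization of the next day's seismicity, so parameter uncertainty propagates into the forecast.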
Directory of Open Access Journals (Sweden)
J. D. Rösevall
2007-01-01
The objective of this study is to demonstrate how polar ozone depletion can be mapped and quantified by assimilating ozone data from satellites into the wind-driven transport model DIAMOND (Dynamical Isentropic Assimilation Model for OdiN Data). By assimilating a large set of satellite data into a transport model, ozone fields can be built up that are less noisy than the individual satellite ozone profiles. The transported fields can subsequently be compared to later sets of incoming satellite data so that the rates and geographical distribution of ozone depletion can be determined. By tracing the amounts of solar irradiation received by different air parcels in a transport model it is furthermore possible to study the photolytic reactions that destroy ozone. In this study, destruction of ozone that took place in the Antarctic winter of 2003 and in the Arctic winter of 2002/2003 have been examined by assimilating ozone data from the ENVISAT/MIPAS and Odin/SMR satellite instruments. Large scale depletion of ozone was observed in the Antarctic polar vortex of 2003 when sunlight returned after the polar night. By mid October ENVISAT/MIPAS data indicate vortex ozone depletion in the ranges 80–100% and 70–90% on the 425 and 475 K potential temperature levels respectively while the Odin/SMR data indicates depletion in the ranges 70–90% and 50–70%. The discrepancy between the two instruments has been attributed to systematic errors in the Odin/SMR data. Assimilated fields of ENVISAT/MIPAS data indicate ozone depletion in the range 10–20% on the 475 K potential temperature level, (~19 km altitude), in the central regions of the 2002/2003 Arctic polar vortex. Assimilated fields of Odin/SMR data on the other hand indicate ozone depletion in the range 20–30%.
Monte Carlo Application ToolKit (MCATK)
International Nuclear Information System (INIS)
Highlights: • Component-based Monte Carlo radiation transport parallel software library. • Designed to build specialized software applications. • Provides new functionality for existing general purpose Monte Carlo transport codes. • Time-independent and time-dependent algorithms with population control. • Algorithm verification and validation results are provided. - Abstract: The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes. We describe MCATK and its capabilities, along with some verification and validation results
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Ozone-depleting Substances (ODS)
U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global...
Fast Monte Carlo based joint iterative reconstruction for simultaneous 99mTc/123I SPECT imaging.
Ouyang, Jinsong; El Fakhri, Georges; Moore, Stephen C
2007-08-01
Simultaneous 99mTc/123I SPECT allows the assessment of two physiological functions under identical conditions. The separation of these radionuclides is difficult, however, because their energies are close. Most energy-window-based scatter correction methods do not fully model either physical factors or patient-specific activity and attenuation distributions. We have developed a fast Monte Carlo (MC) simulation-based multiple-radionuclide and multiple-energy joint ordered-subset expectation-maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. MC-JOSEM simultaneously corrects for scatter and cross talk as well as detector response within the reconstruction algorithm. We evaluated MC-JOSEM for simultaneous brain perfusion (99mTc-HMPAO) and neurotransmission (123I-altropane) SPECT. MC simulations of 99mTc and 123I studies were generated separately and then combined to mimic simultaneous 99mTc/123I SPECT. All the details of photon transport through the brain, the collimator, and detector, including Compton and coherent scatter, septal penetration, and backscatter from components behind the crystal, were modeled. We reconstructed images from simultaneous dual-radionuclide projections in three ways. First, we reconstructed the photopeak-energy-window projections (with an asymmetric energy window for 123I) using the standard ordered-subsets expectation-maximization algorithm (NSC-OSEM). Second, we used standard OSEM to reconstruct 99mTc photopeak-energy-window projections, while including an estimate of scatter from a Compton-scatter energy window (SC-OSEM). Third, we jointly reconstructed both 99mTc and 123I images using projection data associated with two photopeak energy windows and an intermediate-energy window using MC-JOSEM. For 15 iterations of reconstruction, the bias and standard deviation of 99mTc activity estimates in several brain structures were calculated for NSC-OSEM, SC-OSEM, and MC-JOSEM, using images reconstructed from primary
International Nuclear Information System (INIS)
Full text of publication follows. Aim: the aim of this study was to perform a critical comparison of 3 dosimetric approaches in Molecular Radiotherapy: phantom-based dosimetry, Dose Voxel Kernels (DVKs) and full Monte Carlo (MC) dosimetry. The objective was to establish the impact of the absorbed dose calculation algorithm on the final result. Materials and Methods: we calculated the absorbed dose to various organs in six healthy volunteers injected with a novel 18F-labelled PET radiotracer from GE Healthcare. Each patient underwent from 8 to 10 whole body 3D PET/CT scans. The first 8 scans were acquired dynamically in order to limit co-registration issues. Eleven organs were segmented on the first PET/CT scan by a physician. We analysed this dataset using the OLINDA/EXM software taking into account actual patient organ masses; the commercial software Stratos by Philips implementing a DVK approach; and performing full MC dosimetry on the basis of a custom application developed with Gate. The calculations performed with these three techniques were based on the cumulated activities calculated at the voxel level by Stratos. Results: all the absorbed doses calculated with Gate were higher than those calculated with OLINDA. The average ratio between the Gate and OLINDA absorbed doses was 1.38±0.34 σ (from 0.93 to 2.23) considering all patients. The discrepancy was particularly high for the thyroid, with an average Gate/OLINDA ratio of 1.97±0.83 σ for the 6 patients. The lower absorbed doses in OLINDA may be explained by considering the inter-organ distances in the MIRD phantom. These are in general overestimated, leading to lower absorbed doses in target organs. The differences between Stratos and Gate were the highest. The average ratio between Gate and Stratos absorbed doses was 2.51±1.21 σ (from 1.09 to 6.06). The highest differences were found for the lungs (average ratio 4.76±2.13 σ), as expected, since Stratos considers unit density
Burnup calculation capability in the PSG2 / Serpent Monte Carlo reactor physics code
International Nuclear Information System (INIS)
The PSG continuous-energy Monte Carlo reactor physics code has been developed at VTT Technical Research Centre of Finland since 2004. The code is mainly intended for group constant generation for coupled reactor simulator calculations and other tasks traditionally handled using deterministic lattice physics codes. The name was recently changed from the acronym PSG to 'Serpent', and the capabilities have been extended by implementing built-in burnup calculation routines that enable the code to be used for fuel cycle studies and the modelling of irradiated fuels. This paper presents the methodology used for burnup calculation. Serpent has two fundamentally different options for solving the Bateman depletion equations: 1) the Transmutation Trajectory Analysis method (TTA), based on the analytical solution of linearized depletion chains and 2) the Chebyshev Rational Approximation Method (CRAM), an advanced matrix exponential solution developed at VTT. The first validation results are compared to deterministic CASMO-4E calculations. It is also shown that the overall running time in Monte Carlo burnup calculation can be significantly reduced using specialized calculation techniques, and that the continuous-energy Monte Carlo method is becoming a viable alternative to deterministic assembly burnup codes. (authors)
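The two depletion options can be contrasted on a minimal decay chain: the analytical solution of a linearized chain (the building block of TTA) against a matrix-exponential evaluation (for which CRAM substitutes a rational approximation; a plain Taylor scaling-and-squaring routine is used here as a stand-in). Decay constants and the initial inventory are made-up values:

```python
import numpy as np

# Two-step decay chain A -> B -> C with decay constants la, lb (1/s, made up).
la, lb = 1e-4, 5e-5
M = np.array([[-la, 0.0, 0.0],
              [ la, -lb, 0.0],
              [0.0,  lb, 0.0]])
n0 = np.array([1.0e24, 0.0, 0.0])   # initial atoms of A only
t = 3600.0 * 24                      # one day

# TTA-style analytical (Bateman) solution of the linearized chain.
nA = n0[0] * np.exp(-la * t)
nB = n0[0] * la / (lb - la) * (np.exp(-la * t) - np.exp(-lb * t))
nC = n0[0] - nA - nB

# Matrix-exponential route: n(t) = expm(M*t) @ n0. CRAM replaces the Taylor
# series below with a Chebyshev rational approximation; scaling-and-squaring
# with a truncated series is shown here purely as a stand-in.
def expm(A, squarings=20, terms=20):
    B = A / 2.0**squarings
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

n_t = expm(M * t) @ n0
```

For a short linear chain both routes agree to floating-point accuracy; the practical difference appears for full burnup matrices with thousands of nuclides and decay constants spanning many orders of magnitude, where CRAM remains accurate and the naive series does not.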
Energy Technology Data Exchange (ETDEWEB)
Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang [Universitaetsklinikum Koeln (Germany). Klinik und Poliklinik fuer Nuklearmedizin; Pietrzyk, Uwe [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Neurowissenschaften und Medizin (INM-4); Wuppertal Univ. (Germany). Fachbereich C - Physik
2011-07-01
The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user-dependent bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this
Edimo, Paul; Kwato Njock, M.G.; Vynckier, Stefaan
2013-01-01
The purpose of the present study is to perform a clinical validation of a new commercial Monte Carlo (MC) based treatment planning system (TPS) for electron beams, i.e. the XiO 4.60 electron MC (XiO eMC). Firstly, MC models for electron beams (4, 8, 12 and 18MeV) have been simulated using BEAMnrc user code and validated by measurements in a homogeneous water phantom. Secondly, these BEAMnrc models have been set as the reference tool to evaluate the ability of XiO eMC to reproduce dose perturb...
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-04-01
A comparative study of linear and nonlinear mixed-pixel models, in which pixels of remote sensing satellite images are composed of plural ground cover materials mixed together, is conducted for remote sensing satellite image analysis. The mixed-pixel models are based on Cierniewski's ground surface reflectance model. The comparative study is conducted using Monte Carlo Ray Tracing (MCRT) simulations. Through the simulation study, the difference between linear and nonlinear mixed-pixel models is clarified, and the simulation model is validated.
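The linear mixed-pixel model referred to above treats an observed pixel spectrum as the area-weighted sum of endmember spectra; a sketch with made-up 4-band endmember reflectances:

```python
import numpy as np

# Linear mixed-pixel model: the observed pixel reflectance is the
# area-fraction-weighted sum of endmember reflectances. The 4-band
# endmember spectra below are made-up illustrative values.
endmembers = np.array([[0.05, 0.08, 0.04, 0.40],   # "vegetation"
                       [0.20, 0.25, 0.30, 0.35],   # "soil"
                       [0.02, 0.03, 0.04, 0.02]])  # "water"
fractions = np.array([0.6, 0.3, 0.1])              # cover fractions, sum to 1
pixel = fractions @ endmembers                     # forward (mixing) model

# Inverting the model: least-squares unmixing recovers the cover fractions.
est, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
```

Nonlinear models add cross terms for multiple scattering between materials, which is the difference the MCRT simulations in the abstract are designed to quantify.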
International Nuclear Information System (INIS)
We have established a dynamic scenario quantification method based on the coupling of a Continuous Markov Monte Carlo (CMMC) method and a plant thermal-hydraulics analysis code for level 2 PSA (probabilistic safety assessment). This paper presents a meta-analysis coupling model to obtain the dynamic scenario quantification at a reasonable computational cost. The PLOHS (protected loss of heat sink) accident of a liquid sodium fast reactor is selected as the level 2 PSA scenario in the model. Furthermore, we also discuss methods of categorizing the quantification results, because the coupling method differs widely from the existing event tree method. (author)
Monte Carlo simulations of a scanning system based on a panoramic X-ray tube with a conical anode
Andrii Sofiienko; Chad Jarvis; Ådne Voll
2014-01-01
Monte Carlo simulations were used to study photon production in a panoramic X-ray tube with a conical tungsten target to determine the optimal characteristics of the target shape and electron beam configuration. Several simulations were performed for accelerating potentials equal to 250 kV, 300 kV, and 500 kV with electron beams of various radii and anode sizes. The angular distribution of the photon intensity was analysed by numerical calculations for an assembly composed of an X-ray tube an...
Elenius, M. T.; Miller, E. L.; Abriola, L. M.
2014-12-01
Chlorinated solvents tend to persist for long periods in heterogeneous porous media, in part due to sustained sources of contaminant sequestered in lower permeability zones and sorbed to the soil matrix. Sharp contrasts in soil properties have been modeled successfully using Markov Chain / Transition Probability (MC/TP) methods. This statistical approach provides a means of generating permeability fields that are consistent with prior knowledge concerning the frequency and relative positioning of different strata. Assessing source zone mass depletion in a suite of such geological realizations by direct numerical simulation may carry a prohibitive computational burden. One alternative approach is the application of a multi-rate mass transfer (MRMT) method, an extension of the dual-domain concept that was first developed in the soil science literature for sorption modeling. In MRMT, rather than discretizing immobile regions, such as clay layers, the concentration in these regions is treated by explicit state variables, and the transport between mobile and immobile regions is modeled by first-order exchange terms. However, in the implementation of this approach, fine-scale simulations on subdomains are often necessary to develop appropriate parameters. Such simulations are tedious, especially when attempting to account for uncertainty in the geological description. In this work, the link between characteristics of MC/TP and transfer parameters in the MRMT is evaluated by regression based on fine-scale simulations, using the simulator MODFLOW/MT3DMS. Upscaled simulation results are obtained with the same simulator, linked to an MRMT module. The results facilitate efficient assessment of reactive transport in domains with sharp contrasts in soil properties and limit the need for fine-scale numerical simulations.
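The first-order mobile-immobile exchange term that MRMT introduces can be sketched in a zero-dimensional batch setting (advection omitted); the porosities and rate coefficient are assumed values for illustration:

```python
# Zero-dimensional mobile/immobile exchange: the first-order transfer term
# that MRMT adds for each immobile domain, with advection omitted.
phi_m, phi_im = 0.3, 0.1      # mobile / immobile porosities (assumed)
alpha = 0.05                  # first-order exchange coefficient, 1/day (assumed)

c_m, c_im = 1.0, 0.0          # initial concentrations (mobile loaded, immobile clean)
dt, n_steps = 0.1, 2000       # explicit Euler over 200 days
for _ in range(n_steps):
    flux = alpha * (c_m - c_im)        # exchange flux per unit bulk volume
    c_m -= dt * flux / phi_m           # mobile domain loses mass...
    c_im += dt * flux / phi_im         # ...immobile domain gains the same mass

# Both domains relax toward the mass-weighted equilibrium concentration.
c_eq = (phi_m * 1.0 + phi_im * 0.0) / (phi_m + phi_im)   # = 0.75 here
```

Mass (phi_m·c_m + phi_im·c_im) is conserved exactly by this update, and the single rate alpha is precisely the kind of parameter the fine-scale regressions in the abstract aim to tie back to MC/TP field characteristics (multiple immobile domains each get their own alpha).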
International Nuclear Information System (INIS)
The amount of stratospheric ozone and the reduction of the ozone layer vary with season and latitude. At present, total and vertical ozone is monitored over all of Austria. The mean monthly ozone levels between 1994 and 2000 are presented. Data on stratospheric ozone and UV-B radiation are published daily on the home page http://www.lebesministerium.at. The use of ozone-depleting substances such as chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) is reported, along with the national measures taken to reduce their use. Figs. 2, Tables 2. (nevyjel)
EPRI depletion benchmark calculations using PARAGON
International Nuclear Information System (INIS)
Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases spanning burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining depletion uncertainty
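The quantity being benchmarked, a reactivity decrement with a 5% (Kopp memo) uncertainty band, is easy to state concretely. A minimal worked example (the k-infinity values below are hypothetical, not EPRI benchmark data):

```python
def reactivity(k):
    """Reactivity (delta-k/k) corresponding to a multiplication factor k."""
    return (k - 1.0) / k

def depletion_decrement_pcm(k_fresh, k_depleted):
    """Reactivity decrement between fresh and depleted fuel, in pcm."""
    return (reactivity(k_fresh) - reactivity(k_depleted)) * 1e5

# hypothetical lattice k-infinity values at 0 and 40 GWd/MTU burnup
dec = depletion_decrement_pcm(1.15, 0.95)
# a Kopp-memo-style 5% uncertainty band applied to that decrement
band = 0.05 * abs(dec)
```

A depletion validation then asks whether the code's predicted decrement falls within such a band of the measured one.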
International Nuclear Information System (INIS)
Magnetic fluid is a popular new functional material, a new kind of stable colloid. The optical properties of magnetic fluids have been studied widely by experiment. The theoretical research on the microstructure and transmission characteristics of magnetic fluids, however, is still ongoing. In this paper the Monte Carlo method was adopted to construct a model of the magnetic fluid and to simulate the microstructure and transmission of magnetic fluid films. The experimental setup used to record the microstructure of the magnetic fluid was specially designed with a water-cooling system, which ensured that the environmental temperature did not vary when the magnetic field was applied. Theoretical simulations and experiments on magnetic fluid films with thicknesses of 8 μm and 10 μm under external magnetic fields of different strengths were carried out. The experimental results indicated that the proposed method is feasible and well suited to studying the optical properties of magnetic fluids. - Highlights: ► The Monte Carlo method was adopted to simulate the microstructure of magnetic fluid films. ► A specially designed water-cooling system was used in the experimental setup. ► Results indicated the method is well suited to studying the optical properties of magnetic fluids
State of the art of Monte Carlo techniques for reliable activated waste evaluations
International Nuclear Information System (INIS)
This paper presents the calculation scheme used in many studies to assess the activity inventories of French shutdown reactors (including Pressurized Water Reactors, Heavy Water Reactors, Sodium-Cooled Fast Reactors, and Natural Uranium Gas-Cooled (UNGG) reactors). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods, and depletion-calculation coupling on 3D meshes. All these techniques offer efficient and reliable evaluations of large-scale models with a high level of detail, reducing the risk of underestimation or excess conservatism. (authors)
Microscopic to macroscopic depletion model development for FORMOSA-P
International Nuclear Information System (INIS)
Microscopic depletion has been gaining popularity for employment in reactor core nodal calculations, mainly owing to its superiority in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced the design effort needed to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done both to improve fidelity and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort than macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation, because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here combines microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns, from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. This approach maintains microscopic-depletion-level accuracy without substantial additional computing resources
Power distributions in fresh and depleted LEU and HEU cores of the MITR reactor.
Energy Technology Data Exchange (ETDEWEB)
Wilson, E.H.; Horelik, N.E.; Dunn, F.E.; Newton, T.H., Jr.; Hu, L.; Stevens, J.G. (Nuclear Engineering Division); (2MIT Nuclear Reactor Laboratory and Nuclear Science and Engineering Department)
2012-04-04
The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Toward this goal, core geometry and power distributions are presented. Distributions of power are calculated for LEU cores depleted with MCODE using an MCNP5 Monte Carlo model. The MCNP5 HEU and LEU MITR models were previously compared to experimental benchmark data for the MITR-II. This same model was used with a finer spatial depletion in order to generate power distributions for the LEU cores. The objective of this work is to generate and characterize a series of fresh and depleted core peak power distributions, and provide a thermal hydraulic evaluation of the geometry which should be considered for subsequent thermal hydraulic safety analyses.
Development of burnup calculation function in reactor Monte Carlo code RMC
International Nuclear Information System (INIS)
This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code developed by the Reactor Engineering Analysis Laboratory (REAL) at Tsinghua University, China. Unlike most existing MC depletion codes, which explicitly couple the depletion module, RMC incorporates ORIGEN 2.1 in an implicit way. Different burn-step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy with large burnup step sizes. RMC employs a spectrum-based method of tallying one-group cross sections, which can considerably save computational time with negligible accuracy loss. According to the validation results of benchmarks and examples, the burnup function of RMC performs well in both accuracy and efficiency. (authors)
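The predictor-corrector idea can be illustrated on a single nuclide whose one-group flux is re-evaluated at the end of the step. The constant-power flux model below is a hypothetical stand-in for the transport solve, not RMC's actual scheme:

```python
import math

def one_group_flux(n):
    """Hypothetical flux feedback: power held constant, so the flux
    rises as the absorbing nuclide depletes (phi ~ 1/n)."""
    return 1.0 / n

def depletion_step(n0, sigma, dt):
    """One predictor-corrector burn step: deplete with beginning-of-step
    rates, re-evaluate the flux at end of step, then redo the step with
    the averaged reaction rate."""
    r0 = sigma * one_group_flux(n0)        # predictor rate (BOS flux)
    n_pred = n0 * math.exp(-r0 * dt)
    r1 = sigma * one_group_flux(n_pred)    # corrector rate (EOS flux)
    return n_pred, n0 * math.exp(-0.5 * (r0 + r1) * dt)

# constant power makes the exact solution linear: n(t) = n0 - sigma*t
n_pred, n_corr = depletion_step(1.0, 0.1, 1.0)   # exact answer: 0.9
```

With these numbers the predictor alone gives about 0.9048, while the corrected value lands within about 1e-4 of the exact 0.9, which is why the correction tolerates larger step sizes.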
Development and validation of burnup function in reactor Monte Carlo RMC
International Nuclear Information System (INIS)
This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code developed by the Reactor Engineering Analysis Laboratory (REAL) at Tsinghua University, China. Unlike most existing MC depletion codes, which explicitly couple the depletion module, RMC incorporates ORIGEN 2.1 in an implicit way. Different burn-step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy with large step sizes. RMC employs a spectrum-based method of tallying one-group cross sections, which can considerably save computational time with negligible accuracy loss. According to the validation results of benchmarks and examples, the burnup function of RMC performs well in both accuracy and efficiency. (author)
Monte Carlo application tool-kit (MCATK)
International Nuclear Information System (INIS)
The Monte Carlo Application tool-kit (MCATK) is a C++ component-based software library designed to build specialized applications and to provide new functionality for existing general-purpose Monte Carlo radiation transport codes such as MCNP. We describe MCATK and its capabilities, along with some verification and validation results. (authors)
Depleted uranium waste assay at AWE
International Nuclear Information System (INIS)
The Atomic Weapons Establishment (AWE) at Aldermaston has recently conducted a Best Practical Means (BPM) study, for solid Depleted Uranium (DU) waste assay, in order to satisfy key stakeholders that AWE is applying best practice. This study has identified portable passive High Resolution Gamma Spectrometry (HRGS), combined with an analytical software package called Spectral Nondestructive Assay Platform (SNAP), as the preferred option with the best balance between performance and costs. HRGS/SNAP performance has been assessed by monitoring 200 l DU waste drum standards and also heterogeneous, high density drums from DU firing trials. Accuracy was usually within 30 % with Detection Limits (DL) in the region of 10 g DU for short count times. Monte Carlo N-Particle (MCNP) calculations have been used to confirm the shape of the calibration curve generated by the SNAP software procured from Eberline Services Inc. (authors)
A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
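The sequential-updating Metropolis scheme with exact (uncut) pair energies can be sketched serially; the per-particle energy sum in the inner loop is exactly what the GPU version spreads across SIMD threads. A minimal sketch for like charges (so the system is stable without hard cores; all parameters are hypothetical, not the paper's electrolyte model):

```python
import math, random

def particle_energy(pos, i, lb=1.0):
    """Exact Coulomb energy (in kT) of charge i with all others,
    Bjerrum length lb; no cutoff and no approximation in the energy."""
    xi, yi = pos[i]
    e = 0.0
    for j, (xj, yj) in enumerate(pos):
        if j != i:
            e += lb / math.hypot(xi - xj, yi - yj)
    return e

def metropolis_sweep(pos, box, step, rng):
    """One sequential-updating Metropolis sweep over like charges in a
    box: propose a move, accept with min(1, exp(-dE)), else restore."""
    accepted = 0
    for i in range(len(pos)):
        old = pos[i]
        e_old = particle_energy(pos, i)
        pos[i] = tuple(min(box, max(0.0, c + rng.uniform(-step, step)))
                       for c in old)
        e_new = particle_energy(pos, i)
        if e_new <= e_old or rng.random() < math.exp(e_old - e_new):
            accepted += 1
        else:
            pos[i] = old
    return accepted

rng = random.Random(0)
# 16 like charges started on a grid inside a 10 x 10 box
pos = [(1.0 + 2.0 * (k % 4), 1.0 + 2.0 * (k // 4)) for k in range(16)]
acc = sum(metropolis_sweep(pos, 10.0, 0.5, rng) for _ in range(50))
```

Each trial move costs O(N) energy evaluations, which is why evaluating those pair terms in parallel on a GPU yields the large speedups reported.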
Experimental and Monte Carlo analysis of near-breakdown phenomena in GaAs-based heterostructure FETs
Sleiman, A.; Di Carlo, A.; Tocca, L.; Lugli, P.; Zandler, G.; Meneghesso, G.; Zanoni, E.; Canali, C.; Cetronio, A.; Lanzieri, M.; Peroni, M.
2001-05-01
We present experimental and theoretical data related to the impact ionization in the near-breakdown regime of AlGaAs/InGaAs pseudomorphic high-electron-mobility transistors (P-HEMTs) and AlGaAs/GaAs heterostructure field effect transistors (HFETs). Room-temperature electroluminescence spectra of P-HEMT exhibit a maximum around the InGaAs energy gap (1.3 eV). Two peaks have been observed for the HFETs. These experiments are interpreted by means of Monte Carlo simulations. The most important differences between the two devices are found in the hole distribution. While the holes in the P-HEMT are confined in the gate-source channel region and responsible for the breakdown, they are absent from the active part of the HFET. This absence reduces the feedback and improves the on-state breakdown voltage characteristics.
Institute of Scientific and Technical Information of China (English)
Zhang Zhi-Dong; Chang Chun-Rui; Ma Dong-Lai
2009-01-01
Hybrid nematic films have been studied by Monte Carlo simulations using a lattice spin model, in which the pair potential is spatially anisotropic and depends on the elastic constants of the liquid crystal. We confirm in the thin hybrid nematic film the existence of a biaxial nonbent structure and the structural transition from the biaxial to the bent-director structure, which is similar to the result obtained using the Lebwohl-Lasher model. However, the step-like director profile, characteristic of the biaxial structure, is spatially asymmetric in the film because the pair potential leads to K1 ≠ K3. We estimate the upper cell thickness at which the biaxial structure can still be found to be 69 spin layers.
Meshkian, Mohsen
2016-02-01
Neutron radiography is rapidly extending as one of the methods for non-destructive screening of materials. There are various parameters to be studied for optimising imaging screens and image quality for different fast-neutron radiography systems. Herein, a Geant4 Monte Carlo simulation is employed to evaluate the response of a fast-neutron radiography system using a 252Cf neutron source. The neutron radiography system is comprised of a moderator as the neutron-to-proton converter with suspended silver-activated zinc sulphide (ZnS(Ag)) as the phosphor material. The neutron-induced protons deposit energy in the phosphor which consequently emits scintillation light. Further, radiographs are obtained by simulating the overall radiography system including source and sample. Two different standard samples are used to evaluate the quality of the radiographs.
International Nuclear Information System (INIS)
The aim of this study was to validate the computed tomography dose index (CTDI) and organ doses evaluated by Monte Carlo simulations through comparisons with doses evaluated by in-phantom dosimetry. Organ doses were measured with radio-photoluminescence glass dosemeter (RGD) set at various organ positions within adult and 1-y-old anthropomorphic phantoms. For the dose simulations, the X-ray spectrum and bow-tie filter shape of a CT scanner were estimated and 3D voxelised data of the CTDI and anthropomorphic phantoms from the acquired CT images were derived. Organ dose simulations and measurements were performed with chest and abdomen-pelvis CT examination scan parameters. Relative differences between the simulated and measured doses were within 5 % for the volume CTDI and 13 % for organ doses for organs within the scan range in adult and paediatric CT examinations. The simulation results were considered to be in good agreement with the measured doses. (authors)
Influencing factors of dose equivalence for X and γ rays with different energy based on Monte Carlo
International Nuclear Information System (INIS)
Background: The accuracy of dosimeter measurements of X and γ rays needs to be improved. Purpose: The aim is to study the correction terms for the equivalence between low-energy X-rays and natural radioactive sources. Methods: An X-ray machine was used for dose instrument calibration instead of the standard sources. The factors influencing the equivalence between low-energy X-rays and high-energy X or γ rays were simulated using the Monte Carlo (MCNP) software. Results: The influences of distance, space scattering, and detector response on dose equivalence were obtained. The simulation results were also analyzed. Conclusion: The method can be used in dose-equivalent correction between low-energy X-rays and high-energy X or γ rays, which is significant for the widespread use of X rays. (authors)
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and the high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use as a rem counter.
Fujii, K; Nomura, K; Muramatsu, Y; Takahashi, K; Obara, S; Akahane, K; Satake, M
2015-07-01
The aim of this study was to validate the computed tomography dose index (CTDI) and organ doses evaluated by Monte Carlo simulations through comparisons with doses evaluated by in-phantom dosimetry. Organ doses were measured with radio-photoluminescence glass dosemeter (RGD) set at various organ positions within adult and 1-y-old anthropomorphic phantoms. For the dose simulations, the X-ray spectrum and bow-tie filter shape of a CT scanner were estimated and 3D voxelised data of the CTDI and anthropomorphic phantoms from the acquired CT images were derived. Organ dose simulations and measurements were performed with chest and abdomen-pelvis CT examination scan parameters. Relative differences between the simulated and measured doses were within 5 % for the volume CTDI and 13 % for organ doses for organs within the scan range in adult and paediatric CT examinations. The simulation results were considered to be in good agreement with the measured doses. PMID:25848103
Depleted uranium: Metabolic disruptor?
International Nuclear Information System (INIS)
The presence of uranium in the environment can lead to long-term contamination of the food chain and of water intended for human consumption and thus raises many questions about the scientific and societal consequences of this exposure on population health. Although the biological effects of chronic low-level exposure are poorly understood, results of various recent studies show that contamination by depleted uranium (DU) induces subtle but significant biological effects at the molecular level in organs including the brain, liver, kidneys and testicles. For the first time, it has been demonstrated that DU induces effects on several metabolic pathways, including those metabolizing vitamin D, cholesterol, steroid hormones, acetylcholine and xenobiotics. This evidence strongly suggests that DU might well interfere with many metabolic pathways. It might thus contribute, together with other man-made substances in the environment, to increased health risks in some regions. (authors)
International Nuclear Information System (INIS)
Depleted Uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. DU potentially has both chemical and radiological toxicity, with two important target organs being the kidneys and the lungs. DU is made into a metal and, owing to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Following the use of DU in recent years, reports have appeared in the press on health hazards alleged to be due to DU. In this paper the properties, applications, and potential environmental and health effects of DU are briefly reviewed
Zagrebin, M. A.; Sokolovskiy, V. V.; Buchelnikov, V. D.
2016-09-01
Structural, magnetic and electronic properties of stoichiometric Co2YZ Heusler alloys (Y = Cr, Fe, Mn and Z = Al, Si, Ge) have been studied by means of ab initio calculations and Monte Carlo simulations. The investigations were performed as a function of the level of approximation in DFT (FP and ASA modes, as well as GGA and GGA + U schemes) and of external pressure. It is shown that in the case of the GGA scheme half-metallic behavior is clearly observed for compounds containing Cr and Mn transition metals, while Co2FeZ alloys demonstrate pseudo-half-metallic behavior. It is demonstrated that applied pressure and accounting for Coulomb repulsion (U) lead to the stabilization of the half-metallic nature of Co2YZ alloys. The strong ferromagnetic inter-sublattice (Co–Y) interactions, together with the intra-sublattice (Co–Co and Y–Y) interactions, explain the high values of the Curie temperature obtained by Monte Carlo simulations using the Heisenberg model. It is observed that a decrease in the number of valence electrons of the Y atoms (i.e. substitution of Fe by Mn and Cr) leads to a weakening of the exchange interactions and a reduction of the Curie temperature. Moreover, in the case of the FP mode the Curie temperatures were found to be in good agreement with available experimental and theoretical data, the latter obtained by applying the empirical relation between the Curie temperature and the total magnetic moment.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
A worldwide view of groundwater depletion
van Beek, L. P.; Wada, Y.; van Kempen, C.; Reckman, J. W.; Vasak, S.; Bierkens, M. F.
2010-12-01
During the last decades, global water demand has increased two-fold due to increasing population, expanding irrigated area and economic development. Globally such demand can be met by surface water availability (i.e., water in rivers, lakes and reservoirs) but regional variations are large and the absence of sufficient rainfall and run-off increasingly encourages the use of groundwater resources, particularly in the (semi-)arid regions of the world. Excessive abstraction for irrigation frequently leads to overexploitation, i.e. if groundwater abstraction exceeds the natural groundwater recharge over extensive areas and prolonged times, persistent groundwater depletion may occur. Observations and various regional studies have revealed that groundwater depletion is a substantial issue in regions such as Northwest India, Northeast Pakistan, Central USA, Northeast China and Iran. Here we provide a global overview of groundwater depletion from the year 1960 to 2000 at a spatial resolution of 0.5 degree by assessing groundwater recharge with the global hydrological model PCR-GLOBWB and subtracting estimates of groundwater abstraction obtained from IGRAC-GGIS database. PCR-GLOBWB was forced by the CRU climate dataset downscaled to daily time steps using ERA40 re-analysis data. PCR-GLOBWB simulates daily global groundwater recharge (0.5 degree) while considering sub-grid variability of each grid cell (e.g., short and tall vegetation, different soil types, fraction of saturated soil). Country statistics of groundwater abstraction were downscaled to 0.5 degree by using water demand (i.e., agriculture, industry and domestic) as a proxy. To limit problems related to increased capture of discharge and increased recharge due to groundwater pumping, we restricted our analysis to sub-humid to arid areas. The uncertainty in the resulting estimates was assessed by a Monte Carlo analysis of 100 realizations of groundwater recharge and 100 realizations of groundwater abstraction
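The depletion estimate itself is simple bookkeeping, abstraction in excess of recharge, while the Monte Carlo step propagates uncertainty by pairing realizations of each input. A minimal sketch with hypothetical numbers (not PCR-GLOBWB or IGRAC-GGIS values):

```python
import random

def cell_depletion(recharge, abstraction):
    """Groundwater depletion in a region: abstraction in excess of
    natural recharge; no depletion where recharge keeps up."""
    return max(abstraction - recharge, 0.0)

def depletion_ensemble(recharge_draws, abstraction_draws):
    """Monte Carlo uncertainty: pair every recharge realization with
    every abstraction realization and collect depletion estimates."""
    return [cell_depletion(r, a)
            for r in recharge_draws for a in abstraction_draws]

rng = random.Random(42)
# 100 hypothetical realizations of each input (km3/yr for one region)
recharge = [rng.gauss(10.0, 2.0) for _ in range(100)]
abstraction = [rng.gauss(13.0, 3.0) for _ in range(100)]
estimates = depletion_ensemble(recharge, abstraction)
mean_depletion = sum(estimates) / len(estimates)
```

The spread of the resulting ensemble, rather than the single mean, is what quantifies the uncertainty in the depletion estimate.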
ALEPH 1.1.2: A Monte Carlo burn-up code
International Nuclear Information System (INIS)
In the last 40 years, Monte Carlo particle transport has been applied to a multitude of problems, such as shielding, medical applications, and various types of nuclear reactors. The success of the Monte Carlo method is mainly based on its broad application area; on its ability to handle nuclear data not only in its most basic but also in its most complex form (namely continuous-energy cross sections, complex interaction laws, detailed energy-angle correlations, multi-particle physics, etc.); and on its capability of modeling geometries from simple 1D to complex 3D. There is also a current trend in Monte Carlo applications toward highly detailed 3D calculations (for instance voxel-based medical applications), something for which deterministic codes are neither well suited nor performant in terms of computational time and precision. Apart from all these fields where Monte Carlo particle transport has been applied successfully, there is at least one area where Monte Carlo has had limited success, namely burn-up and activation calculations, where the time parameter is added to the problem. The concept of Monte Carlo burn-up consists of coupling a Monte Carlo code to a burn-up module to improve the accuracy of depletion and activation calculations. For every time step the Monte Carlo code provides reaction rates to the burn-up module, which returns new material compositions to the Monte Carlo code. So if static Monte Carlo particle transport is slow, then Monte Carlo particle transport with burn-up will be even slower, as calculations have to be performed for every time step in the problem. The computational cost of accurate Monte Carlo calculations is, however, continuously reduced due to improvements made in the basic Monte Carlo algorithms, the development of variance reduction techniques, and developments in computer architecture (more powerful processors and the so-called brute-force approach through parallel processors and networked systems)
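The coupling loop described in the abstract, transport supplies reaction rates, the burn-up module returns new compositions, once per time step, can be sketched with toy stand-ins for both solvers (the flux model and decay law below are hypothetical, not ALEPH's physics):

```python
import math

def transport_solve(n_fuel):
    """Stand-in for the Monte Carlo transport step: returns a one-group
    absorption rate per atom (hypothetical flux model in which the flux
    rises as the absorber burns out)."""
    return 0.05 / (0.5 + n_fuel)

def burnup_module(n_fuel, rate, dt):
    """Stand-in for the depletion solver: exact exponential burn of the
    nuclide over the step at the frozen reaction rate."""
    return n_fuel * math.exp(-rate * dt)

def coupled_burnup(n0, dt, steps):
    """The Monte Carlo burn-up loop: transport gives reaction rates, the
    burn-up module returns a new composition, repeated every time step."""
    n, history = n0, [n0]
    for _ in range(steps):
        rate = transport_solve(n)   # rates from the 'transport' solve
        n = burnup_module(n, rate, dt)
        history.append(n)
    return history

history = coupled_burnup(1.0, 1.0, 10)
```

In the real code each `transport_solve` is a full Monte Carlo run, which is why the total cost scales with the number of time steps.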
International Nuclear Information System (INIS)
The course ''Monte Carlo Techniques'' gives a general overview of how to build, from a given theory, a method that allows the outcome of an experiment to be compared with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-number-based numerical methods, are introduced in this course. Examples of some of the current theories in High Energy Physics describing e+e- annihilation processes (QED, Electro-Weak, QCD) are also briefly introduced. A second step in the employment of this method concerns the detector. The interactions that a particle can undergo on its way through the detector, as well as the response of the different materials composing the detector, are covered in this course. An example of a detector of the LEP era, in which these techniques are applied, closes the course. (orig.)
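Of the listed concepts, generation of random variables is the most mechanical; a minimal inverse-transform example (the exponential target is an illustrative choice, not taken from the course):

```python
import math, random

def sample_exponential(lam, u):
    """Inverse-transform generation of a random variable: push a
    uniform u in [0,1) through the inverse CDF, here of Exp(lam)."""
    return -math.log(1.0 - u) / lam

rng = random.Random(1)
draws = [sample_exponential(2.0, rng.random()) for _ in range(200000)]
mean = sum(draws) / len(draws)   # should approach 1/lam = 0.5
```

The same recipe, uniform draws mapped through an inverse CDF or an accept-reject step, underlies both event generation and detector-response simulation.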
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
International Nuclear Information System (INIS)
Human anatomical models have been indispensable to radiation protection dosimetry using Monte Carlo calculations. Existing MIRD-based mathematical models are easy to compute and standardize, but they are simplified and crude compared to human anatomy. This article describes the development of an image-based whole-body model, called VIP-Man, using transversal color photographic images obtained from the National Library of Medicine's Visible Human Project for Monte Carlo organ dose calculations involving photons, electrons, neutrons, and protons. As the first of a series of papers on dose calculations based on VIP-Man, this article provides detailed information about how to construct an image-based model, as well as how to adopt it into well-tested Monte Carlo codes, EGS4, MCNP4B, and MCNPX
International Nuclear Information System (INIS)
In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes MVP and GMVP have been developed at JAERI. MVP is based on the continuous energy model and GMVP is on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector super-computers. Both codes have sufficient functions for production use by adopting accurate physics model, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, (5) continuous-energy calculation at arbitrary temperatures, (6) estimation of real variances in eigenvalue problems, (7) point detector and surface crossing estimators, (8) statistical geometry model, (9) function of reactor noise analysis (simulation of the Feynman-α experiment), (10) arbitrary shaped lattice boundary, (11) periodic boundary condition, (12) parallelization with standard libraries (MPI, PVM), (13) supporting many platforms, etc. This report describes the physical model, geometry description method used in the codes, new functions and how to use them. (author)
The Toxicity of Depleted Uranium
Wayne Briner
2010-01-01
Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions of a Monte Carlo based model. Shower parameters are determined by maximising a likelihood function over the shower fit parameters, using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower than second-moment camera image analysis. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
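The core of such an analysis is maximising a pixel-amplitude likelihood over the shower parameters. The sketch below is a minimal, hypothetical 1-D stand-in (Gaussian template, Gaussian pixel noise, coarse grid maximisation) rather than the actual telescope model; all names and numbers are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical 1-D template: predicted pixel amplitude given shower centre and amplitude.
def predict(x_pix, centre, amp):
    return [amp * math.exp(-0.5 * (x - centre) ** 2) for x in x_pix]

# Gaussian log-likelihood of the measured amplitudes given the template prediction.
def log_likelihood(x_pix, measured, centre, amp, sigma=0.1):
    mu = predict(x_pix, centre, amp)
    return -0.5 * sum(((m - p) / sigma) ** 2 for m, p in zip(measured, mu))

# Simulated camera: 61 pixels, true parameters (0.4, 2.0), small measurement noise.
x_pix = [i * 0.1 - 3.0 for i in range(61)]
measured = [p + random.gauss(0, 0.05) for p in predict(x_pix, 0.4, 2.0)]

# Coarse grid maximisation of the likelihood over the two shower parameters
# (a real analysis would use a numerical non-linear optimiser instead).
best_centre, best_amp = max(
    ((c / 100.0, a / 10.0) for c in range(-100, 101) for a in range(10, 40)),
    key=lambda p: log_likelihood(x_pix, measured, p[0], p[1]),
)
```

The recovered parameters land close to the truth because the likelihood surface is sharply peaked when the noise is small relative to the signal.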
International Nuclear Information System (INIS)
The neutron flux at low energy (En ≤ 15 MeV) resulting from the radioactivity of the rock in the underground cavern of the India-based Neutrino Observatory is estimated using Geant4-based Monte Carlo simulations. The neutron production rate due to the spontaneous fission of 235,238U and 232Th and to (α, n) interactions in the rock is determined employing the actual rock composition. It is shown that the total flux is equivalent to that of a finite-size cylindrical rock element (D = L = 140 cm). The energy-integrated neutron flux thus obtained at the center of the underground tunnel is 2.76 (0.47) × 10−6 n cm−2 s−1. The estimated neutron flux is of the same order (∼10−6 n cm−2 s−1) as measured in other underground laboratories
Dokania, N; Mathimalar, S; Garai, A; Nanal, V; Pillay, R G; Bhushan, K G
2015-01-01
The neutron flux at low energy ($E_n\\leq15$ MeV) resulting from the radioactivity of the rock in the underground cavern of the India-based Neutrino Observatory is estimated using Geant4-based Monte Carlo simulations. The neutron production rate due to the spontaneous fission of U, Th and ($\\alpha, n$) interactions in the rock is determined employing the actual rock composition. It has been demonstrated that the total flux is equivalent to a finite size cylindrical rock ($D=L=140$ cm) element. The energy integrated neutron flux thus obtained at the center of the underground tunnel is 2.76 (0.47) $\\times 10^{-6}\\rm~n ~cm^{-2}~s^{-1}$. The estimated neutron flux is of the same order ($\\sim10^{-6}\\rm~n ~cm^{-2}~s^{-1}$)~as measured in other underground laboratories.
Depleted zinc: Properties, application, production
International Nuclear Information System (INIS)
The addition of ZnO, depleted in the Zn-64 isotope, to the water of boiling water nuclear reactors lessens the accumulation of Co-60 on the reactor interior surfaces, reduces radioactive wastes and increases the reactor service-life because of the inhibitory action of zinc on inter-granular stress corrosion cracking. To the same effect depleted zinc in the form of acetate dihydrate is used in pressurized water reactors. Gas centrifuge isotope separation method is applied for production of depleted zinc on the industrial scale. More than 20 years of depleted zinc application history demonstrates its benefits for reduction of NPP personnel radiation exposure and combating construction materials corrosion.
Common misconceptions in Monte Carlo particle transport
Energy Technology Data Exchange (ETDEWEB)
Booth, Thomas E., E-mail: teb@lanl.gov [LANL, XCP-7, MS F663, Los Alamos, NM 87545 (United States)
2012-07-15
Monte Carlo particle transport is often introduced primarily as a method to solve linear integral equations such as the Boltzmann transport equation. This paper discusses some common misconceptions about Monte Carlo methods that are often associated with an equation-based focus. Many of the misconceptions apply directly to standard Monte Carlo codes such as MCNP, and some are worth noting so that one does not unnecessarily restrict future methods. Highlights: • Adjoint variety and use from a Monte Carlo perspective. • Misconceptions and preconceived notions about statistical weight. • Reasons that an adjoint-based weight window sometimes works well or does not. • Pulse height/probability of initiation tallies and 'the' transport equation. • Highlights unnecessary preconceived notions about Monte Carlo transport.
A Novel Depletion-Mode MOS Gated Emitter Shorted Thyristor
Institute of Scientific and Technical Information of China (English)
张鹤鸣; 戴显英; 张义门; 马晓华; 林大松
2000-01-01
A novel MOS-gated thyristor, the depletion-mode MOS gated emitter shorted thyristor (DMST), and two of its structures are proposed. In the DMST, the channel of the depletion-mode MOS inherently shorts the thyristor's emitter-base junction. The operation of the device is controlled by the interruption and recovery of the depletion-mode MOS p-channel. Its favorable properties have been demonstrated by 2-D numerical simulations and by tests on fabricated chips.
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value over all energies of the simple test cases was 0.21. For depth dose curves in asymmetric beams, gamma results similar to those for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement with measured values, with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV 10 × 10 cm2 field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm2) showed their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
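The 1D gamma procedure used above combines a dose-difference criterion with a distance-to-agreement criterion into a single pass/fail index per reference point. A minimal sketch, with hypothetical dose curves and a global 2%/2 mm criterion (not the study's actual data):

```python
def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1-D gamma index per reference point (global dose-difference criterion).

    dd  : dose-difference criterion as a fraction of the maximum reference dose
    dta : distance-to-agreement criterion in the same units as the positions (mm)
    """
    d_crit = dd * max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        # Minimise the combined distance/dose metric over all evaluated points.
        g2 = min(
            ((ep - rp) / dta) ** 2 + ((ed - rd) / d_crit) ** 2
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(g2 ** 0.5)
    return gammas

# Hypothetical linear depth-dose curve, positions in mm, doses in percent.
pos = [float(i) for i in range(11)]
dose = [100.0 - i for i in pos]

g_same = gamma_1d(pos, dose, pos, dose)                       # identical curves
g_shift = gamma_1d(pos, dose, pos, [d - 1.0 for d in dose])   # 1% dose offset
```

Identical curves give gamma = 0 everywhere; a uniform 1% offset against a 2% criterion gives gamma = 0.5, i.e. still passing (gamma ≤ 1).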
Monte Carlo Code System Development for Liquid Metal Reactor
Energy Technology Data Exchange (ETDEWEB)
Kim, Chang Hyo; Shim, Hyung Jin; Han, Beom Seok; Park, Ho Jin; Park, Dong Gyu [Seoul National University, Seoul (Korea, Republic of)
2007-03-15
We have implemented the composition cell class and the use cell in MCCARD for hierarchical input processing. For the input of the KALIMER-600 core, which consists of 336 assemblies, the geometric data of 91,056 pin cells are required. Using hierarchical input processing, it was observed that the system geometries are handled correctly with the geometric data of only 611 cells in total: 2 cells for fuel rods, 2 cells for guide holes, 271 translation cells for rods, and 336 translation cells for assemblies. We have developed Monte Carlo decay-chain models, based on the decay-chain model of the REBUS code, for liquid metal reactor analysis. Using the developed decay-chain models, depletion analysis calculations were performed for the homogeneous and heterogeneous models of KALIMER-600. The k-effective from the depletion analysis agrees well with that of the REBUS code, and the developed decay-chain models show more efficient performance in time and memory than the existing decay-chain model. A chi-square criterion has been developed to diagnose temperature convergence in the MC T/H feedback calculations. From the results of its application to the KALIMER pin and fuel assembly problems, it is observed that the new criterion works well. We have applied a high-efficiency variance reduction technique based on splitting and Russian roulette to estimate the PPPF of the KALIMER core at BOC. The PPPF of the KALIMER core at BOC is 1.235 (±0.008). The developed technique is four times faster than the existing calculation.
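Splitting and Russian roulette are the two halves of the weight-window game mentioned above: low-weight particles are killed probabilistically (with the survivors' weight raised), and high-weight particles are split into several lower-weight copies. Both games leave the expected total weight unchanged, which is what makes them unbiased. A minimal, generic sketch (not MCCARD's implementation; the survival weight and split factor are illustrative):

```python
import random

random.seed(2)

def roulette(weight, survival=1.0):
    """Russian roulette: survive with probability w/survival at the survival weight,
    otherwise terminate with zero weight. Expected weight is unchanged."""
    if random.random() < weight / survival:
        return survival
    return 0.0

def split(weight, n=4):
    """Split one particle into n copies, each carrying weight/n.
    Total weight is exactly conserved."""
    return [weight / n] * n

# Unbiasedness check for roulette: average surviving weight equals the input weight.
trials = 200_000
mean_after_roulette = sum(roulette(0.25) for _ in range(trials)) / trials
```

Roulette reduces the population of unimportant low-weight histories (saving time at the cost of a little variance), while splitting invests extra histories in important regions (reducing variance at the cost of time); weight windows apply both based on an importance map.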
International Nuclear Information System (INIS)
This paper discusses the simulation of contemporary computed tomography (CT) scanners using Monte Carlo calculation methods to derive normalized organ doses, which enable hospital physicists to estimate typical organ and effective doses for CT examinations. The hardware used in a small PC-cluster at the Health Protection Agency (HPA) for these calculations is described. Investigations concerning optimization of software, including the radiation transport codes MCNP5 and MCNPX, and the Intel and PGI FORTRAN compilers, are presented in relation to results and calculation speed. Differences in approach for modelling the X-ray source are described and their influences are analysed. Comparisons with previously published calculations at HPA from the early 1990s proved satisfactory for the purposes of quality assurance and are presented in terms of organ dose ratios for whole body exposure and differences in organ location. Influences on normalized effective dose are discussed in relation to choice of cross section library, CT scanner technology (contemporary multi-slice versus single-slice), definition of effective dose (1990 and 2007 versions) and anthropomorphic phantom (mathematical and voxel). The results illustrate the practical need for the updated scanner-specific dose coefficients presently being calculated at HPA, in order to facilitate improved dosimetry for contemporary CT practice. (authors)
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs), even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
Antolínez, Alfonso; Rapisarda, David
2016-07-01
Fission chambers have become one of the main devices for the measurement of neutron fluxes in nuclear facilities; including fission reactors, future fusion ones, spallation sources, etc. The main goal of a fission chamber is to estimate the neutron flux inside the facility, as well as instantaneous changes in the irradiation conditions. A Monte Carlo Fission Chamber Designer (MCFCD) has been developed in order to assist engineers in the complete design cycle of the fission chambers. So far MCFCD focuses on the most important neutron reactions taking place in a thermal nuclear reactor. A theoretical model describing the most important outcomes in fission chambers design has been developed, including the expected electrical signals (current intensity and drop in potential) and, current-polarization voltage characteristics (sensitivity and saturation plateau); the saturation plateau is the zone of the saturation curve where the output current is proportional to fission rate; fission chambers work in this region. Data provided by MCFCD are in good agreement with measurements available.
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-02-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. This promotes changes in the global distribution of the cloud radiative effects and might thus have implications for model estimates of the indirect radiative effect of aerosols.
A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)
Hansson, Marie; Isaksson, Mats
2007-04-01
X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.
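Interaction forcing, used above to speed up the simulations, replaces the analogue free-path distribution with one truncated to the region of interest and compensates with a weight factor, so the estimate stays unbiased. A minimal generic sketch (not the PENELOPE implementation; the attenuation coefficient and slab thickness are illustrative):

```python
import math
import random

random.seed(5)

def forced_free_path(mu_t, d_max):
    """Sample a free path forced to lie within [0, d_max]; return (path, weight factor).

    The analogue pdf mu_t*exp(-mu_t*s) is replaced by the same pdf truncated to
    [0, d_max]; the particle weight is multiplied by the analogue probability of
    interacting within [0, d_max], keeping the estimator unbiased.
    """
    p_interact = 1.0 - math.exp(-mu_t * d_max)
    xi = random.random()
    s = -math.log(1.0 - xi * p_interact) / mu_t
    return s, p_interact

# Unbiasedness check: the mean weight of forced samples reproduces the analogue
# interaction probability in a slab of thickness d_max exactly.
mu, d = 0.5, 1.0
samples = [forced_free_path(mu, d) for _ in range(100_000)]
w_mean = sum(w for _, w in samples) / len(samples)
analogue_p = 1.0 - math.exp(-mu * d)
```

Every forced history now interacts inside the region of interest, so far fewer histories are wasted, at the price of carrying non-unit weights.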
A MONTE CARLO MARKOV CHAIN BASED INVESTIGATION OF BLACK HOLE SPIN IN THE ACTIVE GALAXY NGC 3783
International Nuclear Information System (INIS)
The analysis of relativistically broadened X-ray spectral features from the inner accretion disk provides a powerful tool for measuring the spin of supermassive black holes in active galactic nuclei (AGNs). However, AGN spectra are often complex, and careful analysis employing appropriate and self-consistent models is required if one is to obtain robust results. In this paper, we revisit the deep 2009 July Suzaku observation of the Seyfert galaxy NGC 3783 in order to study in a rigorous manner the robustness of the inferred black hole spin parameter. Using Monte Carlo Markov chain techniques, we identify a (partial) modeling degeneracy between the iron abundance of the disk and the black hole spin parameter. We show that the data for NGC 3783 strongly require both supersolar iron abundance (ZFe = 2-4 Z☉) and a rapidly spinning black hole (a > 0.89). We discuss various astrophysical considerations that can affect the measured abundance. We note that, while the abundance enhancement inferred in NGC 3783 is modest, the X-ray analysis of some other objects has found extreme iron abundances. We introduce the hypothesis that the radiative levitation of iron ions in the innermost regions of radiation-dominated AGN disks can enhance the photospheric abundance of iron. We show that radiative levitation is a plausible mechanism in the very inner regions of high accretion rate AGN disks.
International Nuclear Information System (INIS)
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. A cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation in the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of the cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta4, MapCHECK, and PTW Seven29 arrays, agreement between cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3-mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provided a basis for solutions. Conclusion: The proposed method offers a highly robust and streamlined patient-specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" is among the topics treated.
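Buffon's needle is the classic entry point to Monte Carlo: a needle of length l dropped on a floor ruled with parallel lines a distance d apart (l ≤ d) crosses a line with probability 2l/(πd), so counting crossings estimates π. A minimal sketch of that experiment (sample sizes illustrative):

```python
import math
import random

random.seed(4)

def buffon_pi(n, needle=1.0, spacing=1.0):
    """Estimate pi by dropping n needles on a floor of parallel lines (l <= d)."""
    hits = 0
    for _ in range(n):
        centre = random.uniform(0.0, spacing / 2)  # distance of centre to nearest line
        theta = random.uniform(0.0, math.pi / 2)   # needle angle against the lines
        if centre <= (needle / 2) * math.sin(theta):
            hits += 1
    # P(hit) = 2l / (pi * d)  =>  pi ~= 2 * l * n / (d * hits)
    return 2 * needle * n / (spacing * hits)

estimate = buffon_pi(1_000_000)
```

The estimate converges at the characteristic Monte Carlo rate of 1/sqrt(n), so one million drops yield only two to three correct digits.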
Depletable resources and the economy.
Heijman, W.J.M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state, ti
The Toxicity of Depleted Uranium
Directory of Open Access Journals (Sweden)
Wayne Briner
2010-01-01
Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.
Energy Technology Data Exchange (ETDEWEB)
Idiri, Z. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)]. E-mail: zmidiri@yahoo.fr; Mazrou, H. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Beddek, S. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Amokrane, A. [Faculte de Physique, Universite des Sciences et de la Technologie Houari-Boumediene (USTHB), Alger (Algeria); Azbouche, A. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)
2007-07-21
The present paper describes the optimization of sample dimensions for a 241Am-Be neutron-source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; Koju, Vijay [ORNL; John, Dwayne O [ORNL
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in a backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Energy Technology Data Exchange (ETDEWEB)
Borio Di Tigliole, A.; Bruni, J.; Panza, F. [Dept. of Nuclear and Theoretical Physics, Univ. of Pavia, 27100 Pavia (Italy); Italian National Inst. of Nuclear Physics INFN, Section of Pavia, Via A. Bassi, 6, 27100 Pavia (Italy); Alloni, D.; Cagnazzo, M.; Magrotti, G.; Manera, S.; Prata, M.; Salvini, A. [Italian National Inst. of Nuclear Physics INFN, Section of Pavia, Via A. Bassi, 6, 27100 Pavia (Italy); Applied Nuclear Energy Laboratory LENA, Univ. of Pavia, Via Aselli, 41, 27100 Pavia (Italy); Chiesa, D.; Clemenza, M.; Pattavina, L.; Previtali, E.; Sisti, M. [Physics Dept. G. Occhialini, Univ. of Milano Bicocca, 20126 Milano (Italy); Italian National Inst. of Nuclear Physics INFN, Section of Milano Bicocca, P.zza della Scienza, 3, 20126 Milano (Italy); Cammi, A. [Italian National Inst. of Nuclear Physics INFN, Section of Milano Bicocca, P.zza della Scienza, 3, 20126 Milano (Italy); Dept. of Energy Enrico Fermi Centre for Nuclear Studies CeSNEF, Polytechnic Univ. of Milan, Via U. Bassi, 34/3, 20100 Milano (Italy)
2012-07-01
Aim of this work was to perform a rough preliminary evaluation of the burn-up of the fuel of the TRIGA Mark II research reactor of the Applied Nuclear Energy Laboratory (LENA) of the Univ. of Pavia. In order to achieve this goal, a computation of the neutron flux density in each fuel element was performed by means of the Monte Carlo code MCNP (Version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) inside the fuel and, finally, to evaluate the burn-up and the uranium consumption in each fuel element. The evaluation showed fair agreement with the fuel burn-up computed from the total energy released during reactor operation. (authors)
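Given a flux and an effective cross section of the kind computed above, single-nuclide burn-up follows the one-group depletion equation dN/dt = -σ_a φ N, with the analytic solution N(t) = N0 exp(-σ_a φ t). The sketch below uses illustrative order-of-magnitude numbers (a thermal U-235 absorption cross section, a typical research-reactor flux, an arbitrary initial density), not values from the study:

```python
import math

# Illustrative one-group constants (assumptions, not data from the paper).
SIGMA_A = 680e-24   # U-235 thermal absorption cross section, cm^2 (approximate)
PHI = 1.0e13        # neutron flux, n / (cm^2 s), research-reactor order of magnitude
N0 = 1.0e21         # initial U-235 atom density, atoms / cm^3 (arbitrary)

def deplete(n0, sigma_a, phi, seconds):
    """Analytic solution of dN/dt = -sigma_a * phi * N over a constant-flux interval."""
    return n0 * math.exp(-sigma_a * phi * seconds)

one_year = 3.15e7  # seconds of full-power operation, illustrative
n_after = deplete(N0, SIGMA_A, PHI, one_year)
burnup_fraction = 1.0 - n_after / N0
```

A real depletion calculation couples many such equations (the Bateman chains) and updates the flux and spectrum as the composition changes; the exponential above is the building block for a single nuclide over one constant-flux step.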
DEFF Research Database (Denmark)
Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog;
Interpreting the landscape history in previously glaciated terrains may be difficult, however, due to unknown erosion rates and the presence of inherited nuclides. The potential use of cosmogenic nuclides in landscapes with a complex history of exposure and erosion is therefore often quite limited. In this study, we investigate the landscape history in eastern and western Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated and measured cosmogenic nuclide concentrations. It is a fundamental assumption of the model approach that the exposure history at the site/location can be divided into two distinct regimes: i) interglacial periods characterized by zero shielding due to overlying ice and a uniform interglacial erosion rate...
International Nuclear Information System (INIS)
A method for estimating forest parameters (species, tree shape, distance between canopies) by means of a Monte Carlo based radiative transfer model coupled with a forestry surface model is proposed. The model is verified through experiments with a miniature forest model, an array of relatively small trees. Two types of miniature trees, with ellipsoid-shaped and cone-shaped canopies, are examined in the experiments. The proposed model and the experimental results are found to agree, validating the proposed method. It is also found that tree shape and trunk-to-trunk distance can be estimated, and deciduous and coniferous trees distinguished, with the proposed model. Furthermore, the influences of multiple reflections between trees and of the interaction between trees and the underlying grass are clarified with the proposed method
International Nuclear Information System (INIS)
Aim of this work was to perform a rough preliminary evaluation of the burn-up of the fuel of the TRIGA Mark II research reactor of the Applied Nuclear Energy Laboratory (LENA) of the Univ. of Pavia. In order to achieve this goal, a computation of the neutron flux density in each fuel element was performed by means of the Monte Carlo code MCNP (Version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) inside the fuel and, finally, to evaluate the burn-up and the uranium consumption in each fuel element. The evaluation showed fair agreement with the fuel burn-up computed from the total energy released during reactor operation. (authors)
International Nuclear Information System (INIS)
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
Energy Technology Data Exchange (ETDEWEB)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
International Nuclear Information System (INIS)
The two alloy systems, Ni-Mo-based alloys and Al-Ti alloys, share some common features in that the ordered structures and the ordering processes in both can be described in terms of three types of superlattice tiles: squares and fat or lean rhombi. In Ni-Mo-based alloys these represent one-molecule clusters of three fcc superlattice structures, Ni4Mo (D1a), Ni3Mo (D022) and Ni2Mo (Pt2Mo-type), while in Al-Ti they represent two-dimensional Ti4Al, Ti3Al and Ti2Al derivatives on Ti-rich (002) planes of the off-stoichiometric TiAl (L10) phase. The evolution of short-range order (SRO), namely the {1½0} special-point SRO in the case of Ni-Mo and the incommensurate SRO in the case of the Al-rich TiAl intermetallic alloys, and the evolution of LRO phases from these have been followed using both conventional and high-resolution TEM. Corroborative evidence from Monte Carlo simulations is also presented in order to explain the observed experimental results. The occurrence of antiphase boundaries (APBs) and their energies, as we will see, plays an important role in these transformations. Predominantly two types of APBs occur in the Al5Ti3 phase in Al-rich TiAl; both are revealed by the Monte Carlo simulations and the experimental observations, and they play a synergistic role in the formation of Al5Ti3 antiphase domains
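The Monte Carlo simulations cited as corroboration are, generically, Metropolis simulations of a lattice Hamiltonian whose low-energy states are the ordered superlattices. The sketch below is a minimal illustration of that machinery, assuming a generic nearest-neighbour Ising-like model with two "species" encoded as ±1 spins, not the alloy-specific Ni-Mo/Al-Ti interaction model used in the study:

```python
import math
import random

def energy(lat, J=1.0):
    """Total nearest-neighbour energy of a periodic 2D lattice."""
    n = len(lat)
    e = 0.0
    for i in range(n):
        for j in range(n):
            e -= J * lat[i][j] * (lat[(i + 1) % n][j] + lat[i][(j + 1) % n])
    return e

def metropolis_sweep(lat, beta, J=1.0):
    """One Metropolis sweep: propose single-site flips and accept with
    probability min(1, exp(-beta * dE))."""
    n = len(lat)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        s = lat[i][j]
        nb = (lat[(i + 1) % n][j] + lat[(i - 1) % n][j]
              + lat[i][(j + 1) % n] + lat[i][(j - 1) % n])
        dE = 2.0 * J * s * nb  # energy cost of flipping site (i, j)
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            lat[i][j] = -s

random.seed(0)
n = 16
lat = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
e0 = energy(lat)
for _ in range(200):        # anneal below the ordering temperature
    metropolis_sweep(lat, beta=1.0)
e1 = energy(lat)            # ordered domains form, so the energy drops
```

Runs such as this, with realistic pair interactions on an fcc lattice, are what reveal domain formation and APB energetics.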
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
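The advantage of quasi-Monte Carlo point sets over pseudo-random sampling can be seen in a toy integration. A minimal sketch (a 2D Halton sequence in bases 2 and 3 estimating the integral of x*y over the unit square, whose exact value is 1/4; illustrative only, not from the proceedings):

```python
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    """Average of the integrand x*y over the sample points."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
# low-discrepancy (QMC) points vs. pseudo-random (MC) points
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
random.seed(1)
mc_pts = [(random.random(), random.random()) for _ in range(n)]

qmc_err = abs(estimate(qmc_pts) - 0.25)
mc_err = abs(estimate(mc_pts) - 0.25)
```

For smooth integrands the QMC error decays close to O((log n)^d / n), versus O(n^-1/2) for plain Monte Carlo.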
Energy Technology Data Exchange (ETDEWEB)
Geramifar, P. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Ay, M.R., E-mail: mohammadreza_ay@tums.ac.ir [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Shamsaie Zafarghandi, M. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Sarkar, S. [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Loudos, G. [Department of Medical Instruments Technology, Technological Educational Institute, Athens (Greece); Rahmim, A. [Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore (United States); Department of Electrical and Computer Engineering, School of Engineering, Johns Hopkins University, Baltimore (United States)
2011-06-11
The advent of fast scintillators with high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) in the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. For the present investigation, count rate performance and signal-to-noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps FWHM increments using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found in the good agreement between measured and simulated data when estimating axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing windows, demonstrating considerable enhancement in peak noise equivalent count rate (NECR) performance. The peak NECR increased by ∼50% when utilizing the coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR and a factor of 2.1 improvement in SNR is achievable.
International Nuclear Information System (INIS)
The advent of fast scintillators with high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) in the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. For the present investigation, count rate performance and signal-to-noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps FWHM increments using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found in the good agreement between measured and simulated data when estimating axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing windows, demonstrating considerable enhancement in peak noise equivalent count rate (NECR) performance. The peak NECR increased by ∼50% when utilizing the coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR and a factor of 2.1 improvement in SNR is achievable.
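The strong dependence of randoms on the window width follows from the standard estimate R = 2*tau*S1*S2 for the random-coincidence rate between two detectors with singles rates S1 and S2 and coincidence window tau. A back-of-the-envelope sketch with assumed singles rates (illustrative numbers, not the Discovery RX values):

```python
def randoms_rate(tau_s, singles1_hz, singles2_hz):
    """Expected random-coincidence rate R = 2 * tau * S1 * S2 for a
    coincidence window of width tau (seconds) and singles rates S1, S2."""
    return 2.0 * tau_s * singles1_hz * singles2_hz

s1 = s2 = 1.0e6                        # assumed singles rates [counts/s]
r_12ns = randoms_rate(12e-9, s1, s2)   # wide window
r_4ns = randoms_rate(4e-9, s1, s2)     # narrow window
reduction = r_12ns / r_4ns             # randoms scale linearly with tau
```

Narrowing the window from 12 ns to 4 ns thus cuts the random rate threefold, consistent with the NECR gains reported above.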
International Nuclear Information System (INIS)
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced that activate JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies for calculating the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed the more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross-section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions were assumed. The exercise was proposed and financed within the framework of the Fusion Technological Program of the JET machine. The aim is to supply designers with the most reliable tool and data for calculating the dose rate on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Energy Technology Data Exchange (ETDEWEB)
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced that activate JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies for calculating the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed the more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross-section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions were assumed. The exercise was proposed and financed within the framework of the Fusion Technological Program of the JET machine. The aim is to supply designers with the most reliable tool and data for calculating the dose rate on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
International Nuclear Information System (INIS)
The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the software CrystalBall® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
Energy Technology Data Exchange (ETDEWEB)
Bieda, Bogusław
2014-05-01
The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the software CrystalBall® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
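The simulation workflow described, normal distributions on each inventory flow and 10,000 trials summarized by frequency statistics, can be sketched in a few lines. The flows and the 5% relative standard deviation below are placeholders, not MSP data:

```python
import random
import statistics

random.seed(42)
TRIALS = 10_000

# hypothetical mean annual flows [tonnes] with an assumed 5% relative sd
flows = {"steel": 5.0e6, "coke": 1.1e6, "pig iron": 4.5e6, "sinter": 6.2e6}

totals = []
for _ in range(TRIALS):
    # draw each flow independently from its normal distribution
    sample = {k: random.gauss(mu, 0.05 * mu) for k, mu in flows.items()}
    totals.append(sum(sample.values()))

mean_total = statistics.mean(totals)
# 39 cut points at 2.5%, 5%, ..., 97.5% -> first/last give a 95% interval
cuts = statistics.quantiles(totals, n=40)
p2_5, p97_5 = cuts[0], cuts[-1]
```

Tools like CrystalBall automate exactly this loop and render the resulting frequency charts.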
Zhang, Shixun; Yamagia, Shinichi; Yunoki, Seiji
2013-08-01
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N³) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32³ Hamiltonian a single GPU shows performance equivalent to more than 30 CPU cores parallelized using MPI.
International Nuclear Information System (INIS)
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N³) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32³ Hamiltonian a single GPU shows performance equivalent to more than 30 CPU cores parallelized using MPI.
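The key idea behind such O(N) methods, replacing diagonalization by matrix-vector products combined with stochastic trace estimation over random vectors, can be illustrated with a minimal Hutchinson estimator for tr(H²). This is a sketch of the principle only, not the kernel-polynomial GFMC code:

```python
import random

def matvec(H, v):
    """Dense matrix-vector product (sparse matvecs give the O(N) scaling)."""
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(H))]

def hutchinson_trace_H2(H, samples=400, seed=0):
    """Estimate tr(H^2) using only matvecs: E[z^T H^2 z] = tr(H^2)
    for random z with independent +/-1 (Rademacher) entries."""
    rng = random.Random(seed)
    n = len(H)
    acc = 0.0
    for _ in range(samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Hz = matvec(H, z)
        acc += sum(x * x for x in Hz)  # z^T H^T H z = z^T H^2 z (H symmetric)
    return acc / samples

# small symmetric test Hamiltonian: tight-binding ring, hopping t = 1
n = 8
H = [[0.0] * n for _ in range(n)]
for i in range(n):
    H[i][(i + 1) % n] = H[(i + 1) % n][i] = 1.0

exact = sum(H[i][j] ** 2 for i in range(n) for j in range(n))  # tr(H^2)
est = hutchinson_trace_H2(H)
```

The kernel polynomial method extends this from tr(H²) to traces of arbitrary functions of H via Chebyshev recurrences, still using only matvecs, which is what maps so well onto GPUs.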
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in an only slightly higher error for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
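For the exponential function the fit reduces to a linear regression on the log of the discharge: if the CMD rate is C(t) = C0·exp(-kt), the initial mass is the integral M0 = C0/k. A minimal sketch on synthetic data (illustrative values, not the field data from the 11 sites):

```python
import math

def fit_exponential_initial_mass(times, cmd):
    """Least-squares fit of ln C = ln C0 - k t; returns (C0, k, M0)
    with M0 = C0 / k, the integral of C(t) from 0 to infinity."""
    n = len(times)
    y = [math.log(c) for c in cmd]
    tbar = sum(times) / n
    ybar = sum(y) / n
    # slope of ln C vs t is -k
    k = -sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
        / sum((t - tbar) ** 2 for t in times)
    c0 = math.exp(ybar + k * tbar)
    return c0, k, c0 / k

# synthetic SVE record: C0 = 50 kg/day, k = 0.01 1/day -> M0 = 5000 kg
times = list(range(0, 300, 10))
cmd = [50.0 * math.exp(-0.01 * t) for t in times]
c0, k, m0 = fit_exponential_initial_mass(times, cmd)
```

Applied to only the early-time third of a real record, the same fit yields the "limited data" estimates whose errors the study quantifies.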
Directory of Open Access Journals (Sweden)
P. Li
2013-01-01
The growth of global population and economy continually increases waste volumes and consequently creates challenges for handling and disposing of solid wastes. It becomes more challenging in mixed rural-urban areas (i.e., areas of mixed land use for rural and urban purposes) where both agricultural waste (e.g., manure) and municipal solid waste are generated. The efficiency and confidence of decisions in current management practices rely significantly on accurate information and subjective judgments, which are usually compromised by uncertainties. This study proposed a resource-oriented solid waste management system for mixed rural-urban areas. The system is featured by a novel Monte Carlo simulation-based fuzzy programming approach. The developed system was tested on a real-world case with consideration of various resource-oriented treatment technologies and the associated uncertainties. The modeling results indicated that the community-based bio-coal and household-based CH4 facilities were necessary and would become predominant in the waste management system. The 95% confidence intervals of waste loadings to the CH4 and bio-coal facilities were 387-450 and 178-215 tonne/day (mixed flow), respectively. In general, the developed system has high capability in supporting solid waste management for mixed rural-urban areas in a cost-efficient and sustainable manner under uncertainty.
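The confidence intervals quoted above come from propagating input uncertainty through repeated model evaluations. A stripped-down Monte Carlo illustration with placeholder distributions (not the fuzzy-programming model of the paper):

```python
import random

random.seed(7)
TRIALS = 5000

loads = []
for _ in range(TRIALS):
    # assumed uncertain waste generation rates [tonne/day]
    manure = random.gauss(300.0, 20.0)
    msw = random.gauss(120.0, 15.0)
    # assumed uncertain fraction of mixed flow routed to the CH4 facility
    ch4_share = random.uniform(0.5, 0.7)
    loads.append((manure + msw) * ch4_share)

# empirical 95% confidence interval on the CH4 facility loading
loads.sort()
lo = loads[int(0.025 * TRIALS)]
hi = loads[int(0.975 * TRIALS)]
```

The paper's contribution is coupling such sampling with fuzzy programming so the optimization itself, not just a fixed formula, is evaluated under each realization.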
International Nuclear Information System (INIS)
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed. (authors)
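The structural difference between the two algorithm styles can be sketched in plain Python: a history-based loop follows each particle to completion (GPU threads diverge because histories have different lengths), while an event-based loop applies one event type to a whole bank of particles at a time. This is a toy sketch of the control flow only, with an assumed absorption probability, not the CUDA eigenvalue code:

```python
import random

def simulate_event_based(n_particles, absorption_prob=0.3, seed=1):
    """Event-based style: keep one 'alive' bank; each pass applies the
    same event to every particle in the bank (no per-particle branching)."""
    rng = random.Random(seed)
    alive = list(range(n_particles))
    collisions = 0
    while alive:
        collisions += len(alive)                     # batched transport step
        alive = [p for p in alive                    # batched collision outcome
                 if rng.random() > absorption_prob]
    return collisions

def simulate_history_based(n_particles, absorption_prob=0.3, seed=1):
    """History-based style: follow each particle to completion in turn."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_particles):
        while True:
            collisions += 1
            if rng.random() <= absorption_prob:      # absorbed: history ends
                break
    return collisions

ev = simulate_event_based(10_000)
hi = simulate_history_based(10_000)
# both tally ~ n / p collisions; only the iteration structure differs
```

On a GPU the event-based bank maps to coherent warps, which is exactly the divergence reduction the abstract reports, while the memory traffic of rebuilding banks is the cost it identifies.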
Neural Adaptive Sequential Monte Carlo
Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E
2015-01-01
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...
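A bootstrap particle filter makes the role of the proposal concrete: below, the proposal is simply the transition prior, the ingredient that adaptive SMC methods such as this paper's replace with a learned one. A minimal sketch on a toy linear-Gaussian model (assumed dynamics and noise levels):

```python
import math
import random
import statistics

def particle_filter(obs, n_particles=2000, seed=3):
    """Bootstrap SMC for x_t = 0.9 x_{t-1} + N(0, 0.5), y_t = x_t + N(0, 0.5):
    propose from the transition prior, weight by the likelihood, resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propose from the prior dynamics (the 'bad proposal' baseline)
        particles = [0.9 * x + rng.gauss(0.0, 0.5) for x in particles]
        # importance-weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # multinomial resampling
        particles = rng.choices(particles, weights=probs, k=n_particles)
        means.append(statistics.fmean(particles))
    return means

# synthetic observations drawn from the same model
rng = random.Random(0)
x, obs = 0.0, []
for _ in range(30):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    obs.append(x + rng.gauss(0.0, 0.5))

est = particle_filter(obs)
err = statistics.fmean([abs(e - o) for e, o in zip(est, obs)])
```

When the proposal is far from the posterior, the weights degenerate; adapting the proposal, as this paper does, directly attacks that failure mode.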
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
Energy Technology Data Exchange (ETDEWEB)
Folkerts, M [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States); University of California, San Diego, La Jolla, CA (United States); Graves, Y [University of California, San Diego, La Jolla, CA (United States); Tian, Z; Gu, X; Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2014-06-01
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
International Nuclear Information System (INIS)
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
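The gamma test referred to above combines a dose-difference tolerance with a distance-to-agreement (DTA) criterion: a point passes when the minimum over reference points of sqrt((dist/DTA)² + (ΔD/ΔD_tol)²) is at most 1. A 1D sketch with a 3%/3 mm global criterion (illustrative, not the web app's GPU kernel):

```python
import math

def gamma_index(ref, evald, spacing_mm, dd_percent=3.0, dta_mm=3.0):
    """1D global gamma: for each evaluated point, search all reference
    points for the minimum combined dose/distance discrepancy."""
    dd_tol = dd_percent / 100.0 * max(ref)   # global normalization
    gammas = []
    for i, de in enumerate(evald):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing_mm
            g2 = (dist / dta_mm) ** 2 + ((de - dr) / dd_tol) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [100.0, 98.0, 95.0, 90.0, 80.0, 60.0, 30.0, 10.0]

# identical distributions agree perfectly: gamma = 0 everywhere
g_same = gamma_index(ref, ref[:], spacing_mm=2.0)
pass_rate = sum(g <= 1.0 for g in g_same) / len(g_same)

# a 6 Gy hot spot at the first point exceeds the 3%/3 mm criterion
g_fail = gamma_index(ref, [106.0] + ref[1:], spacing_mm=2.0)
```

Production tools evaluate this in 3D with interpolation, which is why the brute-force search is the part offloaded to the GPU.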
Depletable resources and the economy.
Heijman, W. J. M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation to resource depletion are discussed: steady state, time preference and efficiency. For the steady state, three variants are distinguished: the stationary state, the physical steady state and the state of steady growth. It is concluded that the so-call...
Enhancements in continuous-energy Monte Carlo capabilities for SCALE 6.2
International Nuclear Information System (INIS)
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity/uncertainty analysis, as well as improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2. (author)
Enhancements in Continuous-Energy Monte Carlo Capabilities for SCALE 6.2
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL; Petrie Jr, Lester M [ORNL; Peplow, Douglas E. [ORNL; Bekar, Kursat B [ORNL; Wiarda, Dorothea [ORNL; Celik, Cihangir [ORNL; Perfetti, Christopher M [ORNL; Dunn, Michael E [ORNL
2014-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, sensitivity and uncertainty analysis, and improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.
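The depletion and decay calculations mentioned above reduce, for a simple chain, to the Bateman equations. A two-nuclide sketch using the analytic solution for parent → daughter → stable (illustrative half-lives; not SCALE's solver or nuclear data):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for the chain A -> B -> (stable):
    returns (N1(t), N2(t)) for an initially pure parent inventory n1_0."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2

# assumed decay constants from illustrative half-lives [days]
lam1 = math.log(2) / 8.02    # parent, I-131-like half-life
lam2 = math.log(2) / 0.32    # shorter-lived daughter (hypothetical)

# evaluate after exactly one parent half-life: N1 drops to n1_0 / 2
n1, n2 = bateman_two(1.0e20, lam1, lam2, t=8.02)
```

General-purpose depletion solvers handle hundreds of coupled nuclides with transmutation source terms, typically via matrix-exponential methods rather than closed forms.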
Flampouri, Stella; Jiang, Steve B.; Sharp, Greg C.; Wolfgang, John; Patel, Abhijit A.; Choi, Noah C.
2006-06-01
The purpose of this study is to accurately estimate the difference between the planned and the delivered dose due to respiratory motion and free breathing helical CT artefacts for lung IMRT treatments, and to estimate the impact of this difference on clinical outcome. Six patients with representative tumour motion, size and position were selected for this retrospective study. For each patient, we had acquired both a free breathing helical CT and a ten-phase 4D-CT scan. A commercial treatment planning system was used to create four IMRT plans for each patient. The first two plans were based on the GTV as contoured on the free breathing helical CT set, with a GTV to PTV expansion of 1.5 cm and 2.0 cm, respectively. The third plan was based on the ITV, a composite volume formed by the union of the CTV volumes contoured on the free breathing helical CT, end-of-inhale (EOI) and end-of-exhale (EOE) 4D-CT. The fourth plan was based on the GTV contoured on the EOE 4D-CT. The prescribed dose was 60 Gy for all four plans. Fluence maps and beam setup parameters of the IMRT plans were used by the Monte Carlo dose calculation engine MCSIM for absolute dose calculation on both the free breathing CT and 4D-CT data. CT deformable registration between the breathing phases was performed to estimate the motion trajectory for both the tumour and healthy tissue. Then, a composite dose distribution over the whole breathing cycle was calculated as a final estimate of the delivered dose. EUD values were computed on the basis of the composite dose for all four plans. For the patient with the largest motion effect, the difference in the EUD of the CTV between the planned and the delivered doses was 33, 11, 1 and 0 Gy for the first, second, third and fourth plan, respectively. The number of breathing phases required for accurate dose prediction was also investigated. With the advent of 4D-CT, deformable registration and Monte Carlo simulations, it is feasible to perform an accurate calculation of the
Energy Technology Data Exchange (ETDEWEB)
Flampouri, Stella; Jiang, Steve B; Sharp, Greg C; Wolfgang, John; Patel, Abhijit A; Choi, Noah C [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)
2006-06-07
The purpose of this study is to accurately estimate the difference between the planned and the delivered dose due to respiratory motion and free breathing helical CT artefacts for lung IMRT treatments, and to estimate the impact of this difference on clinical outcome. Six patients with representative tumour motion, size and position were selected for this retrospective study. For each patient, we acquired both a free breathing helical CT and a ten-phase 4D-CT scan. A commercial treatment planning system was used to create four IMRT plans for each patient. The first two plans were based on the GTV as contoured on the free breathing helical CT set, with a GTV to PTV expansion of 1.5 cm and 2.0 cm, respectively. The third plan was based on the ITV, a composite volume formed by the union of the CTV volumes contoured on the free breathing helical CT, end-of-inhale (EOI) and end-of-exhale (EOE) 4D-CT. The fourth plan was based on the GTV contoured on the EOE 4D-CT. The prescribed dose was 60 Gy for all four plans. Fluence maps and beam setup parameters of the IMRT plans were used by the Monte Carlo dose calculation engine MCSIM for absolute dose calculation on both the free breathing CT and 4D-CT data. CT deformable registration between the breathing phases was performed to estimate the motion trajectory for both the tumour and healthy tissue. Then, a composite dose distribution over the whole breathing cycle was calculated as a final estimate of the delivered dose. EUD values were computed on the basis of the composite dose for all four plans. For the patient with the largest motion effect, the difference in the EUD of the CTV between the planned and the delivered doses was 33, 11, 1 and 0 Gy for the first, second, third and fourth plan, respectively. The number of breathing phases required for accurate dose prediction was also investigated. With the advent of 4D-CT, deformable registration and Monte Carlo simulations, it is feasible to perform an accurate calculation of the
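The EUD comparison described above is commonly computed as Niemierko's generalized equivalent uniform dose. A minimal sketch is given below; the exponent `a` and the dose and volume values are illustrative placeholders, not data from the study:

```python
# Generalized equivalent uniform dose (gEUD):
#   EUD = (sum_i v_i * D_i**a) ** (1/a)
# where v_i are fractional volumes, D_i per-region doses (Gy), and
# a is a tissue-specific parameter (negative for targets, so cold
# spots are penalized).

def gEUD(doses, volumes, a):
    """doses: per-region dose (Gy); volumes: fractional volumes summing to 1."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "volumes must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# A uniform 60 Gy distribution yields an EUD of 60 Gy for any a,
# while a cold spot lowers the EUD when a < 0:
print(gEUD([60.0, 60.0], [0.5, 0.5], a=-10))
print(gEUD([50.0, 70.0], [0.5, 0.5], a=-10))
```

This is how a planned-versus-delivered EUD difference such as the 33 Gy reported above would be obtained: evaluate `gEUD` on each composite dose distribution and subtract.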
Running Out Of and Into Oil. Analyzing Global Oil Depletion and Transition Through 2050
Energy Technology Data Exchange (ETDEWEB)
Greene, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hopson, Janet L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Jia [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2003-10-01
This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. Risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value.
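The "distribution of outcomes rather than a single value" approach can be sketched as follows. This is a toy illustration of the Monte Carlo risk-analysis idea, not the report's model: the resource base, demand, and growth-rate figures are placeholders.

```python
import random
import statistics

# Toy Monte Carlo risk analysis: sample uncertain inputs (ultimately
# recoverable resources, demand growth) and report a distribution of
# depletion-time outcomes instead of a single point forecast.
# All numbers are illustrative placeholders.

random.seed(0)

def years_to_depletion(urr_gb, annual_demand_gb, growth):
    """Years until cumulative production exhausts the resource base."""
    produced, demand, year = 0.0, annual_demand_gb, 0
    while produced < urr_gb and year < 200:
        produced += demand
        demand *= 1.0 + growth
        year += 1
    return year

samples = [
    years_to_depletion(
        urr_gb=random.triangular(2000, 4000, 3000),  # Gb, uncertain URR
        annual_demand_gb=30.0,                       # Gb/yr starting demand
        growth=random.uniform(0.0, 0.02),            # uncertain demand growth
    )
    for _ in range(10_000)
]

print(statistics.median(samples))           # central estimate
print(statistics.quantiles(samples, n=10))  # deciles of the outcome distribution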
Monte Carlo Radiative Transfer
Whitney, Barbara A
2011-01-01
I outline Monte Carlo Radiative Transfer (MCRT) methods for computing scattering, absorption and emission by dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.
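The core MCRT sampling steps can be illustrated with a standard textbook example: a photon random walk through a plane-parallel slab with isotropic scattering. This is a generic sketch of the technique, not code from the paper; the optical depth and albedo values are arbitrary.

```python
import math
import random

# Toy MCRT random walk in a plane-parallel slab of total optical depth
# tau_max: sample a free path as tau = -ln(xi), decide absorption vs
# scattering with the albedo, and re-emit isotropically. Photons exiting
# tau < 0 are reflected; tau > tau_max are transmitted.

random.seed(1)

def run_photon(tau_max, albedo):
    tau, mu = 0.0, 1.0                # optical-depth position, direction cosine
    while True:
        tau += mu * (-math.log(random.random()))  # sampled path length
        if tau < 0.0:
            return "reflected"
        if tau > tau_max:
            return "transmitted"
        if random.random() > albedo:
            return "absorbed"
        mu = 2.0 * random.random() - 1.0          # isotropic scattering

n = 20_000
fates = [run_photon(tau_max=1.0, albedo=0.9) for _ in range(n)]
print(fates.count("transmitted") / n)  # Monte Carlo estimate of escape fraction
```

Real MCRT codes extend this skeleton with frequency sampling, anisotropic phase functions, dust emission and polarization, but the sample-path/scatter/tally loop is the same.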
Energy Technology Data Exchange (ETDEWEB)
Craig Kruschwitz, Ming Wu, Ken Moy, Greg Rochau
2008-10-31
We present results of continued efforts to understand the performance of microchannel plate (MCP)-based, high-speed, gated, x-ray detectors. This work involves the continued improvement of a Monte Carlo simulation code describing MCP performance, coupled with experimental efforts to better characterize such detectors. Our goal is a quantitative description of MCP saturation behavior in both static and pulsed modes. We have developed a new model of charge buildup on the walls of the MCP channels and measured its effect on MCP gain. The results are compared to experimental data obtained with a short-pulse, high-intensity ultraviolet laser; these results clearly demonstrate MCP saturation behavior in both DC and pulsed modes. The simulations compare favorably to the experimental results. The dynamic range of the detectors in pulsed operation is of particular interest when fielding an MCP-based camera. By adjusting the laser flux we study the linear range of the camera. These results, too, are compared to our simulations.
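The saturation mechanism described above, gain suppression as extracted charge depletes the channel walls, can be caricatured with a very simple model. This is a hypothetical illustration only; the functional form and all parameters are assumptions, not the paper's charge-buildup model.

```python
# Simplified illustration of MCP gain saturation: the effective gain
# rolls off as the charge already extracted approaches the charge the
# channel wall can store, flattening the response at high flux.
# Parameters (g0, q_sat, input electron count) are illustrative.

def effective_gain(g0, q_extracted, q_sat):
    """Gain suppressed by previously extracted wall charge."""
    return g0 / (1.0 + q_extracted / q_sat)

def pulse_train(g0, q_sat, n_electrons_in, n_pulses):
    """Output charge per pulse for a train of identical input pulses."""
    out = []
    for _ in range(n_pulses):
        q_prev = out[-1] if out else 0.0
        out.append(n_electrons_in * effective_gain(g0, q_prev, q_sat))
    return out

train = pulse_train(g0=1e4, q_sat=1e6, n_electrons_in=100, n_pulses=5)
print(train)  # later pulses see a gain suppressed by earlier charge extraction
```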
Monte Carlo-based Bragg-Gray tissue-to-air mass-collision-stopping power ratios for ISO beta sources
International Nuclear Information System (INIS)
The quantity of interest in external beta radiation protection is the absorbed dose rate to tissue at a depth of 7 mg/cm2, Dt(7 mg/cm2), in a 4-element ICRU (International Commission on Radiation Units and Measurements) unit-density tissue phantom. ISO (International Organization for Standardization) 6980-2 provides guidelines for establishing this quantity for beta emitters using an extrapolation chamber as a primary standard. ISO 6980-1 proposes two series of beta reference radiation fields, namely, series 1 and series 2. Series 1 covers 90Sr/90Y, 85Kr, 204Tl and 147Pm sources used with a beam flattening filter, and series 2 covers 14C and 106Ru/106Rh sources used with a beam flattening filter. Dt(7 mg/cm2) is realized from the measured current and a set of corrections, including the Bragg-Gray tissue-to-air mass-stopping-power ratio, (S/ρ)t,a. ISO provides (S/ρ)t,a values that are based on approximate methods. The present study is aimed at calculating (S/ρ)t,a for 90Sr/90Y, 85Kr, 106Ru/106Rh and 147Pm sources using Monte Carlo (MC) methods and comparing them against the ISO values. By definition, (S/ρ)t,a should be independent of the cavity length of the chamber, which was verified in this work.
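A Bragg-Gray stopping-power ratio of the kind computed above is the electron-fluence-weighted average of the tissue and air mass collision stopping powers. The sketch below shows the arithmetic; the fluence spectrum and stopping-power functions are made-up placeholders (in practice the fluence is scored by the MC code and the stopping powers come from tabulations such as ICRU 37):

```python
# Bragg-Gray tissue-to-air ratio as a fluence-weighted average:
#   (S/rho)_{t,a} = sum_i phi_i * s_tissue(E_i) / sum_i phi_i * s_air(E_i)
# phi_i: relative electron fluence in energy bin E_i,
# s_*(E): mass collision stopping power (MeV cm^2/g).
# All numerical values below are illustrative placeholders.

def bragg_gray_ratio(energies, fluence, s_tissue, s_air):
    num = sum(f * s_tissue(e) for e, f in zip(energies, fluence))
    den = sum(f * s_air(e) for e, f in zip(energies, fluence))
    return num / den

# Placeholder stopping-power models with a plausible ~1.1 ratio:
s_t = lambda e: 1.95 + 0.10 / e
s_a = lambda e: 1.75 + 0.09 / e

energies = [0.1, 0.3, 0.5, 1.0, 2.0]   # MeV energy grid
fluence  = [0.1, 0.3, 0.3, 0.2, 0.1]   # relative electron fluence

print(bragg_gray_ratio(energies, fluence, s_t, s_a))
```

Because the ratio is taken over the full fluence spectrum, it is, by the Bragg-Gray definition, independent of the chamber cavity length, which is the consistency check the study verified.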